
[ANNOUNCE] v5.4.24-rt15

Message ID 20200306223734.65nlltijm3cxbejz@linutronix.de

Commit Message

Sebastian Andrzej Siewior March 6, 2020, 10:37 p.m. UTC
Dear RT folks!

I'm pleased to announce the v5.4.24-rt15 patch set. 

Changes since v5.4.24-rt14:

  - A warning has been added to kmalloc(). On RT with
    CONFIG_DEBUG_ATOMIC_SLEEP enabled it triggers on memory allocations
    performed in atomic context.
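
    The condition the new check tests (visible in the diff below) can be
    modeled in plain userspace C. Note this is only an illustration: the
    preemptible() stub, the preempt counter and the trimmed system_state
    enum here are stand-ins, not the real kernel implementations.

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for the kernel's preempt count (illustration only). */
    static int preempt_count_v;

    /* Trimmed stand-in for the kernel's system_state enum. */
    enum { SYSTEM_BOOTING, SYSTEM_SCHEDULING, SYSTEM_RUNNING };
    static int system_state = SYSTEM_RUNNING;

    /* Preemptible means: not inside a preemption-disabled section. */
    static bool preemptible(void)
    {
        return preempt_count_v == 0;
    }

    /* The condition behind the new WARN_ON_ONCE() in slab_alloc_node():
     * warn only when the context is atomic AND the scheduler is already
     * up, so early-boot allocations do not trip it.
     */
    static bool would_warn(void)
    {
        return !preemptible() && system_state >= SYSTEM_SCHEDULING;
    }

    int main(void)
    {
        /* Normal preemptible context: no warning. */
        assert(!would_warn());

        /* Atomic context (e.g. preemption disabled): warning fires. */
        preempt_count_v = 1;
        assert(would_warn());

        /* Same atomic context during early boot: suppressed. */
        system_state = SYSTEM_BOOTING;
        assert(!would_warn());

        printf("ok\n");
        return 0;
    }
    ```

    The system_state >= SYSTEM_SCHEDULING clause is what keeps the warning
    quiet for allocations made before the scheduler is running.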

Known issues
     - It has been pointed out that due to changes to the printk code the
       internal buffer representation changed. This is only an issue if tools
       like `crash' are used to extract the printk buffer from a kernel memory
       image.

The delta patch against v5.4.24-rt14 is appended below and can be found here:
 
     https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.4/incr/patch-5.4.24-rt14-rt15.patch.xz

You can get this release via the git tree at:

    git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v5.4.24-rt15

The RT patch against v5.4.24 can be found here:

    https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.4/older/patch-5.4.24-rt15.patch.xz

The split quilt queue is available at:

    https://cdn.kernel.org/pub/linux/kernel/projects/rt/5.4/older/patches-5.4.24-rt15.tar.xz

Sebastian

Patch

diff --git a/localversion-rt b/localversion-rt
index 08b3e75841adc..18777ec0c27d4 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@ 
--rt14
+-rt15
diff --git a/mm/slub.c b/mm/slub.c
index 2cff48d13e3a7..7b2773c45e1ff 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2742,6 +2742,9 @@  static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	struct page *page;
 	unsigned long tid;
 
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && IS_ENABLED(CONFIG_DEBUG_ATOMIC_SLEEP))
+		WARN_ON_ONCE(!preemptible() && system_state >= SYSTEM_SCHEDULING);
+
 	s = slab_pre_alloc_hook(s, gfpflags);
 	if (!s)
 		return NULL;
@@ -3202,6 +3205,9 @@  int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	LIST_HEAD(to_free);
 	int i;
 
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && IS_ENABLED(CONFIG_DEBUG_ATOMIC_SLEEP))
+		WARN_ON_ONCE(!preemptible() && system_state >= SYSTEM_SCHEDULING);
+
 	/* memcg and kmem_cache debug support */
 	s = slab_pre_alloc_hook(s, flags);
 	if (unlikely(!s))