From patchwork Thu Nov 3 07:55:45 2022
X-Patchwork-Submitter: Daniel Wagner
X-Patchwork-Id: 622564
From: Daniel Wagner
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Tom Zanussi, Clark Williams, Pavel Machek
Cc: Anna-Maria Gleixner, Peter Zijlstra, Daniel Wagner
Subject: [PATCH RT 1/4] timers:
Prepare support for PREEMPT_RT
Date: Thu, 3 Nov 2022 08:55:45 +0100
Message-Id: <20221103075548.6477-2-wagi@monom.org>
In-Reply-To: <20221103075548.6477-1-wagi@monom.org>
References: <20221103075548.6477-1-wagi@monom.org>
X-Mailing-List: linux-rt-users@vger.kernel.org

From: Anna-Maria Gleixner

v4.19.255-rt114-rc2 stable review patch. If anyone has any objections, please let me know.

-----------

Upstream commit 030dcdd197d77374879bb5603d091eee7d8aba80

When PREEMPT_RT is enabled, the soft interrupt thread can be preempted. If the soft interrupt thread is preempted in the middle of a timer callback, then calling del_timer_sync() can lead to two issues:

  - If the caller is on a remote CPU, it has to spin-wait for the timer handler to complete. This can result in unbounded priority inversion.

  - If the caller originates from the task which preempted the timer handler on the same CPU, then spin-waiting for the timer handler to complete is never going to end.

To avoid these issues, add a new lock to the timer base which is held around the execution of the timer callbacks. If del_timer_sync() detects that the timer callback is currently running, it blocks on the expiry lock. When the callback is finished, the expiry lock is dropped by the softirq thread, which wakes up the waiter and the system makes progress.

This addresses both the priority inversion and the live lock issues.

This mechanism is not used for timers which are marked IRQSAFE, as for those preemption is disabled across the callback and therefore this situation cannot happen. The callbacks for such timers need to be individually audited for RT compliance.

The same issue can happen in virtual machines when the vCPU which runs a timer callback is scheduled out. If a second vCPU of the same guest calls del_timer_sync(), it will spin-wait for the other vCPU to be scheduled back in. The expiry lock mechanism would avoid that.
It'd be trivial to enable this when paravirt spinlocks are enabled in a guest, but it's not clear whether this is an actual problem in the wild, so for now it's an RT only mechanism.

As the softirq thread can be preempted with PREEMPT_RT=y, the SMP variant of del_timer_sync() needs to be used on UP as well.

[ tglx: Refactored it for mainline ]

Signed-off-by: Anna-Maria Gleixner
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Acked-by: Peter Zijlstra (Intel)
Link: https://lkml.kernel.org/r/20190726185753.832418500@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Daniel Wagner
---
 kernel/time/timer.c | 130 ++++++++++++++++++++++++++++++--------------
 1 file changed, 88 insertions(+), 42 deletions(-)

diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 0a6d60b3e67c..b859ecf6424b 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -198,7 +198,10 @@ EXPORT_SYMBOL(jiffies_64);
 struct timer_base {
 	raw_spinlock_t		lock;
 	struct timer_list	*running_timer;
+#ifdef CONFIG_PREEMPT_RT
 	spinlock_t		expiry_lock;
+	atomic_t		timer_waiters;
+#endif
 	unsigned long		clk;
 	unsigned long		next_expiry;
 	unsigned int		cpu;
@@ -1227,8 +1230,14 @@ int del_timer(struct timer_list *timer)
 }
 EXPORT_SYMBOL(del_timer);
 
-static int __try_to_del_timer_sync(struct timer_list *timer,
-				   struct timer_base **basep)
+/**
+ * try_to_del_timer_sync - Try to deactivate a timer
+ * @timer: timer to delete
+ *
+ * This function tries to deactivate a timer. Upon successful (ret >= 0)
+ * exit the timer is not queued and the handler is not running on any CPU.
+ */
+int try_to_del_timer_sync(struct timer_list *timer)
 {
 	struct timer_base *base;
 	unsigned long flags;
@@ -1236,7 +1245,7 @@ static int __try_to_del_timer_sync(struct timer_list *timer,
 
 	debug_assert_init(timer);
 
-	*basep = base = lock_timer_base(timer, &flags);
+	base = lock_timer_base(timer, &flags);
 	if (base->running_timer != timer)
 		ret = detach_if_pending(timer, base, true);
 
@@ -1245,45 +1254,80 @@ static int __try_to_del_timer_sync(struct timer_list *timer,
 
 	return ret;
 }
+EXPORT_SYMBOL(try_to_del_timer_sync);
 
-/**
- * try_to_del_timer_sync - Try to deactivate a timer
- * @timer: timer to delete
- *
- * This function tries to deactivate a timer. Upon successful (ret >= 0)
- * exit the timer is not queued and the handler is not running on any CPU.
- */
-int try_to_del_timer_sync(struct timer_list *timer)
+#ifdef CONFIG_PREEMPT_RT
+static __init void timer_base_init_expiry_lock(struct timer_base *base)
 {
-	struct timer_base *base;
+	spin_lock_init(&base->expiry_lock);
+}
 
-	return __try_to_del_timer_sync(timer, &base);
+static inline void timer_base_lock_expiry(struct timer_base *base)
+{
+	spin_lock(&base->expiry_lock);
 }
-EXPORT_SYMBOL(try_to_del_timer_sync);
 
-#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
-static int __del_timer_sync(struct timer_list *timer)
+static inline void timer_base_unlock_expiry(struct timer_base *base)
 {
-	struct timer_base *base;
-	int ret;
+	spin_unlock(&base->expiry_lock);
+}
 
-	for (;;) {
-		ret = __try_to_del_timer_sync(timer, &base);
-		if (ret >= 0)
-			return ret;
+/*
+ * The counterpart to del_timer_wait_running().
+ *
+ * If there is a waiter for base->expiry_lock, then it was waiting for the
+ * timer callback to finish. Drop expiry_lock and reacquire it. That allows
+ * the waiter to acquire the lock and make progress.
+ */
+static void timer_sync_wait_running(struct timer_base *base)
+{
+	if (atomic_read(&base->timer_waiters)) {
+		spin_unlock(&base->expiry_lock);
+		spin_lock(&base->expiry_lock);
+	}
+}
 
-		if (READ_ONCE(timer->flags) & TIMER_IRQSAFE)
-			continue;
+/*
+ * This function is called on PREEMPT_RT kernels when the fast path
+ * deletion of a timer failed because the timer callback function was
+ * running.
+ *
+ * This prevents priority inversion, if the softirq thread on a remote CPU
+ * got preempted, and it prevents a live lock when the task which tries to
+ * delete a timer preempted the softirq thread running the timer callback
+ * function.
+ */
+static void del_timer_wait_running(struct timer_list *timer)
+{
+	u32 tf;
+
+	tf = READ_ONCE(timer->flags);
+	if (!(tf & TIMER_MIGRATING)) {
+		struct timer_base *base = get_timer_base(tf);
 
 		/*
-		 * When accessing the lock, timers of base are no longer expired
-		 * and so timer is no longer running.
+		 * Mark the base as contended and grab the expiry lock,
+		 * which is held by the softirq across the timer
+		 * callback. Drop the lock immediately so the softirq can
+		 * expire the next timer. In theory the timer could already
+		 * be running again, but that's more than unlikely and just
+		 * causes another wait loop.
		 */
-		spin_lock(&base->expiry_lock);
-		spin_unlock(&base->expiry_lock);
+		atomic_inc(&base->timer_waiters);
+		spin_lock_bh(&base->expiry_lock);
+		atomic_dec(&base->timer_waiters);
+		spin_unlock_bh(&base->expiry_lock);
 	}
 }
+#else
+static inline void timer_base_init_expiry_lock(struct timer_base *base) { }
+static inline void timer_base_lock_expiry(struct timer_base *base) { }
+static inline void timer_base_unlock_expiry(struct timer_base *base) { }
+static inline void timer_sync_wait_running(struct timer_base *base) { }
+static inline void del_timer_wait_running(struct timer_list *timer) { }
+#endif
 
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT_RT_FULL)
 /**
  * del_timer_sync - deactivate a timer and wait for the handler to finish.
  * @timer: the timer to be deactivated
@@ -1322,6 +1366,8 @@ static int __del_timer_sync(struct timer_list *timer)
  */
 int del_timer_sync(struct timer_list *timer)
 {
+	int ret;
+
 #ifdef CONFIG_LOCKDEP
 	unsigned long flags;
 
@@ -1339,14 +1385,17 @@ int del_timer_sync(struct timer_list *timer)
 	 * could lead to deadlock.
 	 */
 	WARN_ON(in_irq() && !(timer->flags & TIMER_IRQSAFE));
-	/*
-	 * Must be able to sleep on PREEMPT_RT because of the slowpath in
-	 * __del_timer_sync().
-	 */
-	if (IS_ENABLED(CONFIG_PREEMPT_RT) && !(timer->flags & TIMER_IRQSAFE))
-		might_sleep();
 
-	return __del_timer_sync(timer);
+	do {
+		ret = try_to_del_timer_sync(timer);
+
+		if (unlikely(ret < 0)) {
+			del_timer_wait_running(timer);
+			cpu_relax();
+		}
+	} while (ret < 0);
+
+	return ret;
 }
 EXPORT_SYMBOL(del_timer_sync);
 #endif
@@ -1410,15 +1459,12 @@ static void expire_timers(struct timer_base *base, struct hlist_head *head)
 			raw_spin_unlock(&base->lock);
 			call_timer_fn(timer, fn);
 			base->running_timer = NULL;
-			spin_unlock(&base->expiry_lock);
-			spin_lock(&base->expiry_lock);
 			raw_spin_lock(&base->lock);
 		} else {
 			raw_spin_unlock_irq(&base->lock);
 			call_timer_fn(timer, fn);
 			base->running_timer = NULL;
-			spin_unlock(&base->expiry_lock);
-			spin_lock(&base->expiry_lock);
+			timer_sync_wait_running(base);
 			raw_spin_lock_irq(&base->lock);
 		}
 	}
@@ -1715,7 +1761,7 @@ static inline void __run_timers(struct timer_base *base)
 	if (!time_after_eq(jiffies, base->clk))
 		return;
 
-	spin_lock(&base->expiry_lock);
+	timer_base_lock_expiry(base);
 	raw_spin_lock_irq(&base->lock);
 
 	/*
@@ -1743,7 +1789,7 @@ static inline void __run_timers(struct timer_base *base)
 		expire_timers(base, heads + levels);
 	}
 	raw_spin_unlock_irq(&base->lock);
-	spin_unlock(&base->expiry_lock);
+	timer_base_unlock_expiry(base);
 }
 
 /*
@@ -1990,7 +2036,7 @@ static void __init init_timer_cpu(int cpu)
 		base->cpu = cpu;
 		raw_spin_lock_init(&base->lock);
 		base->clk = jiffies;
-		spin_lock_init(&base->expiry_lock);
+		timer_base_init_expiry_lock(base);
 	}
 }

From patchwork Thu Nov 3 07:55:46 2022
X-Patchwork-Submitter: Daniel Wagner
X-Patchwork-Id: 621314
Received:
(majordomo@vger.kernel.org) by vger.kernel.org via listexpand; Thu, 3 Nov 2022 03:56:43 -0400
From: Daniel Wagner
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Tom Zanussi, Clark Williams, Pavel Machek
Cc: syzbot+aa7c2385d46c5eba0b89@syzkaller.appspotmail.com, syzbot+abea4558531bae1ba9fe@syzkaller.appspotmail.com, stable@vger.kernel.org, Daniel Wagner
Subject: [PATCH RT 2/4] timers: Move clearing of base::timer_running under base::lock
Date: Thu, 3 Nov 2022 08:55:46 +0100
Message-Id: <20221103075548.6477-3-wagi@monom.org>
In-Reply-To: <20221103075548.6477-1-wagi@monom.org>
References: <20221103075548.6477-1-wagi@monom.org>
X-Mailing-List: linux-rt-users@vger.kernel.org

From: Thomas Gleixner

v4.19.255-rt114-rc2 stable review patch. If anyone has any objections, please let me know.

-----------

Upstream commit bb7262b295472eb6858b5c49893954794027cd84

syzbot reported KCSAN data races vs. timer_base::timer_running being set to NULL without holding base::lock in expire_timers().

This looks innocent and most reads are clearly not problematic, but Frederic identified an issue which is:

 int data = 0;

 void timer_func(struct timer_list *t)
 {
    data = 1;
 }

 CPU 0                                            CPU 1
 ------------------------------                   --------------------------
 base = lock_timer_base(timer, &flags);           raw_spin_unlock(&base->lock);
 if (base->running_timer != timer)                call_timer_fn(timer, fn, baseclk);
   ret = detach_if_pending(timer, base, true);    base->running_timer = NULL;
 raw_spin_unlock_irqrestore(&base->lock, flags);  raw_spin_lock(&base->lock);

 x = data;

If the timer has previously executed on CPU 1, then CPU 0 can observe base->running_timer == NULL and return, assuming the timer has completed, but that is not guaranteed on all architectures. The comment for del_timer_sync() makes that guarantee. Moving the assignment under base->lock prevents this.

For non-RT kernels it's performance-wise completely irrelevant whether the store happens before or after taking the lock. For an RT kernel moving the store under the lock requires an extra unlock/lock pair in the case that there is a waiter for the timer, but that's not the end of the world.
Reported-by: syzbot+aa7c2385d46c5eba0b89@syzkaller.appspotmail.com
Reported-by: syzbot+abea4558531bae1ba9fe@syzkaller.appspotmail.com
Fixes: 030dcdd197d7 ("timers: Prepare support for PREEMPT_RT")
Signed-off-by: Thomas Gleixner
Tested-by: Sebastian Andrzej Siewior
Link: https://lore.kernel.org/r/87lfea7gw8.fsf@nanos.tec.linutronix.de
Cc: stable@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Daniel Wagner
---
 kernel/time/timer.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index b859ecf6424b..603985720f54 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1282,8 +1282,10 @@ static inline void timer_base_unlock_expiry(struct timer_base *base)
 static void timer_sync_wait_running(struct timer_base *base)
 {
 	if (atomic_read(&base->timer_waiters)) {
+		raw_spin_unlock_irq(&base->lock);
 		spin_unlock(&base->expiry_lock);
 		spin_lock(&base->expiry_lock);
+		raw_spin_lock_irq(&base->lock);
 	}
 }
 
@@ -1458,14 +1460,14 @@ static void expire_timers(struct timer_base *base, struct hlist_head *head)
 		if (timer->flags & TIMER_IRQSAFE) {
 			raw_spin_unlock(&base->lock);
 			call_timer_fn(timer, fn);
-			base->running_timer = NULL;
 			raw_spin_lock(&base->lock);
+			base->running_timer = NULL;
 		} else {
 			raw_spin_unlock_irq(&base->lock);
 			call_timer_fn(timer, fn);
+			raw_spin_lock_irq(&base->lock);
 			base->running_timer = NULL;
 			timer_sync_wait_running(base);
-			raw_spin_lock_irq(&base->lock);
 		}
 	}
 }

From patchwork Thu Nov 3 07:55:47 2022
X-Patchwork-Submitter: Daniel Wagner
X-Patchwork-Id: 622563
From: Daniel Wagner
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Tom Zanussi, Clark Williams, Pavel Machek
Cc: Mike Galbraith, Daniel Wagner
Subject: [PATCH RT 3/4] timers: Don't block on ->expiry_lock for TIMER_IRQSAFE timers
Date: Thu, 3 Nov 2022 08:55:47 +0100
Message-Id: <20221103075548.6477-4-wagi@monom.org>
In-Reply-To: <20221103075548.6477-1-wagi@monom.org>
References: <20221103075548.6477-1-wagi@monom.org>
X-Mailing-List: linux-rt-users@vger.kernel.org

From: Sebastian Andrzej Siewior

v4.19.255-rt114-rc2 stable review patch. If anyone has any objections, please let me know.
-----------

Upstream commit c725dafc95f1b37027840aaeaa8b7e4e9cd20516

PREEMPT_RT does not spin and wait until a running timer completes its callback; instead it blocks on a sleeping lock to prevent a livelock in the case that the task waiting for the callback completion preempted the callback.

This cannot be done for timers flagged with TIMER_IRQSAFE. These timers can be canceled from an interrupt-disabled context even on RT kernels. The expiry callback of such timers is invoked with interrupts disabled, so there is no need to use the expiry lock mechanism, because obviously the callback cannot be preempted even on RT kernels.

Do not use the timer_base::expiry_lock mechanism when waiting for a running callback to complete if the timer is flagged with TIMER_IRQSAFE.

Also add a lockdep assertion for RT kernels to validate that the expiry lock mechanism is always invoked in preemptible context.

[ bigeasy: Dropping that lockdep_assert_preemption_enabled() check in backport ]

Reported-by: Mike Galbraith
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Link: https://lore.kernel.org/r/20201103190937.hga67rqhvknki3tp@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Daniel Wagner
---
 kernel/time/timer.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 603985720f54..8c7bfcee9609 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1304,7 +1304,7 @@ static void del_timer_wait_running(struct timer_list *timer)
 	u32 tf;
 
 	tf = READ_ONCE(timer->flags);
-	if (!(tf & TIMER_MIGRATING)) {
+	if (!(tf & (TIMER_MIGRATING | TIMER_IRQSAFE))) {
 		struct timer_base *base = get_timer_base(tf);
 
@@ -1388,6 +1388,15 @@ int del_timer_sync(struct timer_list *timer)
 	 */
 	WARN_ON(in_irq() && !(timer->flags & TIMER_IRQSAFE));
 
+	/*
+	 * Must be able to sleep on PREEMPT_RT because of the slowpath in
+	 * del_timer_wait_running().
+	 */
+#if 0
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && !(timer->flags & TIMER_IRQSAFE))
+		lockdep_assert_preemption_enabled();
+#endif
+
 	do {
 		ret = try_to_del_timer_sync(timer);
 

From patchwork Thu Nov 3 07:55:48 2022
X-Patchwork-Submitter: Daniel Wagner
X-Patchwork-Id: 622562
From: Daniel Wagner
To: LKML, linux-rt-users, Steven Rostedt, Thomas Gleixner, Carsten Emde, John Kacur, Sebastian Andrzej Siewior, Tom Zanussi, Clark Williams, Pavel Machek
Cc: Daniel Wagner
Subject: [PATCH RT 4/4] Linux 4.19.255-rt114-rc2
Date: Thu, 3 Nov 2022 08:55:48 +0100
Message-Id: <20221103075548.6477-5-wagi@monom.org>
In-Reply-To: <20221103075548.6477-1-wagi@monom.org>
References: <20221103075548.6477-1-wagi@monom.org>
X-Mailing-List: linux-rt-users@vger.kernel.org

v4.19.255-rt114-rc2 stable review patch. If anyone has any objections, please let me know.

-----------

Signed-off-by: Daniel Wagner
---
 localversion-rt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/localversion-rt b/localversion-rt
index fdcd9167ca0b..02781f8a2f80 100644
--- a/localversion-rt
+++ b/localversion-rt
@@ -1 +1 @@
--rt113
+-rt114-rc2