
[v2] freezer,sched: Use saved_state to reduce some spurious wakeups

Message ID 20230830-avoid-spurious-freezer-wakeups-v2-1-8877245cdbdc@quicinc.com
State New

Commit Message

Elliot Berman Aug. 30, 2023, 5:42 p.m. UTC
After commit f5d39b020809 ("freezer,sched: Rewrite core freezer logic"),
tasks that transition directly from TASK_FREEZABLE to TASK_FROZEN are
always woken up on the thaw path. Prior to that commit, tasks could ask
freezer to consider them "frozen enough" via freezer_do_not_count(). The
commit replaced freezer_do_not_count() with a TASK_FREEZABLE state which
allows freezer to immediately mark the task as TASK_FROZEN without
waking up the task.  This is efficient for the suspend path, but on the
thaw path, the task is always woken up even if it didn't need the
wakeup and simply returns to its TASK_(UN)INTERRUPTIBLE state. Although
these tasks are capable of handling the wakeup, we can observe a
power/perf impact from the extra wakeups.

On Android, we observed that many tasks wait in the TASK_FREEZABLE
state (particularly because many of them are binder clients). We saw
nearly 4x the number of tasks and a corresponding linear increase in
latency and power consumption when thawing the system. The latency
increased from ~15ms to ~50ms.

Avoid the spurious wakeups by saving the state of TASK_FREEZABLE tasks.
If the task was running before entering TASK_FROZEN state
(__refrigerator()) or if the task received a wake up for the saved
state, then the task is woken on thaw. saved_state from PREEMPT_RT locks
can be re-used because freezer would not stomp on the rtlock wait flow:
TASK_RTLOCK_WAIT isn't considered freezable.
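The intended save/restore flow can be sketched as a userspace model (illustrative only: the state bit values are arbitrary, the names loosely mirror the patch below, and the real kernel code runs under p::pi_lock):

```c
#include <assert.h>

/* Illustrative state bits; values are arbitrary for this model. */
#define TASK_RUNNING		0x0000
#define TASK_INTERRUPTIBLE	0x0001
#define TASK_FROZEN		0x8000

struct task { unsigned int state, saved_state; };

/* Freeze: remember the pre-freeze state, then mark the task frozen. */
static void set_task_frozen(struct task *p)
{
	p->saved_state = p->state;
	p->state = TASK_FROZEN;
}

/* Thaw: if a non-running state was saved, restore it quietly with no
 * wakeup. Otherwise the task was runnable when frozen (or a wakeup
 * arrived for its saved state), so wake it. Returns 1 on wakeup. */
static int thaw_task(struct task *p)
{
	if (p->saved_state != TASK_RUNNING) {
		p->state = p->saved_state;
		return 0;
	}
	p->state = TASK_RUNNING;	/* i.e. wake_up_state(p, TASK_FROZEN) */
	return 1;
}
```

A sleeping TASK_INTERRUPTIBLE task round-trips through freeze/thaw without a wakeup; a task that was running when frozen is woken as before.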

Reported-by: Prakash Viswalingam <quic_prakashv@quicinc.com>
Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
---
For testing purposes, these commands can help show how many tasks were
woken during thawing:

1. Setup:
   mkdir /sys/kernel/tracing/instances/freezer
   cd /sys/kernel/tracing/instances/freezer 
   echo 0 > tracing_on ; echo > trace
   echo power:suspend_resume > set_event
   echo 'enable_event:sched:sched_wakeup if action == "thaw_processes" && start == 1' > events/power/suspend_resume/trigger
   echo 'traceoff if action == "thaw_processes" && start == 0' > events/power/suspend_resume/trigger
   echo 1 > tracing_on

2. Let kernel go to suspend

3. After kernel's back up:
   cat /sys/kernel/tracing/instances/freezer/trace | grep sched_wakeup | grep -o "pid=[0-9]*" | sort -u | wc -l
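The counting step can be wrapped in a small helper (a sketch; the function name is made up here, and the trace path is the one configured in step 1):

```shell
# Count unique PIDs woken between the thaw_processes start/stop
# triggers, given a trace file captured with the setup above.
count_thaw_wakeups() {
	grep sched_wakeup "$1" | grep -o "pid=[0-9]*" | sort -u | wc -l
}

# Example:
# count_thaw_wakeups /sys/kernel/tracing/instances/freezer/trace
```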
---
Changes in v2:
- Rebase to v6.5 which includes commit 1c06918788e8 ("sched: Consider task_struct::saved_state in wait_task_inactive()")
  This allows us to remove the special jobctl handling on thaw.
- Link to v1: https://lore.kernel.org/r/20230828-avoid-spurious-freezer-wakeups-v1-1-8be8cf761472@quicinc.com
---
 include/linux/sched.h |  4 ++--
 kernel/freezer.c      | 41 +++++++++++++++++++----------------------
 kernel/sched/core.c   | 29 +++++++++++++++++------------
 3 files changed, 38 insertions(+), 36 deletions(-)


---
base-commit: 2dde18cd1d8fac735875f2e4987f11817cc0bc2c
change-id: 20230817-avoid-spurious-freezer-wakeups-9f8619680b3a

Best regards,

Comments

Peter Zijlstra Sept. 4, 2023, 9:23 p.m. UTC | #1
On Wed, Aug 30, 2023 at 10:42:39AM -0700, Elliot Berman wrote:

> Avoid the spurious wakeups by saving the state of TASK_FREEZABLE tasks.
> If the task was running before entering TASK_FROZEN state
> (__refrigerator()) or if the task received a wake up for the saved
> state, then the task is woken on thaw. saved_state from PREEMPT_RT locks
> can be re-used because freezer would not stomp on the rtlock wait flow:
> TASK_RTLOCK_WAIT isn't considered freezable.

You don't actually assert that anywhere I think, so the moment someone
makes that happen you crash and burn.

Also:

> -#ifdef CONFIG_PREEMPT_RT
> +#if IS_ENABLED(CONFIG_PREEMPT_RT) || IS_ENABLED(CONFIG_FREEZER)

That makes wakeup more horrible for everyone :/
Elliot Berman Sept. 5, 2023, 3:59 a.m. UTC | #2
On 9/4/2023 2:23 PM, Peter Zijlstra wrote:
> On Wed, Aug 30, 2023 at 10:42:39AM -0700, Elliot Berman wrote:
> 
>> Avoid the spurious wakeups by saving the state of TASK_FREEZABLE tasks.
>> If the task was running before entering TASK_FROZEN state
>> (__refrigerator()) or if the task received a wake up for the saved
>> state, then the task is woken on thaw. saved_state from PREEMPT_RT locks
>> can be re-used because freezer would not stomp on the rtlock wait flow:
>> TASK_RTLOCK_WAIT isn't considered freezable.
> 
> You don't actually assert that anywhere I think, so the moment someone
> makes that happen you crash and burn.
> 

I can certainly add an assertion on the freezer side.

> Also:
> 
>> -#ifdef CONFIG_PREEMPT_RT
>> +#if IS_ENABLED(CONFIG_PREEMPT_RT) || IS_ENABLED(CONFIG_FREEZER)
> 
> That makes wakeup more horrible for everyone :/

I don't think the hot wakeup path is significantly impacted because the 
added checks come after the hot path is already not taken.

wait_task_inactive() is impacted in the case of contention on pi_lock, 
but I don't think that is part of any hot path.

I'll run some further tests on my end to be sure about the wake up 
latency. Are there any benchmarks/tests you like for measuring the hot 
path? I can run those as well.

Thanks,
Elliot
Peter Zijlstra Sept. 7, 2023, 9:46 a.m. UTC | #3
On Mon, Sep 04, 2023 at 08:59:03PM -0700, Elliot Berman wrote:
> 
> 
> On 9/4/2023 2:23 PM, Peter Zijlstra wrote:
> > On Wed, Aug 30, 2023 at 10:42:39AM -0700, Elliot Berman wrote:
> > 
> > > Avoid the spurious wakeups by saving the state of TASK_FREEZABLE tasks.
> > > If the task was running before entering TASK_FROZEN state
> > > (__refrigerator()) or if the task received a wake up for the saved
> > > state, then the task is woken on thaw. saved_state from PREEMPT_RT locks
> > > can be re-used because freezer would not stomp on the rtlock wait flow:
> > > TASK_RTLOCK_WAIT isn't considered freezable.
> > 
> > You don't actually assert that anywhere I think, so the moment someone
> > makes that happen you crash and burn.
> > 
> 
> I can certainly add an assertion on the freezer side.

I think the assertion we have in ttwu_state_match() might be sufficient.

> > Also:
> > 
> > > -#ifdef CONFIG_PREEMPT_RT
> > > +#if IS_ENABLED(CONFIG_PREEMPT_RT) || IS_ENABLED(CONFIG_FREEZER)
> > 
> > That makes wakeup more horrible for everyone :/
> 
> I don't think the hot wakeup path is significantly impacted because the
> added checks come after the hot path is already not taken.

Perhaps we should start off by doing the below, instead of making it
more complicated instead. I suppose you're right about the overhead, but
run a hackbench just to make sure or something.


diff --git a/include/linux/sched.h b/include/linux/sched.h
index 77f01ac385f7..649ddb9adf0d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -749,11 +749,7 @@ struct task_struct {
 	struct thread_info		thread_info;
 #endif
 	unsigned int			__state;
-
-#ifdef CONFIG_PREEMPT_RT
-	/* saved state for "spinlock sleepers" */
 	unsigned int			saved_state;
-#endif
 
 	/*
 	 * This begins the randomizable portion of task_struct. Only
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2299a5cfbfb9..b566821614e1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2239,31 +2239,21 @@ int __task_state_match(struct task_struct *p, unsigned int state)
 	if (READ_ONCE(p->__state) & state)
 		return 1;
 
-#ifdef CONFIG_PREEMPT_RT
 	if (READ_ONCE(p->saved_state) & state)
 		return -1;
-#endif
+
 	return 0;
 }
 
 static __always_inline
 int task_state_match(struct task_struct *p, unsigned int state)
 {
-#ifdef CONFIG_PREEMPT_RT
-	int match;
-
 	/*
 	 * Serialize against current_save_and_set_rtlock_wait_state() and
 	 * current_restore_rtlock_saved_state().
 	 */
-	raw_spin_lock_irq(&p->pi_lock);
-	match = __task_state_match(p, state);
-	raw_spin_unlock_irq(&p->pi_lock);
-
-	return match;
-#else
+	guard(spin_lock_irq)(&p->pi_lock);
 	return __task_state_match(p, state);
-#endif
 }
 
 /*
@@ -4056,7 +4046,6 @@ bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
 
 	*success = !!(match = __task_state_match(p, state));
 
-#ifdef CONFIG_PREEMPT_RT
 	/*
 	 * Saved state preserves the task state across blocking on
 	 * an RT lock.  If the state matches, set p::saved_state to
@@ -4072,7 +4061,7 @@ bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
 	 */
 	if (match < 0)
 		p->saved_state = TASK_RUNNING;
-#endif
+
 	return match > 0;
 }
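The tri-state contract of __task_state_match() in the hunk above can be modeled in plain C (a userspace sketch operating on raw bitmasks rather than on a task_struct; bit values are arbitrary for the model):

```c
#include <assert.h>

#define TASK_RUNNING		0x0000
#define TASK_INTERRUPTIBLE	0x0001
#define TASK_FROZEN		0x8000

/* Returns 1 if the live state (p->__state) matches -> normal wakeup;
 * -1 if only the saved state (p->saved_state) matches -> record the
 * wakeup in saved_state but do not wake the task;
 * 0 if neither matches -> the wakeup is ignored. */
static int task_state_match_model(unsigned int cur, unsigned int saved,
				  unsigned int state)
{
	if (cur & state)
		return 1;
	if (saved & state)
		return -1;
	return 0;
}
```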
Elliot Berman Sept. 8, 2023, 8:08 p.m. UTC | #4
On 9/7/2023 2:46 AM, Peter Zijlstra wrote:
> On Mon, Sep 04, 2023 at 08:59:03PM -0700, Elliot Berman wrote:
>>
>>
>> On 9/4/2023 2:23 PM, Peter Zijlstra wrote:
>>> On Wed, Aug 30, 2023 at 10:42:39AM -0700, Elliot Berman wrote:
>>>
>>>> Avoid the spurious wakeups by saving the state of TASK_FREEZABLE tasks.
>>>> If the task was running before entering TASK_FROZEN state
>>>> (__refrigerator()) or if the task received a wake up for the saved
>>>> state, then the task is woken on thaw. saved_state from PREEMPT_RT locks
>>>> can be re-used because freezer would not stomp on the rtlock wait flow:
>>>> TASK_RTLOCK_WAIT isn't considered freezable.
>>>
>>> You don't actually assert that anywhere I think, so the moment someone
>>> makes that happen you crash and burn.
>>>
>>
>> I can certainly add an assertion on the freezer side.
> 
> I think the assertion we have in ttwu_state_match() might be sufficient.
> 

That assertion checks that you only try to wake up with
TASK_RTLOCK_WAIT and no other bits. I think it is probably good to also
have assertions that check that TASK_RTLOCK_WAIT and TASK_FROZEN are
exclusive bits. I can add these assertions (in a separate patch?), but
I think those checks would impact the hot path with the extra tests.

>>> Also:
>>>
>>>> -#ifdef CONFIG_PREEMPT_RT
>>>> +#if IS_ENABLED(CONFIG_PREEMPT_RT) || IS_ENABLED(CONFIG_FREEZER)
>>>
>>> That makes wakeup more horrible for everyone :/
>>
>> I don't think the hot wakeup path is significantly impacted because the
>> added checks come after the hot path is already not taken.
> 
> Perhaps we should start off by doing the below, instead of making it
> more complicated instead. I suppose you're right about the overhead, but
> run a hackbench just to make sure or something.
> 

I ran perf bench sched message -g 40 -l 40 with the v3 patch [1]. After 60
iterations each, I don't see a significant difference on my arm64 platform:
both samples ~normal and ~eq variance w/t-test p-value: 0.79.

We also ran typical high level benchmarks for our SoCs (antutu,
geekbench, et. al) and didn't see any regressions there.

[1]: https://lore.kernel.org/all/20230908-avoid-spurious-freezer-wakeups-v3-1-d49821fda04d@quicinc.com/

Thanks,
Elliot
Peter Zijlstra Sept. 8, 2023, 10:08 p.m. UTC | #5
On Fri, Sep 08, 2023 at 01:08:07PM -0700, Elliot Berman wrote:

> > Perhaps we should start off by doing the below, instead of making it
> > more complicated instead. I suppose you're right about the overhead, but
> > run a hackbench just to make sure or something.
> > 
> 
> I ran perf bench sched message -g 40 -l 40 with the v3 patch [1]. After 60
> iterations each, I don't see a significant difference on my arm64 platform:
> both samples ~normal and ~eq variance w/t-test p-value: 0.79.
> 
> We also ran typical high level benchmarks for our SoCs (antutu,
> geekbench, et. al) and didn't see any regressions there.

So if you would've made this 2 patches, the first removing the ifdef,
then the changelog for that patch would be a good place to mention it
doesn't measurably regress things.

As a bonus, it then makes your other changes smaller too ;-)
Elliot Berman Sept. 8, 2023, 10:30 p.m. UTC | #6
On 9/8/2023 3:08 PM, Peter Zijlstra wrote:
> On Fri, Sep 08, 2023 at 01:08:07PM -0700, Elliot Berman wrote:
> 
>>> Perhaps we should start off by doing the below, instead of making it
>>> more complicated instead. I suppose you're right about the overhead, but
>>> run a hackbench just to make sure or something.
>>>
>>
>> I ran perf bench sched message -g 40 -l 40 with the v3 patch [1]. After 60
>> iterations each, I don't see a significant difference on my arm64 platform:
>> both samples ~normal and ~eq variance w/t-test p-value: 0.79.
>>
>> We also ran typical high level benchmarks for our SoCs (antutu,
>> geekbench, et. al) and didn't see any regressions there.
> 
> So if you would've made this 2 patches, the first removing the ifdef,
> then the changelog for that patch would be a good place to mention it
> doesn't measurably regress things.

No problem, easily done.

> As a bonus, it then makes your other changes smaller too ;-)

Did you mean that each commit is smaller but overall delta is the same
or something else? I still wanted to update comments on saved_state in
kernel/sched/core.c as it gives good explanation of what is going on. I
have split the commit but want to make sure I make the changes you were
thinking :-)

Thanks,
Elliot
Peter Zijlstra Sept. 8, 2023, 10:48 p.m. UTC | #7
On Fri, Sep 08, 2023 at 03:30:43PM -0700, Elliot Berman wrote:
> 
> 
> On 9/8/2023 3:08 PM, Peter Zijlstra wrote:
> > On Fri, Sep 08, 2023 at 01:08:07PM -0700, Elliot Berman wrote:
> > 
> >>> Perhaps we should start off by doing the below, instead of making it
> >>> more complicated instead. I suppose you're right about the overhead, but
> >>> run a hackbench just to make sure or something.
> >>>
> >>
> >> I ran perf bench sched message -g 40 -l 40 with the v3 patch [1]. After 60
> >> iterations each, I don't see a significant difference on my arm64 platform:
> >> both samples ~normal and ~eq variance w/t-test p-value: 0.79.
> >>
> >> We also ran typical high level benchmarks for our SoCs (antutu,
> >> geekbench, et. al) and didn't see any regressions there.
> > 
> > So if you would've made this 2 patches, the first removing the ifdef,
> > then the changelog for that patch would be a good place to mention it
> > doesn't measurably regress things.
> 
> No problem, easily done.
> 
> > As a bonus, it then makes your other changes smaller too ;-)
> 
> Did you mean that each commit is smaller but overall delta is the same
> or something else? 

That.

> I still wanted to update comments on saved_state in
> kernel/sched/core.c as it gives good explanation of what is going on. I
> have split the commit but want to make sure I make the changes you were
> thinking :-)

well, it's nearly 1am, I'm not thinking very much :-) Changing those
comments seems fine when you add the freezer thing.
Elliot Berman Sept. 8, 2023, 11:17 p.m. UTC | #8
On 9/8/2023 3:48 PM, Peter Zijlstra wrote:
> On Fri, Sep 08, 2023 at 03:30:43PM -0700, Elliot Berman wrote:
>>
>>
>> On 9/8/2023 3:08 PM, Peter Zijlstra wrote:
>>> On Fri, Sep 08, 2023 at 01:08:07PM -0700, Elliot Berman wrote:
>>>
>>>>> Perhaps we should start off by doing the below, instead of making it
>>>>> more complicated instead. I suppose you're right about the overhead, but
>>>>> run a hackbench just to make sure or something.
>>>>>
>>>>
>>>> I ran perf bench sched message -g 40 -l 40 with the v3 patch [1]. After 60
>>>> iterations each, I don't see a significant difference on my arm64 platform:
>>>> both samples ~normal and ~eq variance w/t-test p-value: 0.79.
>>>>
>>>> We also ran typical high level benchmarks for our SoCs (antutu,
>>>> geekbench, et. al) and didn't see any regressions there.
>>>
>>> So if you would've made this 2 patches, the first removing the ifdef,
>>> then the changelog for that patch would be a good place to mention it
>>> doesn't measurably regress things.
>>
>> No problem, easily done.
>>
>>> As a bonus, it then makes your other changes smaller too ;-)
>>
>> Did you mean that each commit is smaller but overall delta is the same
>> or something else? 
> 
> That.
> 
>> I still wanted to update comments on saved_state in
>> kernel/sched/core.c as it gives good explanation of what is going on. I
>> have split the commit but want to make sure I make the changes you were
>> thinking :-)
> 
> well, it's nearly 1am, I'm not thinking very much :-) Changing those
> comments seems fine when you add the freezer thing.

I was wondering what time zone you are in, I saw your previous replies
are early in my morning. I think you are giving Greg a run for his money
with responses at all hours :-) 

I sent v4 with the changes split:
https://lore.kernel.org/all/20230908-avoid-spurious-freezer-wakeups-v4-0-6155aa3dafae@quicinc.com/
Peter Zijlstra Sept. 9, 2023, 9:29 a.m. UTC | #9
On Fri, Sep 08, 2023 at 04:17:09PM -0700, Elliot Berman wrote:

> I was wondering what time zone you are in, I saw your previous replies
> are early in my morning. I think you are giving Greg a run for his money
> with responses at all hours :-) 

Hehe, Greg and me are both in .nl. Me being an actual native here and
Greg is an expat, but he seems to enjoy the country :-)

Patch

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 609bde814cb0..33124daa8f5e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -745,8 +745,8 @@  struct task_struct {
 #endif
 	unsigned int			__state;
 
-#ifdef CONFIG_PREEMPT_RT
-	/* saved state for "spinlock sleepers" */
+#if IS_ENABLED(CONFIG_PREEMPT_RT) || IS_ENABLED(CONFIG_FREEZER)
+	/* saved state for "spinlock sleepers" and freezer */
 	unsigned int			saved_state;
 #endif
 
diff --git a/kernel/freezer.c b/kernel/freezer.c
index 4fad0e6fca64..c450fa8b8b5e 100644
--- a/kernel/freezer.c
+++ b/kernel/freezer.c
@@ -71,7 +71,11 @@  bool __refrigerator(bool check_kthr_stop)
 	for (;;) {
 		bool freeze;
 
+		raw_spin_lock_irq(&current->pi_lock);
 		set_current_state(TASK_FROZEN);
+		/* unstale saved_state so that __thaw_task() will wake us up */
+		current->saved_state = TASK_RUNNING;
+		raw_spin_unlock_irq(&current->pi_lock);
 
 		spin_lock_irq(&freezer_lock);
 		freeze = freezing(current) && !(check_kthr_stop && kthread_should_stop());
@@ -129,6 +133,7 @@  static int __set_task_frozen(struct task_struct *p, void *arg)
 		WARN_ON_ONCE(debug_locks && p->lockdep_depth);
 #endif
 
+	p->saved_state = p->__state;
 	WRITE_ONCE(p->__state, TASK_FROZEN);
 	return TASK_FROZEN;
 }
@@ -170,42 +175,34 @@  bool freeze_task(struct task_struct *p)
 }
 
 /*
- * The special task states (TASK_STOPPED, TASK_TRACED) keep their canonical
- * state in p->jobctl. If either of them got a wakeup that was missed because
- * TASK_FROZEN, then their canonical state reflects that and the below will
- * refuse to restore the special state and instead issue the wakeup.
+ * Restore the saved_state before the task entered freezer. For typical task
+ * in the __refrigerator(), saved_state == TASK_RUNNING so nothing happens
+ * here. For tasks which were TASK_NORMAL | TASK_FREEZABLE, their initial state
+ * is restored unless they got an expected wakeup (see ttwu_state_match()).
+ * Returns 1 if the task state was restored.
  */
-static int __set_task_special(struct task_struct *p, void *arg)
+static int __restore_freezer_state(struct task_struct *p, void *arg)
 {
-	unsigned int state = 0;
+	unsigned int state = p->saved_state;
 
-	if (p->jobctl & JOBCTL_TRACED)
-		state = TASK_TRACED;
-
-	else if (p->jobctl & JOBCTL_STOPPED)
-		state = TASK_STOPPED;
-
-	if (state)
+	if (state != TASK_RUNNING) {
 		WRITE_ONCE(p->__state, state);
+		return 1;
+	}
 
-	return state;
+	return 0;
 }
 
 void __thaw_task(struct task_struct *p)
 {
-	unsigned long flags, flags2;
+	unsigned long flags;
 
 	spin_lock_irqsave(&freezer_lock, flags);
 	if (WARN_ON_ONCE(freezing(p)))
 		goto unlock;
 
-	if (lock_task_sighand(p, &flags2)) {
-		/* TASK_FROZEN -> TASK_{STOPPED,TRACED} */
-		bool ret = task_call_func(p, __set_task_special, NULL);
-		unlock_task_sighand(p, &flags2);
-		if (ret)
-			goto unlock;
-	}
+	if (task_call_func(p, __restore_freezer_state, NULL))
+		goto unlock;
 
 	wake_up_state(p, TASK_FROZEN);
 unlock:
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c52c2eba7c73..085624e3098d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2219,7 +2219,7 @@  int __task_state_match(struct task_struct *p, unsigned int state)
 	if (READ_ONCE(p->__state) & state)
 		return 1;
 
-#ifdef CONFIG_PREEMPT_RT
+#if IS_ENABLED(CONFIG_PREEMPT_RT) || IS_ENABLED(CONFIG_FREEZER)
 	if (READ_ONCE(p->saved_state) & state)
 		return -1;
 #endif
@@ -2229,12 +2229,12 @@  int __task_state_match(struct task_struct *p, unsigned int state)
 static __always_inline
 int task_state_match(struct task_struct *p, unsigned int state)
 {
-#ifdef CONFIG_PREEMPT_RT
+#if IS_ENABLED(CONFIG_PREEMPT_RT) || IS_ENABLED(CONFIG_FREEZER)
 	int match;
 
 	/*
-	 * Serialize against current_save_and_set_rtlock_wait_state() and
-	 * current_restore_rtlock_saved_state().
+	 * Serialize against current_save_and_set_rtlock_wait_state(),
+	 * current_restore_rtlock_saved_state(), and __refrigerator().
 	 */
 	raw_spin_lock_irq(&p->pi_lock);
 	match = __task_state_match(p, state);
@@ -4033,13 +4033,17 @@  static void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
  * The caller holds p::pi_lock if p != current or has preemption
  * disabled when p == current.
  *
- * The rules of PREEMPT_RT saved_state:
+ * The rules of saved_state:
  *
  *   The related locking code always holds p::pi_lock when updating
  *   p::saved_state, which means the code is fully serialized in both cases.
  *
- *   The lock wait and lock wakeups happen via TASK_RTLOCK_WAIT. No other
- *   bits set. This allows to distinguish all wakeup scenarios.
+ *   For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
+ *   No other bits set. This allows to distinguish all wakeup scenarios.
+ *
+ *   For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
+ *   allows us to prevent early wakeup of tasks before they can be run on
+ *   asymmetric ISA architectures (eg ARMv9).
  */
 static __always_inline
 bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
@@ -4053,13 +4057,14 @@  bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
 
 	*success = !!(match = __task_state_match(p, state));
 
-#ifdef CONFIG_PREEMPT_RT
+#if IS_ENABLED(CONFIG_PREEMPT_RT) || IS_ENABLED(CONFIG_FREEZER)
 	/*
 	 * Saved state preserves the task state across blocking on
-	 * an RT lock.  If the state matches, set p::saved_state to
-	 * TASK_RUNNING, but do not wake the task because it waits
-	 * for a lock wakeup. Also indicate success because from
-	 * the regular waker's point of view this has succeeded.
+	 * an RT lock or TASK_FREEZABLE tasks.  If the state matches,
+	 * set p::saved_state to TASK_RUNNING, but do not wake the task
+	 * because it waits for a lock wakeup or __thaw_task(). Also
+	 * indicate success because from the regular waker's point of
+	 * view this has succeeded.
 	 *
 	 * After acquiring the lock the task will restore p::__state
 	 * from p::saved_state which ensures that the regular