
[v7] PM: sleep: Expose last succeeded resumed timestamp in sysfs

Message ID 170359669607.1864392.5078004271237566637.stgit@mhiramat.roam.corp.google.com
State New

Commit Message

Masami Hiramatsu (Google) Dec. 26, 2023, 1:18 p.m. UTC
From: Masami Hiramatsu <mhiramat@kernel.org>

Expose the last successful resume timestamp as the last_success_resume_time
attribute of suspend_stats in sysfs, so that user space can use this
timestamp as a reference point for the start of user-space resume.

On some systems, such as ChromeOS, system suspend and resume are
controlled by a power management process, and the other user-space tasks
are notified of suspend and resume by that process.
To improve suspend/resume performance and to find regressions, we would
like to know how long the resume process takes in the kernel and in
user space.
For this purpose, expose the accurate time at which the kernel finished
resuming, so that the kernel-resume and user-space-resume durations can
be distinguished.

This suspend_stats attribute is easy to access and exposes only the
timestamp, in CLOCK_MONOTONIC. User space can find the accurate time at
which the kernel finished resuming its drivers/subsystems and started
thawing processes, and measure the elapsed time from that point to a
user-space action (e.g. displaying the UI).
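
For example, user space can read this attribute and compare it with the
current CLOCK_MONOTONIC time. A minimal sketch (not part of this patch,
assuming the <seconds>.<nanoseconds> format produced by the new attribute):

  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
          long long sec, nsec;
          struct timespec now;
          FILE *f = fopen("/sys/power/suspend_stats/last_success_resume_time", "r");

          /* Parse the kernel's CLOCK_MONOTONIC timestamp of the last resume. */
          if (!f || fscanf(f, "%lld.%lld", &sec, &nsec) != 2)
                  return 1;
          fclose(f);

          /* Compare with the current CLOCK_MONOTONIC time in user space. */
          clock_gettime(CLOCK_MONOTONIC, &now);
          printf("time since kernel resume finished: %.3f ms\n",
                 (now.tv_sec - sec) * 1e3 + (now.tv_nsec - nsec) / 1e6);
          return 0;
  }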

Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 Changes in v7:
  - Update patch description.
  - Update sysfs documentation to say the exact timing.
  - Update the comment.
 Changes in v6:
  - Fix to record resume time before thawing user processes.
 Changes in v5:
  - Just updated for v6.7-rc3.
 Changes in v4.1:
  - Fix document typo (again).
 Changes in v4:
  - Update description to add why.
  - Fix document typo.
 Changes in v3:
  - Add (unsigned long long) casting for %llu.
  - Add a line after last_success_resume_time_show().
 Changes in v2:
  - Use %llu instead of %lu for printing u64 value.
  - Remove unneeded indent spaces from the last_success_resume_time
    line in the debugfs suspend_stat file.
---
 Documentation/ABI/testing/sysfs-power |   11 +++++++++++
 include/linux/suspend.h               |    2 ++
 kernel/power/main.c                   |   15 +++++++++++++++
 kernel/power/suspend.c                |    9 +++++++++
 4 files changed, 37 insertions(+)

Comments

Masami Hiramatsu (Google) Jan. 17, 2024, 12:07 a.m. UTC | #1
Gentle ping.

I would like to know whether this is enough or whether I should add more info/updates.

Thank you,

On Tue, 26 Dec 2023 22:18:16 +0900
"Masami Hiramatsu (Google)" <mhiramat@kernel.org> wrote:

> From: Masami Hiramatsu <mhiramat@kernel.org>
> 
> Expose the last successful resume timestamp as the last_success_resume_time
> attribute of suspend_stats in sysfs, so that user space can use this
> timestamp as a reference point for the start of user-space resume.
> 
> On some systems, such as ChromeOS, system suspend and resume are
> controlled by a power management process, and the other user-space tasks
> are notified of suspend and resume by that process.
> To improve suspend/resume performance and to find regressions, we would
> like to know how long the resume process takes in the kernel and in
> user space.
> For this purpose, expose the accurate time at which the kernel finished
> resuming, so that the kernel-resume and user-space-resume durations can
> be distinguished.
> 
> This suspend_stats attribute is easy to access and exposes only the
> timestamp, in CLOCK_MONOTONIC. User space can find the accurate time at
> which the kernel finished resuming its drivers/subsystems and started
> thawing processes, and measure the elapsed time from that point to a
> user-space action (e.g. displaying the UI).
> 
> Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> ---
>  Changes in v7:
>   - Update patch description.
>   - Update sysfs documentation to say the exact timing.
>   - Update the comment.
>  Changes in v6:
>   - Fix to record resume time before thawing user processes.
>  Changes in v5:
>   - Just updated for v6.7-rc3.
>  Changes in v4.1:
>   - Fix document typo (again).
>  Changes in v4:
>   - Update description to add why.
>   - Fix document typo.
>  Changes in v3:
>   - Add (unsigned long long) casting for %llu.
>   - Add a line after last_success_resume_time_show().
>  Changes in v2:
>   - Use %llu instead of %lu for printing u64 value.
>   - Remove unneeded indent spaces from the last_success_resume_time
>     line in the debugfs suspend_stat file.
> ---
>  Documentation/ABI/testing/sysfs-power |   11 +++++++++++
>  include/linux/suspend.h               |    2 ++
>  kernel/power/main.c                   |   15 +++++++++++++++
>  kernel/power/suspend.c                |    9 +++++++++
>  4 files changed, 37 insertions(+)
> 
> diff --git a/Documentation/ABI/testing/sysfs-power b/Documentation/ABI/testing/sysfs-power
> index a3942b1036e2..ee567e7e9d4a 100644
> --- a/Documentation/ABI/testing/sysfs-power
> +++ b/Documentation/ABI/testing/sysfs-power
> @@ -442,6 +442,17 @@ Description:
>  		'total_hw_sleep' and 'last_hw_sleep' may not be accurate.
>  		This number is measured in microseconds.
>  
> +What:		/sys/power/suspend_stats/last_success_resume_time
> +Date:		Dec 2023
> +Contact:	Masami Hiramatsu <mhiramat@kernel.org>
> +Description:
> +		The /sys/power/suspend_stats/last_success_resume_time file
> +		contains the timestamp of when the kernel successfully
> +		resumed drivers/subsystems from suspend/hibernate, taken just
> +		before thawing user processes.
> +		This floating point number is measured in seconds, based on
> +		the monotonic clock (CLOCK_MONOTONIC).
> +
>  What:		/sys/power/sync_on_suspend
>  Date:		October 2019
>  Contact:	Jonas Meurer <jonas@freesources.org>
> diff --git a/include/linux/suspend.h b/include/linux/suspend.h
> index ef503088942d..ddd789044960 100644
> --- a/include/linux/suspend.h
> +++ b/include/linux/suspend.h
> @@ -8,6 +8,7 @@
>  #include <linux/pm.h>
>  #include <linux/mm.h>
>  #include <linux/freezer.h>
> +#include <linux/timekeeping.h>
>  #include <asm/errno.h>
>  
>  #ifdef CONFIG_VT
> @@ -71,6 +72,7 @@ struct suspend_stats {
>  	u64	last_hw_sleep;
>  	u64	total_hw_sleep;
>  	u64	max_hw_sleep;
> +	struct timespec64 last_success_resume_time;
>  	enum suspend_stat_step	failed_steps[REC_FAILED_NUM];
>  };
>  
> diff --git a/kernel/power/main.c b/kernel/power/main.c
> index f6425ae3e8b0..2ab23fd3daac 100644
> --- a/kernel/power/main.c
> +++ b/kernel/power/main.c
> @@ -421,6 +421,17 @@ static ssize_t last_failed_step_show(struct kobject *kobj,
>  }
>  static struct kobj_attribute last_failed_step = __ATTR_RO(last_failed_step);
>  
> +static ssize_t last_success_resume_time_show(struct kobject *kobj,
> +		struct kobj_attribute *attr, char *buf)
> +{
> +	return sprintf(buf, "%llu.%09llu\n",
> +		(unsigned long long)suspend_stats.last_success_resume_time.tv_sec,
> +		(unsigned long long)suspend_stats.last_success_resume_time.tv_nsec);
> +}
> +
> +static struct kobj_attribute last_success_resume_time =
> +			__ATTR_RO(last_success_resume_time);
> +
>  static struct attribute *suspend_attrs[] = {
>  	&success.attr,
>  	&fail.attr,
> @@ -438,6 +449,7 @@ static struct attribute *suspend_attrs[] = {
>  	&last_hw_sleep.attr,
>  	&total_hw_sleep.attr,
>  	&max_hw_sleep.attr,
> +	&last_success_resume_time.attr,
>  	NULL,
>  };
>  
> @@ -514,6 +526,9 @@ static int suspend_stats_show(struct seq_file *s, void *unused)
>  			suspend_step_name(
>  				suspend_stats.failed_steps[index]));
>  	}
> +	seq_printf(s,	"last_success_resume_time:\t%llu.%09llu\n",
> +		   (unsigned long long)suspend_stats.last_success_resume_time.tv_sec,
> +		   (unsigned long long)suspend_stats.last_success_resume_time.tv_nsec);
>  
>  	return 0;
>  }
> diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
> index fa3bf161d13f..2d0f46b4d0cf 100644
> --- a/kernel/power/suspend.c
> +++ b/kernel/power/suspend.c
> @@ -595,6 +595,15 @@ static int enter_state(suspend_state_t state)
>   Finish:
>  	events_check_enabled = false;
>  	pm_pr_dbg("Finishing wakeup.\n");
> +
> +	/*
> +	 * Record the last successful resume timestamp just before thawing
> +	 * processes. This helps user space measure its own resume performance,
> +	 * to improve programs or find regressions.
> +	 */
> +	if (!error)
> +		ktime_get_ts64(&suspend_stats.last_success_resume_time);
> +
>  	suspend_finish();
>   Unlock:
>  	mutex_unlock(&system_transition_mutex);
>
Rafael J. Wysocki Jan. 19, 2024, 9:07 p.m. UTC | #2
On Wed, Jan 17, 2024 at 1:07 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> Gentle ping.
>
> I would like to know whether this is enough or whether I should add more info/updates.

I still am not sure what this is going to be useful for.

Do you have a specific example?

Also please note that suspend stats are going to be reworked shortly
and I would prefer to make any changes to it after the rework.

Thanks!
Masami Hiramatsu (Google) Jan. 25, 2024, 12:43 a.m. UTC | #3
On Mon, 22 Jan 2024 18:08:22 -0800
Brian Norris <briannorris@chromium.org> wrote:

> On Fri, Jan 19, 2024 at 1:08 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > On Wed, Jan 17, 2024 at 1:07 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > >
> > > Gentle ping.
> > >
> > > I would like to know whether this is enough or whether I should add more info/updates.
> >
> > I still am not sure what this is going to be useful for.
> >
> > Do you have a specific example?
> 
> Since there seems to be some communication gap here, I'll give it a try.
> 
> First, I'll paste the key phrase of its use case from the cover letter:
> 
>   "we would like to know how long the resume processes are taken in kernel
>   and in user-space"
> 
> This is a "system measurement" question, for use in tests (e.g., in a
> test lab for CI or for pre-release testing, where we suspend
> Chromebooks, wake them back up, and measure how long the wakeup took)
> or for user-reported metrics (e.g., similar statistics from real
> users' systems, if they've agreed to automatically report usage
> statistics back to Google). We'd like to know how long it takes for a
> system to wake up, so we can detect when there are problems that lead
> to a slow system-resume experience. The user experience includes both
> time spent in the kernel and time spent after user space has thawed
> (and is spending time in potentially complex power and display manager
> stacks) before a Chromebook's display lights back up.

Thanks Brian for explaining; this correctly describes how we are
using this to measure the resume duration.

> If I understand the whole of Masami's work correctly, I believe we're
> taking "timestamps parsed out of dmesg" (or potentially out of ftrace,
> trace events, etc.) to measure the kernel side, plus "timestamp
> provided here in CLOCK_MONOTONIC" and "timestamp determined in our
> power/display managers" to measure user space.

Yes, I decided to decouple the kernel and user-space measurements because
the clock subsystem is adjusted while resuming. So for the kernel we will
use the local clock (which is not exposed to user space), and
CLOCK_MONOTONIC for user space.

> Does that make sense? Or are we still missing something "specific" for
> you? I could give code pointers [1], as it's all open source. But I'm
> not sure browsing our metric-collection code would help understanding
> any more than these explanations.

I hope it helps you understand more about this. If you have further
questions, I will be happy to explain.

> (TBH, this all still seems kinda odd to me, since parsing dmesg isn't
> a great way to get machine-readable information. But this at least
> serves to close some gaps in measurement.)

Yeah, if I can add more to the stats, I would like to also add the
duration of the kernel resume as "last_success_resume_duration". Would
that be a smarter solution? Or maybe we can also use ftrace for the
kernel side. But in any case, to measure the user-space part from user
space, we need a reference point for the start of resuming.

Thank you,

> 
> Brian
> 
> [1] e.g., https://source.chromium.org/chromiumos/chromiumos/codesearch/+/main:src/platform2/power_manager/powerd/metrics_collector.cc;l=294;drc=ce8075df179c4f8b2f4e4c4df6978d3df665c4d1
Rafael J. Wysocki Jan. 25, 2024, 8:19 p.m. UTC | #4
On Thu, Jan 25, 2024 at 1:43 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
>
> On Mon, 22 Jan 2024 18:08:22 -0800
> Brian Norris <briannorris@chromium.org> wrote:
>
> > On Fri, Jan 19, 2024 at 1:08 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> > > On Wed, Jan 17, 2024 at 1:07 AM Masami Hiramatsu <mhiramat@kernel.org> wrote:
> > > >
> > > > Gentle ping.
> > > >
> > > > I would like to know whether this is enough or whether I should add more info/updates.
> > >
> > > I still am not sure what this is going to be useful for.
> > >
> > > Do you have a specific example?
> >
> > Since there seems to be some communication gap here, I'll give it a try.
> >
> > First, I'll paste the key phrase of its use case from the cover letter:
> >
> >   "we would like to know how long the resume processes are taken in kernel
> >   and in user-space"
> >
> > This is a "system measurement" question, for use in tests (e.g., in a
> > test lab for CI or for pre-release testing, where we suspend
> > Chromebooks, wake them back up, and measure how long the wakeup took)
> > or for user-reported metrics (e.g., similar statistics from real
> > users' systems, if they've agreed to automatically report usage
> > statistics back to Google). We'd like to know how long it takes for a
> > system to wake up, so we can detect when there are problems that lead
> > to a slow system-resume experience. The user experience includes both
> > time spent in the kernel and time spent after user space has thawed
> > (and is spending time in potentially complex power and display manager
> > stacks) before a Chromebook's display lights back up.
>
> Thanks Brian for explaining; this correctly describes how we are
> using this to measure the resume duration.
>
> > If I understand the whole of Masami's work correctly, I believe we're
> > taking "timestamps parsed out of dmesg" (or potentially out of ftrace,
> > trace events, etc.) to measure the kernel side, plus "timestamp
> > provided here in CLOCK_MONOTONIC" and "timestamp determined in our
> > power/display managers" to measure user space.
>
> Yes, I decided to decouple the kernel and user-space measurements because
> the clock subsystem is adjusted while resuming. So for the kernel we will
> use the local clock (which is not exposed to user space), and
> CLOCK_MONOTONIC for user space.

The problem with this split is that you cannot know how much time
elapses between the "successful kernel resume time" and the time when
user space gets to resume.

As of this patch, the kernel timestamp is taken when the kernel is
about to thaw user space and some user space tasks may start running
right away.

Some other tasks, however, will wait for what happens next in the
kernel (because it is not done with resuming yet) and some of them
will wait until explicitly asked to resume by the resume process IIUC.

Your results depend on which tasks participate in the "user
experience", so to speak.  If they are the tasks that wait to be
kicked by the resume process, the kernel timestamp taken as per the
above is useless for them, because there is quite some stuff that
happens in the kernel before they will get kicked.

Moreover, some tasks will wait for certain device drivers to get ready
after the rest of the system resumes and that may still take some more
time after the kernel has returned to the process driving the system
suspend-resume.

I'm not sure if there is a single point which can be used as a "user
space resume start" time for every task, which is why I'm not
convinced about this patch.

BTW, there is a utility called sleepgraph that measures the kernel
part of the system suspend-resume.  It does its best to measure it
very precisely and uses different techniques for that.  Also, it is
included in the kernel source tree.  Can you please have a look at it
and see how much there is in common between it and your tools?  Maybe
there are some interfaces that can be used in common, or maybe it
could benefit from some interfaces that you are planning to add.

Patch

diff --git a/Documentation/ABI/testing/sysfs-power b/Documentation/ABI/testing/sysfs-power
index a3942b1036e2..ee567e7e9d4a 100644
--- a/Documentation/ABI/testing/sysfs-power
+++ b/Documentation/ABI/testing/sysfs-power
@@ -442,6 +442,17 @@  Description:
 		'total_hw_sleep' and 'last_hw_sleep' may not be accurate.
 		This number is measured in microseconds.
 
+What:		/sys/power/suspend_stats/last_success_resume_time
+Date:		Dec 2023
+Contact:	Masami Hiramatsu <mhiramat@kernel.org>
+Description:
+		The /sys/power/suspend_stats/last_success_resume_time file
+		contains the timestamp of when the kernel successfully
+		resumed drivers/subsystems from suspend/hibernate, taken just
+		before thawing user processes.
+		This floating point number is measured in seconds, based on
+		the monotonic clock (CLOCK_MONOTONIC).
+
 What:		/sys/power/sync_on_suspend
 Date:		October 2019
 Contact:	Jonas Meurer <jonas@freesources.org>
diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index ef503088942d..ddd789044960 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -8,6 +8,7 @@ 
 #include <linux/pm.h>
 #include <linux/mm.h>
 #include <linux/freezer.h>
+#include <linux/timekeeping.h>
 #include <asm/errno.h>
 
 #ifdef CONFIG_VT
@@ -71,6 +72,7 @@  struct suspend_stats {
 	u64	last_hw_sleep;
 	u64	total_hw_sleep;
 	u64	max_hw_sleep;
+	struct timespec64 last_success_resume_time;
 	enum suspend_stat_step	failed_steps[REC_FAILED_NUM];
 };
 
diff --git a/kernel/power/main.c b/kernel/power/main.c
index f6425ae3e8b0..2ab23fd3daac 100644
--- a/kernel/power/main.c
+++ b/kernel/power/main.c
@@ -421,6 +421,17 @@  static ssize_t last_failed_step_show(struct kobject *kobj,
 }
 static struct kobj_attribute last_failed_step = __ATTR_RO(last_failed_step);
 
+static ssize_t last_success_resume_time_show(struct kobject *kobj,
+		struct kobj_attribute *attr, char *buf)
+{
+	return sprintf(buf, "%llu.%09llu\n",
+		(unsigned long long)suspend_stats.last_success_resume_time.tv_sec,
+		(unsigned long long)suspend_stats.last_success_resume_time.tv_nsec);
+}
+
+static struct kobj_attribute last_success_resume_time =
+			__ATTR_RO(last_success_resume_time);
+
 static struct attribute *suspend_attrs[] = {
 	&success.attr,
 	&fail.attr,
@@ -438,6 +449,7 @@  static struct attribute *suspend_attrs[] = {
 	&last_hw_sleep.attr,
 	&total_hw_sleep.attr,
 	&max_hw_sleep.attr,
+	&last_success_resume_time.attr,
 	NULL,
 };
 
@@ -514,6 +526,9 @@  static int suspend_stats_show(struct seq_file *s, void *unused)
 			suspend_step_name(
 				suspend_stats.failed_steps[index]));
 	}
+	seq_printf(s,	"last_success_resume_time:\t%llu.%09llu\n",
+		   (unsigned long long)suspend_stats.last_success_resume_time.tv_sec,
+		   (unsigned long long)suspend_stats.last_success_resume_time.tv_nsec);
 
 	return 0;
 }
diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
index fa3bf161d13f..2d0f46b4d0cf 100644
--- a/kernel/power/suspend.c
+++ b/kernel/power/suspend.c
@@ -595,6 +595,15 @@  static int enter_state(suspend_state_t state)
  Finish:
 	events_check_enabled = false;
 	pm_pr_dbg("Finishing wakeup.\n");
+
+	/*
+	 * Record the last successful resume timestamp just before thawing
+	 * processes. This helps user space measure its own resume performance,
+	 * to improve programs or find regressions.
+	 */
+	if (!error)
+		ktime_get_ts64(&suspend_stats.last_success_resume_time);
+
 	suspend_finish();
  Unlock:
 	mutex_unlock(&system_transition_mutex);