Message ID | 20240820163512.1096301-15-qyousef@layalina.io |
---|---|
State | New |
Series | sched/fair/schedutil: Better manage system response time |
On Tue, Aug 20, 2024 at 05:35:10PM +0100, Qais Yousef wrote:
> It means we're idling or doing less work and are already running
> at a higher value. No need to apply any dvfs headroom in this case.
>
> Signed-off-by: Qais Yousef <qyousef@layalina.io>
> ---
>  kernel/sched/cpufreq_schedutil.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 318b09bc4ab1..4a1a8b353d51 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -9,6 +9,7 @@
>  #define IOWAIT_BOOST_MIN	(SCHED_CAPACITY_SCALE / 8)
>
>  DEFINE_PER_CPU_READ_MOSTLY(unsigned long, response_time_mult);
> +DEFINE_PER_CPU(unsigned long, last_update_util);
>
>  struct sugov_tunables {
>  	struct gov_attr_set	attr_set;
> @@ -262,15 +263,19 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
>   * Also take into accounting how long tasks have been waiting in runnable but
>   * !running state. If it is high, it means we need higher DVFS headroom to
>   * reduce it.
> - *
> - * XXX: Should we provide headroom when the util is decaying?
>   */
>  static inline unsigned long sugov_apply_dvfs_headroom(unsigned long util, int cpu)
>  {
> -	unsigned long update_headroom, waiting_headroom;
> +	unsigned long update_headroom, waiting_headroom, prev_util;
>  	struct rq *rq = cpu_rq(cpu);
>  	u64 delay;
>
> +	prev_util = per_cpu(last_update_util, cpu);
> +	per_cpu(last_update_util, cpu) = util;
> +
> +	if (util < prev_util)
> +		return util;
> +
>  	/*
>  	 * What is the possible worst case scenario for updating util_avg, ctx
>  	 * switch or TICK?
> --
> 2.34.1

Hmm, after the changes in "sched: cpufreq: Remove magic 1.25 headroom
from sugov_apply_dvfs_headroom()", won't sugov_apply_dvfs_headroom()
already decay the headroom gracefully in step with the decaying util? I
suspect that abruptly killing the headroom entirely could be premature
depending on the workload, and lead to util bouncing back up due to the
time dilation effect you described in the cover letter.

Cheers,
Sultan
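To make the concern concrete, here is a minimal userspace sketch, not
kernel code: the ~1.25x multiplier and the geometric decay rate are
assumptions for illustration only. It compares a multiplicative
headroom, which already shrinks in step with a decaying util, against
the early return in this patch, which drops the headroom in a single
step the moment util ticks down:

#include <stdio.h>

#define HEADROOM_NUM	5	/* assumed ~1.25x multiplicative headroom */
#define HEADROOM_DEN	4

static unsigned long with_headroom(unsigned long util)
{
	return util * HEADROOM_NUM / HEADROOM_DEN;
}

int main(void)
{
	unsigned long util = 800, prev_util = 800;

	for (int tick = 0; tick < 5; tick++) {
		/* graceful: headroom shrinks in step with the decaying util */
		unsigned long graceful = with_headroom(util);
		/* patch behavior: drop headroom entirely once util decreases */
		unsigned long cutoff = util < prev_util ? util
						       : with_headroom(util);

		printf("tick %d: util=%lu graceful=%lu cutoff=%lu\n",
		       tick, util, graceful, cutoff);

		prev_util = util;
		util = util * 3 / 4;	/* assumed geometric decay for the demo */
	}
	return 0;
}

Running this shows the graceful path falling 1000 -> 750 -> 562 while
the cutoff path jumps 1000 -> 600, which is the step change Sultan is
worried about.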
On 8/20/24 17:35, Qais Yousef wrote:
> It means we're idling or doing less work and are already running
> at a higher value. No need to apply any dvfs headroom in this case.
>
> Signed-off-by: Qais Yousef <qyousef@layalina.io>
> ---
>  kernel/sched/cpufreq_schedutil.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
> index 318b09bc4ab1..4a1a8b353d51 100644
> --- a/kernel/sched/cpufreq_schedutil.c
> +++ b/kernel/sched/cpufreq_schedutil.c
> @@ -9,6 +9,7 @@
>  #define IOWAIT_BOOST_MIN	(SCHED_CAPACITY_SCALE / 8)
>
>  DEFINE_PER_CPU_READ_MOSTLY(unsigned long, response_time_mult);
> +DEFINE_PER_CPU(unsigned long, last_update_util);
>
>  struct sugov_tunables {
>  	struct gov_attr_set	attr_set;
> @@ -262,15 +263,19 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
>   * Also take into accounting how long tasks have been waiting in runnable but
>   * !running state. If it is high, it means we need higher DVFS headroom to
>   * reduce it.
> - *
> - * XXX: Should we provide headroom when the util is decaying?
>   */
>  static inline unsigned long sugov_apply_dvfs_headroom(unsigned long util, int cpu)
>  {
> -	unsigned long update_headroom, waiting_headroom;
> +	unsigned long update_headroom, waiting_headroom, prev_util;
>  	struct rq *rq = cpu_rq(cpu);
>  	u64 delay;
>
> +	prev_util = per_cpu(last_update_util, cpu);
> +	per_cpu(last_update_util, cpu) = util;
> +
> +	if (util < prev_util)
> +		return util;
> +
>  	/*
>  	 * What is the possible worst case scenario for updating util_avg, ctx
>  	 * switch or TICK?

Kind of in the same vein as Sultan here: a -/+1 change in util really
doesn't tell you much, and I would be wary of basing any special
behavior on it. This goes for here, but also for the 'periodic'-task
detection in

  [RFC PATCH 09/16] sched/fair: util_est: Take into account periodic tasks

In my experience, as soon as we leave the world of rt-app workloads
behind, these signals aren't stable enough at that granularity.
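The stability concern can be illustrated with a small sketch: a strict
`util < prev_util` test flips on single-unit noise, whereas even a small
hysteresis margin filters it out. The margin value below is an
assumption for illustration, not something proposed in this series:

#include <stdbool.h>
#include <stdio.h>

#define DECAY_MARGIN	16	/* assumed noise threshold, ~1.5% of 1024 */

static bool util_is_decaying(unsigned long util, unsigned long prev_util)
{
	return util + DECAY_MARGIN < prev_util;
}

int main(void)
{
	/* jittery but steady signal: the strict compare flags half of it */
	unsigned long samples[] = { 512, 511, 513, 512, 510, 514 };
	unsigned long prev = samples[0];

	for (int i = 1; i < 6; i++) {
		printf("sample %lu: strict=%d margin=%d\n", samples[i],
		       samples[i] < prev,
		       util_is_decaying(samples[i], prev));
		prev = samples[i];
	}
	return 0;
}

On this input the strict compare reports "decaying" three times out of
five even though the signal is flat; the margined test never fires.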
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index 318b09bc4ab1..4a1a8b353d51 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -9,6 +9,7 @@
 #define IOWAIT_BOOST_MIN	(SCHED_CAPACITY_SCALE / 8)

 DEFINE_PER_CPU_READ_MOSTLY(unsigned long, response_time_mult);
+DEFINE_PER_CPU(unsigned long, last_update_util);

 struct sugov_tunables {
 	struct gov_attr_set	attr_set;
@@ -262,15 +263,19 @@ static unsigned int get_next_freq(struct sugov_policy *sg_policy,
  * Also take into accounting how long tasks have been waiting in runnable but
  * !running state. If it is high, it means we need higher DVFS headroom to
  * reduce it.
- *
- * XXX: Should we provide headroom when the util is decaying?
  */
 static inline unsigned long sugov_apply_dvfs_headroom(unsigned long util, int cpu)
 {
-	unsigned long update_headroom, waiting_headroom;
+	unsigned long update_headroom, waiting_headroom, prev_util;
 	struct rq *rq = cpu_rq(cpu);
 	u64 delay;

+	prev_util = per_cpu(last_update_util, cpu);
+	per_cpu(last_update_util, cpu) = util;
+
+	if (util < prev_util)
+		return util;
+
 	/*
 	 * What is the possible worst case scenario for updating util_avg, ctx
 	 * switch or TICK?
It means we're idling or doing less work and are already running at a
higher value. No need to apply any dvfs headroom in this case.

Signed-off-by: Qais Yousef <qyousef@layalina.io>
---
 kernel/sched/cpufreq_schedutil.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)
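For readers following along outside the kernel tree, a self-contained
userspace model of the new fast path behaves as follows. The per-CPU
storage is modeled with a plain array, and HEADROOM() is a stand-in:
the real sugov_apply_dvfs_headroom() computes update and waiting
headrooms from rq state.

#include <stdio.h>

#define NR_CPUS		4
#define HEADROOM(u)	((u) + (u) / 4)	/* stand-in for the real headroom math */

/* models DEFINE_PER_CPU(unsigned long, last_update_util) */
static unsigned long last_update_util[NR_CPUS];

static unsigned long apply_dvfs_headroom(unsigned long util, int cpu)
{
	unsigned long prev_util = last_update_util[cpu];

	last_update_util[cpu] = util;

	/* util went down since the last update: skip the headroom */
	if (util < prev_util)
		return util;

	return HEADROOM(util);
}

int main(void)
{
	unsigned long utils[] = { 400, 500, 500, 480, 520 };

	for (int i = 0; i < 5; i++)
		printf("util=%lu -> freq input=%lu\n", utils[i],
		       apply_dvfs_headroom(utils[i], 0));
	return 0;
}

Any rising or flat sample gets the headroom applied; a falling sample
(480 after 500 here) is passed through raw, which is exactly the
behavior the two reviews above are questioning.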