Message ID: 1535728975-22799-1-git-send-email-vincent.guittot@linaro.org
State: New
Series: [v3] sched/pelt: fix update_blocked_averages() for dl and rt
On 08/31/2018 08:22 AM, Vincent Guittot wrote:
> update_blocked_averages() is called to periodically decay the stalled load
> of idle CPUs and to sync all loads before running load balance.
>
> When cfs rq is idle, it triggers a load balance during pick_next_task_fair()
> in order to potentially pull tasks and to use this newly idle CPU. This
> load balance happens while the prev task from another class has not been put
> and its utilization updated yet. This may lead to wrongly accounting running
> time as idle time for the rt or dl classes.
>
> Test that no rt or dl task is running when updating their utilization in
> update_blocked_averages().

Shouldn't this be 's/that no/if an'? You still update the utilization of an
rt or dl task if it is running (accrue + decay) instead of only decaying it,
similar to the 'cfs_rq->curr != NULL' check in __update_load_avg_cfs_rq().

> We still update rt and dl utilization instead of simply skipping them to
> make sure that all metrics are synced when used during load balance.

I assume this sentence is indirectly saying this.

[...]
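For reference, the distinction drawn above (accrue + decay versus decay only)
is carried by the last argument these helpers hand to ___update_load_sum().
Abridged from kernel/sched/pelt.c of that kernel era (comments added here,
not in the original source):

/* rt/dl side: the caller decides whether "something is running". */
int update_rt_rq_load_avg(u64 now, struct rq *rq, int running)
{
	if (___update_load_sum(now, rq->cpu, &rq->avg_rt,
				running,
				running,
				running)) {
		/* Time accrues as rt running time only when running != 0;
		 * with running == 0 the signal merely decays. */
		___update_load_avg(&rq->avg_rt, 1, 1);
		return 1;
	}

	return 0;
}

/* cfs side: "running" is derived from cfs_rq->curr, as the review notes. */
int __update_load_avg_cfs_rq(u64 now, int cpu, struct cfs_rq *cfs_rq)
{
	if (___update_load_sum(now, cpu, &cfs_rq->avg,
				scale_load_down(cfs_rq->load.weight),
				scale_load_down(cfs_rq->runnable_weight),
				cfs_rq->curr != NULL)) {
		___update_load_avg(&cfs_rq->avg, 1, 1);
		return 1;
	}

	return 0;
}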
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 309c93f..53bbcd4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7262,6 +7262,7 @@ static void update_blocked_averages(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	struct cfs_rq *cfs_rq, *pos;
+	const struct sched_class *curr_class;
 	struct rq_flags rf;
 	bool done = true;
 
@@ -7298,8 +7299,10 @@ static void update_blocked_averages(int cpu)
 		if (cfs_rq_has_blocked(cfs_rq))
 			done = false;
 	}
-	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
-	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
+
+	curr_class = rq->curr->sched_class;
+	update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
 	update_irq_load_avg(rq, 0);
 	/* Don't need periodic decay once load/util_avg are null */
 	if (others_have_blocked(rq))
@@ -7364,13 +7367,16 @@ static inline void update_blocked_averages(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	struct cfs_rq *cfs_rq = &rq->cfs;
+	const struct sched_class *curr_class;
 	struct rq_flags rf;
 
 	rq_lock_irqsave(rq, &rf);
 	update_rq_clock(rq);
 	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
-	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
-	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
+
+	curr_class = rq->curr->sched_class;
+	update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
 	update_irq_load_avg(rq, 0);
 #ifdef CONFIG_NO_HZ_COMMON
 	rq->last_blocked_load_update_tick = jiffies;
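A note on the V3 change ("move rq->curr dereference under the rq->lock"),
visible in the second hunk above: rq->curr may only be dereferenced while the
rq lock is held (or under RCU), since otherwise the current task could be
switched out and its task_struct freed concurrently. A minimal sketch of the
pattern the patch follows, condensed from the hunk above:

	rq_lock_irqsave(rq, &rf);	/* pins rq->curr for this section */
	update_rq_clock(rq);

	/*
	 * Safe only under the rq lock: rq->curr cannot change beneath us,
	 * so reading its sched_class is race-free.
	 */
	curr_class = rq->curr->sched_class;
	update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
	update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);

	rq_unlock_irqrestore(rq, &rf);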
update_blocked_averages() is called to periodically decay the stalled load
of idle CPUs and to sync all loads before running load balance.

When cfs rq is idle, it triggers a load balance during pick_next_task_fair()
in order to potentially pull tasks and to use this newly idle CPU. This
load balance happens while the prev task from another class has not been put
and its utilization updated yet. This may lead to wrongly accounting running
time as idle time for the rt or dl classes.

Test that no rt or dl task is running when updating their utilization in
update_blocked_averages(). We still update rt and dl utilization instead of
simply skipping them to make sure that all metrics are synced when used
during load balance.

Fixes: 371bf4273269 ("sched/rt: Add rt_rq utilization tracking")
Fixes: 3727e0e16340 ("sched/dl: Add dl_rq utilization tracking")
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---

-V3
 - move rq->curr dereference under the rq->lock

-V2
 - Add missing fixes tags
 - apply the fix to the other version of update_blocked_averages

 kernel/sched/fair.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

-- 
2.7.4
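For context, the window this patch closes sits in the newidle-balance path
described above. An abridged call-path sketch (from kernel/sched/fair.c of
that era; simplified, not the literal source):

/*
 * __schedule()
 *   pick_next_task_fair(rq, prev, rf)    // prev (possibly rt/dl) not put yet
 *     idle:
 *       new_tasks = idle_balance(rq, rf);
 *         update_blocked_averages(this_cpu);
 *           // before this fix: update_rt_rq_load_avg(..., 0) and
 *           // update_dl_rq_load_avg(..., 0) treat the CPU as idle,
 *           // decaying away time a still-current rt/dl task is running
 */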