| Message ID | 1535727379-17133-1-git-send-email-vincent.guittot@linaro.org |
|---|---|
| State | New |
| Series | [v2] sched/pelt: fix update_blocked_averages() for dl and rt |
On Fri, Aug 31, 2018 at 04:56:19PM +0200, Vincent Guittot wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 309c93f..bc1de21 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7262,6 +7262,7 @@ static void update_blocked_averages(int cpu)
>  {
>  	struct rq *rq = cpu_rq(cpu);
>  	struct cfs_rq *cfs_rq, *pos;
> +	const struct sched_class *curr_class = rq->curr->sched_class;
>  	struct rq_flags rf;
>  	bool done = true;

Can you do me a v3 where you move that rq->curr dereference under the
rq->lock?

I _think_ it is actually OK, but it is really dodgy. Moving it under
rq->lock makes it obviously correct.

> @@ -7298,8 +7299,8 @@ static void update_blocked_averages(int cpu)
>  		if (cfs_rq_has_blocked(cfs_rq))
>  			done = false;
>  	}
> -	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
> -	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
> +	update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
> +	update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
>  	update_irq_load_avg(rq, 0);
>  	/* Don't need periodic decay once load/util_avg are null */
>  	if (others_have_blocked(rq))
On Fri, 31 Aug 2018 at 17:07, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Aug 31, 2018 at 04:56:19PM +0200, Vincent Guittot wrote:
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 309c93f..bc1de21 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7262,6 +7262,7 @@ static void update_blocked_averages(int cpu)
> >  {
> >  	struct rq *rq = cpu_rq(cpu);
> >  	struct cfs_rq *cfs_rq, *pos;
> > +	const struct sched_class *curr_class = rq->curr->sched_class;
> >  	struct rq_flags rf;
> >  	bool done = true;
>
> Can you do me a v3 where you move that rq->curr dereference under the
> rq->lock?

Yes. Sorry for all these versions.

> I _think_ it is actually OK, but it is really dodgy. Moving it under
> rq->lock makes it obviously correct.
>
> > @@ -7298,8 +7299,8 @@ static void update_blocked_averages(int cpu)
> >  		if (cfs_rq_has_blocked(cfs_rq))
> >  			done = false;
> >  	}
> > -	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
> > -	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
> > +	update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
> > +	update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
> >  	update_irq_load_avg(rq, 0);
> >  	/* Don't need periodic decay once load/util_avg are null */
> >  	if (others_have_blocked(rq))
On Fri, Aug 31, 2018 at 05:07:21PM +0200, Peter Zijlstra wrote:
> On Fri, Aug 31, 2018 at 04:56:19PM +0200, Vincent Guittot wrote:
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 309c93f..bc1de21 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7262,6 +7262,7 @@ static void update_blocked_averages(int cpu)
> >  {
> >  	struct rq *rq = cpu_rq(cpu);
> >  	struct cfs_rq *cfs_rq, *pos;
> > +	const struct sched_class *curr_class = rq->curr->sched_class;
> >  	struct rq_flags rf;
> >  	bool done = true;
>
> Can you do me a v3 where you move that rq->curr dereference under the
> rq->lock?
>
> I _think_ it is actually OK, but it is really dodgy. Moving it under
> rq->lock makes it obviously correct.

Ah, it is not correct. I only checked to see if preemption was disabled
and if we're calling this on the local CPU (which I think is true),
which would guarantee rq->curr's existence. But that is not sufficient
to make rq->curr->sched_class stable.

> > @@ -7298,8 +7299,8 @@ static void update_blocked_averages(int cpu)
> >  		if (cfs_rq_has_blocked(cfs_rq))
> >  			done = false;
> >  	}
> > -	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
> > -	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
> > +	update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
> > +	update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
> >  	update_irq_load_avg(rq, 0);
> >  	/* Don't need periodic decay once load/util_avg are null */
> >  	if (others_have_blocked(rq))
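For reference, a sketch of the ordering being asked for here (this is not the actual v3 patch, and the per-cfs_rq blocked-load loop is elided): take rq->lock first, and only then read rq->curr->sched_class, so the task pointed to by rq->curr cannot change under us while its class is inspected.

```c
/*
 * Sketch only, based on the discussion above -- not the v3 patch itself.
 * The point is the ordering: rq->curr is dereferenced only with rq->lock
 * held, which is what makes the curr_class comparison obviously safe.
 */
static void update_blocked_averages(int cpu)
{
	struct rq *rq = cpu_rq(cpu);
	const struct sched_class *curr_class;
	struct rq_flags rf;

	rq_lock_irqsave(rq, &rf);
	update_rq_clock(rq);

	/* dereference rq->curr only while holding rq->lock */
	curr_class = rq->curr->sched_class;

	/* ... per-cfs_rq blocked load updates elided ... */

	update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
	update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
	update_irq_load_avg(rq, 0);

	rq_unlock_irqrestore(rq, &rf);
}
```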
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 309c93f..bc1de21 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7262,6 +7262,7 @@ static void update_blocked_averages(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	struct cfs_rq *cfs_rq, *pos;
+	const struct sched_class *curr_class = rq->curr->sched_class;
 	struct rq_flags rf;
 	bool done = true;
@@ -7298,8 +7299,8 @@ static void update_blocked_averages(int cpu)
 		if (cfs_rq_has_blocked(cfs_rq))
 			done = false;
 	}
-	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
-	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
+	update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
 	update_irq_load_avg(rq, 0);
 	/* Don't need periodic decay once load/util_avg are null */
 	if (others_have_blocked(rq))
@@ -7364,13 +7365,14 @@ static inline void update_blocked_averages(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
 	struct cfs_rq *cfs_rq = &rq->cfs;
+	const struct sched_class *curr_class = rq->curr->sched_class;
 	struct rq_flags rf;

 	rq_lock_irqsave(rq, &rf);
 	update_rq_clock(rq);
 	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
-	update_rt_rq_load_avg(rq_clock_task(rq), rq, 0);
-	update_dl_rq_load_avg(rq_clock_task(rq), rq, 0);
+	update_rt_rq_load_avg(rq_clock_task(rq), rq, curr_class == &rt_sched_class);
+	update_dl_rq_load_avg(rq_clock_task(rq), rq, curr_class == &dl_sched_class);
 	update_irq_load_avg(rq, 0);
 #ifdef CONFIG_NO_HZ_COMMON
 	rq->last_blocked_load_update_tick = jiffies;
update_blocked_averages() is called to periodically decay the stalled load
of idle CPUs and to sync all loads before running load balance.

When the cfs rq is idle, it triggers a load balance during
pick_next_task_fair() in order to potentially pull tasks and to use this
newly idle CPU. This load balance happens while the prev task from another
class has not been put yet and its utilization has not been updated. This
may lead to wrongly accounting running time as idle time for the rt or dl
classes.

Test that no rt or dl task is running when updating their utilization in
update_blocked_averages(). We still update the rt and dl utilization
instead of simply skipping them, to make sure that all metrics are synced
when used during load balance.

Fixes: 371bf4273269 ("sched/rt: Add rt_rq utilization tracking")
Fixes: 3727e0e16340 ("sched/dl: Add dl_rq utilization tracking")
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
v2:
- Add missing Fixes tags
- Apply the fix to the other version of update_blocked_averages()

 kernel/sched/fair.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

--
2.7.4
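To see why the third argument matters, here is a minimal, self-contained toy sketch (not kernel code; the function, decay factor, and variable names are made up for illustration) of how a PELT-like signal behaves when "running" is hard-coded to 0 versus derived from the class of rq->curr:

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy PELT-like signal: ramps towards 1.0 while running, decays otherwise. */
static double update_util(double util, bool running)
{
	return running ? util + (1.0 - util) * 0.1 : util * 0.9;
}

int main(void)
{
	bool curr_is_rt = true;	/* pretend rq->curr is an rt task */
	double util = 0.5;

	/* Before the fix: 'running' is always 0 here, so the rt signal
	 * decays even though an rt task is actually running. */
	printf("running=0         : %.3f\n", update_util(util, false));

	/* After the fix: pass whether rq->curr belongs to the rt class,
	 * so its running time is not accounted as idle time. */
	printf("running=curr_is_rt: %.3f\n", update_util(util, curr_is_rt));

	return 0;
}
```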