
sched: update blocked load of idle cpus

Message ID 1435129872-22915-1-git-send-email-vincent.guittot@linaro.org
State New

Commit Message

Vincent Guittot June 24, 2015, 7:11 a.m. UTC
The load and the util of idle cpus must be updated periodically in order to
decay the blocked part.

If CONFIG_FAIR_GROUP_SCHED is not set, the load and util of idle cpus
are not decayed and stay at the values set before becoming idle.
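
As a point of reference, PELT decays these sums geometrically, halving a
contribution every 32 periods of ~1024us (~32ms). A minimal user-space
sketch of that decay (illustrative only, not kernel code; the half-life
constant is the only thing taken from the scheduler):

#include <stdio.h>
#include <math.h>

int main(void)
{
	double load = 1024.0;            /* blocked load at idle entry */
	double y = pow(0.5, 1.0 / 32.0); /* per-period decay: y^32 = 0.5 */
	int n;

	/* one PELT period is ~1024us, so 32 periods is ~32ms */
	for (n = 0; n <= 128; n += 32)
		printf("after %3d periods: %6.1f\n", n, load * pow(y, n));

	return 0;
}

After a few half-lives the blocked contribution becomes negligible, which
is why skipping the decay leaves such a visibly stale load.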

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
Hi Yuyang,

While testing your patchset without CONFIG_FAIR_GROUP_SCHED, I have noticed
that the load of idle cpus sometimes stays at a high value even though they
have not been used for a while, because we are not decaying the blocked load.
Furthermore, the periodic load balance was not pulling tasks onto some idle
cpus because their load stayed high.

This patch fixes the issue.
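
To make the symptom concrete, here is a hypothetical, simplified sketch
(plain user-space C, all names invented) of why a stale load keeps an idle
cpu from being picked as a pull target:

#include <stdbool.h>
#include <stdio.h>

struct cpu_stat {
	unsigned long load_avg;	/* last computed load average */
	bool idle;
};

/*
 * Without decay, an idle cpu keeps whatever load it had before going
 * idle, so a comparison against the domain average still sees it as
 * busy and the balancer never moves tasks to it.
 */
static bool looks_underloaded(const struct cpu_stat *c,
			      unsigned long domain_avg)
{
	return c->load_avg < domain_avg;
}

int main(void)
{
	/* went idle while its blocked load was still high */
	struct cpu_stat idle_cpu = { .load_avg = 900, .idle = true };

	printf("pull target? %s\n",
	       looks_underloaded(&idle_cpu, 512) ? "yes" : "no");
	return 0;
}

With the patch below, the !CONFIG_FAIR_GROUP_SCHED variant of
update_blocked_averages() performs the missing decay, so such a stale
load_avg converges toward zero while the cpu stays idle.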

Regards,
Vincent

 kernel/sched/fair.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c5f18d9..665cc4b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5864,6 +5864,17 @@  static unsigned long task_h_load(struct task_struct *p)
 #else
 static inline void update_blocked_averages(int cpu)
 {
+	struct rq *rq = cpu_rq(cpu);
+	struct cfs_rq *cfs_rq = &rq->cfs;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&rq->lock, flags);
+	update_rq_clock(rq);
+
+	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
+
+	raw_spin_unlock_irqrestore(&rq->lock, flags);
+
 }
 
 static unsigned long task_h_load(struct task_struct *p)