[v4,06/11] sched/fair: use load instead of runnable load in load_balance

Message ID 1571405198-27570-7-git-send-email-vincent.guittot@linaro.org
State Accepted
Commit b0fb1eb4f04ae4768231b9731efb1134e22053a4
Series sched/fair: rework the CFS load balance

Commit Message

Vincent Guittot Oct. 18, 2019, 1:26 p.m. UTC
Runnable load was introduced to handle the case where blocked load
biases the load-balance decision: an underutilized group carrying a
lot of blocked load could be selected while other groups were
overloaded.

The load is now only used when groups are overloaded. In that case,
it is worth being conservative and taking into account the sleeping
tasks that might wake up on the CPU.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

---
 kernel/sched/fair.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

-- 
2.7.4
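
For readers unfamiliar with the two signals: cfs_rq_load_avg() includes
the decayed contribution of blocked (sleeping) tasks, while
cfs_rq_runnable_load_avg() counts only the tasks currently queued. A
standalone toy model of why that matters when picking the busiest group
(not kernel code; the struct, helpers and numbers are purely
illustrative):

#include <stdio.h>

/*
 * Toy model of the two per-CPU load signals: "runnable" mirrors the
 * idea behind cfs_rq_runnable_load_avg() (queued tasks only), "load"
 * mirrors cfs_rq_load_avg() (queued plus blocked contributions).
 */
struct toy_cpu {
	unsigned long runnable_load;	/* tasks currently on the runqueue */
	unsigned long blocked_load;	/* decayed load of sleeping tasks */
};

static unsigned long toy_runnable_load(const struct toy_cpu *c)
{
	return c->runnable_load;
}

static unsigned long toy_load(const struct toy_cpu *c)
{
	return c->runnable_load + c->blocked_load;
}

int main(void)
{
	/* A group whose tasks mostly sleep on IO vs. a busy group. */
	struct toy_cpu sleepy = { .runnable_load = 100, .blocked_load = 900 };
	struct toy_cpu busy   = { .runnable_load = 800, .blocked_load = 0 };

	printf("runnable load: sleepy=%lu busy=%lu\n",
	       toy_runnable_load(&sleepy), toy_runnable_load(&busy));
	printf("load:          sleepy=%lu busy=%lu\n",
	       toy_load(&sleepy), toy_load(&busy));
	return 0;
}

Judged by runnable load alone, the sleepy group looks like the better
pull target even though its tasks are about to wake up; judged by load,
it does not, which is the conservative choice the patch makes for
overloaded groups.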

Comments

Mel Gorman Oct. 30, 2019, 3:58 p.m. UTC | #1
On Fri, Oct 18, 2019 at 03:26:33PM +0200, Vincent Guittot wrote:
> Runnable load was introduced to handle the case where blocked load
> biases the load-balance decision: an underutilized group carrying a
> lot of blocked load could be selected while other groups were
> overloaded.
>
> The load is now only used when groups are overloaded. In that case,
> it is worth being conservative and taking into account the sleeping
> tasks that might wake up on the CPU.
>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

Hmm.... ok. Superficially I get what you're doing, but I worry slightly
about groups that have lots of tasks frequently idling for short
periods on IO.

Unfortunately, when I queued this series for testing, I did not queue a
load that idles rapidly for short durations and would highlight problems
in that area.

I cannot convince myself it's ok enough for an ack but I have no reason
to complain either.

-- 
Mel Gorman
SUSE Labs
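
To put rough numbers on that concern: PELT decays a sleeping task's
load contribution with a half-life of roughly 32ms, so a task that
blocks on IO for a millisecond or two keeps almost all of its load and
cpu_load() still counts it. A back-of-the-envelope sketch (simplified
floating-point model; the kernel uses fixed-point arithmetic in
kernel/sched/pelt.c):

#include <math.h>
#include <stdio.h>

int main(void)
{
	/* PELT halves a sleeping task's load contribution every ~32ms. */
	const double half_life_ms = 32.0;
	double sleep_ms;

	for (sleep_ms = 1.0; sleep_ms <= 64.0; sleep_ms *= 4.0) {
		double retained = pow(0.5, sleep_ms / half_life_ms);
		printf("asleep %5.0f ms -> %.1f%% of load retained\n",
		       sleep_ms, retained * 100.0);
	}
	/* 1ms -> ~97.9%, 4ms -> ~91.7%, 16ms -> ~70.7%, 64ms -> 25.0% */
	return 0;
}

On this model, a group of tasks that each sleep only a millisecond at a
time on IO looks almost fully loaded to cpu_load() even while most of
them are asleep, which is exactly the behaviour Mel is probing at.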

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e09fe12b..9ac2264 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5385,6 +5385,11 @@ static unsigned long cpu_runnable_load(struct rq *rq)
 	return cfs_rq_runnable_load_avg(&rq->cfs);
 }
 
+static unsigned long cpu_load(struct rq *rq)
+{
+	return cfs_rq_load_avg(&rq->cfs);
+}
+
 static unsigned long capacity_of(int cpu)
 {
 	return cpu_rq(cpu)->cpu_capacity;
@@ -8059,7 +8064,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		if ((env->flags & LBF_NOHZ_STATS) && update_nohz_stats(rq, false))
 			env->flags |= LBF_NOHZ_AGAIN;
 
-		sgs->group_load += cpu_runnable_load(rq);
+		sgs->group_load += cpu_load(rq);
 		sgs->group_util += cpu_util(i);
 		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
 
@@ -8517,7 +8522,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	init_sd_lb_stats(&sds);
 
 	/*
-	 * Compute the various statistics relavent for load balancing at
+	 * Compute the various statistics relevant for load balancing at
 	 * this level.
 	 */
 	update_sd_lb_stats(env, &sds);
@@ -8677,11 +8682,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		switch (env->migration_type) {
 		case migrate_load:
 			/*
-			 * When comparing with load imbalance, use
-			 * cpu_runnable_load() which is not scaled with the CPU
-			 * capacity.
+			 * When comparing with load imbalance, use cpu_load()
+			 * which is not scaled with the CPU capacity.
 			 */
-			load = cpu_runnable_load(rq);
+			load = cpu_load(rq);
 
 			if (nr_running == 1 && load > env->imbalance &&
 			    !check_cpu_capacity(rq, env->sd))
@@ -8689,10 +8693,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 
 			/*
 			 * For the load comparisons with the other CPU's,
-			 * consider the cpu_runnable_load() scaled with the CPU
-			 * capacity, so that the load can be moved away from
-			 * the CPU that is potentially running at a lower
-			 * capacity.
+			 * consider the cpu_load() scaled with the CPU
+			 * capacity, so that the load can be moved away
+			 * from the CPU that is potentially running at a
+			 * lower capacity.
 			 *
 			 * Thus we're looking for max(load_i / capacity_i),
 			 * crosswise multiplication to rid ourselves of the