
[v4,1/3] sched/fair: fix rounding issue for asym packing

Message ID 1547747049-6320-2-git-send-email-vincent.guittot@linaro.org
State New
Series sched/fair: some fixes for asym_packing

Commit Message

Vincent Guittot Jan. 17, 2019, 5:44 p.m. UTC
When check_asym_packing() is triggered, the imbalance is set to:
  busiest_stat.avg_load * busiest_stat.group_capacity / SCHED_CAPACITY_SCALE
But busiest_stat.avg_load equals:
  sgs->group_load * SCHED_CAPACITY_SCALE / sgs->group_capacity
These divisions can introduce a rounding error that makes the imbalance
slightly lower than the weighted load of the cfs_rq. But this is enough to
skip the rq in find_busiest_queue() and prevent the asym migration from
happening.
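
For illustration, here is a minimal userspace sketch of the two divisions
(the load and capacity values are made up; SCHED_CAPACITY_SCALE and
DIV_ROUND_CLOSEST mirror their kernel definitions):

  /* Standalone sketch, not kernel code: shows how scaling the load up
   * and back down with integer divisions can lose a unit. */
  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024UL
  #define DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))

  int main(void)
  {
          unsigned long group_load = 300;     /* hypothetical values */
          unsigned long group_capacity = 700;

          /* as in update_sg_lb_stats(): truncating division */
          unsigned long avg_load =
                  group_load * SCHED_CAPACITY_SCALE / group_capacity;  /* 438 */

          /* as in the old check_asym_packing(): scale back down */
          unsigned long imbalance =
                  DIV_ROUND_CLOSEST(avg_load * group_capacity,
                                    SCHED_CAPACITY_SCALE);             /* 299 */

          printf("group_load=%lu imbalance=%lu\n", group_load, imbalance);
          return 0;
  }

With these numbers the computed imbalance (299) lands just below the group
load (300), which is exactly the margin by which find_busiest_queue() ends
up skipping the rq.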

Directly set the imbalance to busiest's sgs->group_load to avoid the
rounding error.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>

---
 kernel/sched/fair.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

-- 
2.7.4

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ca46964..1e4bed4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8476,9 +8476,7 @@ static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
 	if (sched_asym_prefer(busiest_cpu, env->dst_cpu))
 		return 0;
 
-	env->imbalance = DIV_ROUND_CLOSEST(
-		sds->busiest_stat.avg_load * sds->busiest_stat.group_capacity,
-		SCHED_CAPACITY_SCALE);
+	env->imbalance = sds->busiest_stat.group_load;
 
 	return 1;
 }