
[v3,08/10] sched/fair: use utilization to select misfit task

Message ID 1568878421-12301-9-git-send-email-vincent.guittot@linaro.org
State New
Series sched/fair: rework the CFS load balance

Commit Message

Vincent Guittot Sept. 19, 2019, 7:33 a.m. UTC
Utilization is used to detect a misfit task, but load is then used to
select the task on the CPU, which can lead to selecting a small task
with a high weight instead of the task that triggered the misfit
migration.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Valentin Schneider <valentin.schneider@arm.com>

---
 kernel/sched/fair.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

-- 
2.7.4
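
For context: task_fits_capacity(), relied on by the new condition, boiled
down in fair.c around this time to fits_capacity(task_util_est(p),
capacity), i.e. the task's utilization estimate must stay below roughly
80% of the CPU's capacity. A minimal standalone sketch of that margin
check; the capacity and utilization values below are made up for
illustration:

---
#include <stdio.h>

/*
 * Same ~80% margin as fits_capacity() in kernel/sched/fair.c of this
 * era; the capacity/utilization numbers below are hypothetical.
 */
#define fits_capacity(cap, max) ((cap) * 1280 < (max) * 1024)

int main(void)
{
	unsigned long little_cap = 446; /* e.g. a LITTLE CPU */
	unsigned long big_cap = 1024;   /* e.g. a big CPU */
	unsigned long util = 400;       /* task utilization estimate */

	printf("fits LITTLE: %d\n", fits_capacity(util, little_cap)); /* 0 */
	printf("fits big:    %d\n", fits_capacity(util, big_cap));    /* 1 */
	return 0;
}
---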

Comments

Valentin Schneider Oct. 1, 2019, 5:12 p.m. UTC | #1
On 19/09/2019 08:33, Vincent Guittot wrote:
> Utilization is used to detect a misfit task, but load is then used to
> select the task on the CPU, which can lead to selecting a small task
> with a high weight instead of the task that triggered the misfit
> migration.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> Acked-by: Valentin Schneider <valentin.schneider@arm.com>
> ---
>  kernel/sched/fair.c | 10 ++--------
>  1 file changed, 2 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index a7c8ee6..acca869 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7429,14 +7429,8 @@ static int detach_tasks(struct lb_env *env)
>  			break;
>  
>  		case migrate_misfit:
> -			load = task_h_load(p);
> -
> -			/*
> -			 * utilization of misfit task might decrease a bit
> -			 * since it has been recorded. Be conservative in the
> -			 * condition.
> -			 */
> -			if (load < env->imbalance)
> +			/* This is not a misfit task */
> +			if (task_fits_capacity(p, capacity_of(env->src_cpu)))
>  				goto next;
>  
>  			env->imbalance = 0;
> 

Following my comment in [1], if you can't squash that into patch 04, then
perhaps you could add it to this change:

---
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1fac444a4831..d09ce304161d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8343,7 +8343,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
 	if (busiest->group_type == group_misfit_task) {
 		/* Set imbalance to allow misfit task to be balanced. */
 		env->balance_type = migrate_misfit;
-		env->imbalance = busiest->group_misfit_task_load;
+		env->imbalance = 1;
 		return;
 	}
 
---

Reason being we don't care about the load anymore; we just want a nonzero
imbalance value, so we might as well assign something static.

[1]: https://lore.kernel.org/r/74bb33d7-3ba4-aabe-c7a2-3865d5759281@arm.com
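
To see why any nonzero value would do here: after this patch, the
migrate_misfit case in detach_tasks() only needs env->imbalance to still
be positive when the tasks are scanned; tasks that fit the source CPU are
skipped, and the imbalance is zeroed as soon as one genuine misfit task is
detached. A toy, compilable model of that control flow (names and
structure are simplified for illustration, not the kernel code):

---
#include <stdio.h>
#include <stdbool.h>

/* Toy model of the migrate_misfit path in detach_tasks(). */
struct toy_env { long imbalance; };

static bool task_fits(long util, long capacity)
{
	return util * 1280 < capacity * 1024; /* ~80% fitness margin */
}

static int detach_misfit(struct toy_env *env, const long *utils, int n,
			 long src_cap)
{
	int detached = 0;

	for (int i = 0; i < n && env->imbalance > 0; i++) {
		if (task_fits(utils[i], src_cap))
			continue;	/* "goto next": not a misfit task */
		env->imbalance = 0;	/* one misfit task is enough */
		detached++;
	}
	return detached;
}

int main(void)
{
	long utils[] = { 100, 900, 150 };	 /* hypothetical utilizations */
	struct toy_env env = { .imbalance = 1 }; /* any nonzero value works */

	printf("detached %d task(s)\n", detach_misfit(&env, utils, 3, 446));
	return 0;
}
---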

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a7c8ee6..acca869 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7429,14 +7429,8 @@ static int detach_tasks(struct lb_env *env)
 			break;
 
 		case migrate_misfit:
-			load = task_h_load(p);
-
-			/*
-			 * utilization of misfit task might decrease a bit
-			 * since it has been recorded. Be conservative in the
-			 * condition.
-			 */
-			if (load < env->imbalance)
+			/* This is not a misfit task */
+			if (task_fits_capacity(p, capacity_of(env->src_cpu)))
 				goto next;
 
 			env->imbalance = 0;