
[v3,1/3] sched/fair: fix rounding issue for asym packing

Message ID 1545292547-18770-2-git-send-email-vincent.guittot@linaro.org
State Superseded
Series sched/fair: some fixes for asym_packing

Commit Message

Vincent Guittot Dec. 20, 2018, 7:55 a.m. UTC
When check_asym_packing() is triggered, the imbalance is set to:
  busiest_stat.avg_load * busiest_stat.group_capacity / SCHED_CAPACITY_SCALE
But busiest_stat.avg_load equals
  sgs->group_load * SCHED_CAPACITY_SCALE / sgs->group_capacity
These divisions can introduce a rounding error that makes the imbalance
slightly lower than the weighted load of the cfs_rq, which is enough for
find_busiest_queue() to skip the rq and prevent the asym migration from
happening.
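
For illustration only, with made-up numbers (group_load = 700,
group_capacity = 1020), a minimal userspace sketch of the two computations
shows the lost unit of load; the DIV_ROUND_CLOSEST() below is a simplified
unsigned-only stand-in for the kernel macro:

  #include <stdio.h>

  #define SCHED_CAPACITY_SCALE 1024UL
  /* simplified unsigned-only variant of the kernel's DIV_ROUND_CLOSEST() */
  #define DIV_ROUND_CLOSEST(x, divisor) (((x) + (divisor) / 2) / (divisor))

  int main(void)
  {
          unsigned long group_load = 700;      /* made-up weighted load */
          unsigned long group_capacity = 1020; /* made-up group capacity */

          /* as in update_sg_lb_stats(): 702.74 truncates down to 702 */
          unsigned long avg_load = group_load * SCHED_CAPACITY_SCALE
                                   / group_capacity;

          /* check_asym_packing() before this patch: yields 699 */
          unsigned long imbalance = DIV_ROUND_CLOSEST(
                  avg_load * group_capacity, SCHED_CAPACITY_SCALE);

          /* imbalance=699 < group_load=700: find_busiest_queue() skips the rq */
          printf("imbalance=%lu group_load=%lu\n", imbalance, group_load);
          return 0;
  }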

Directly set imbalance to sgs->group_load to remove the rounding error.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

---
 kernel/sched/fair.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

-- 
2.7.4

Comments

Valentin Schneider Dec. 20, 2018, 11:16 a.m. UTC | #1
On 20/12/2018 07:55, Vincent Guittot wrote:
> When check_asym_packing() is triggered, the imbalance is set to:
>   busiest_stat.avg_load * busiest_stat.group_capacity / SCHED_CAPACITY_SCALE
> But busiest_stat.avg_load equals
>   sgs->group_load * SCHED_CAPACITY_SCALE / sgs->group_capacity
> These divisions can introduce a rounding error that makes the imbalance
> slightly lower than the weighted load of the cfs_rq, which is enough for
> find_busiest_queue() to skip the rq and prevent the asym migration from
> happening.
>
> Directly set imbalance to sgs->group_load to remove the rounding error.
                            ^^^^^^^^^^^^^^^
I see where that's coming from, but using 'sgs' here is a tad confusing since
'sds->busiest_stat' is what's actually used.

Maybe just something like 'the busiest's sgs->group_load' would be good
enough to make things explicit.

>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/fair.c | 4 +---
>  1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index ca46964..9b31247 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8476,9 +8476,7 @@ static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
>  	if (sched_asym_prefer(busiest_cpu, env->dst_cpu))
>  		return 0;
>
> -	env->imbalance = DIV_ROUND_CLOSEST(
> -		sds->busiest_stat.avg_load * sds->busiest_stat.group_capacity,
> -		SCHED_CAPACITY_SCALE);
> +	env->imbalance = sds->busiest_stat.avg_load;

That should be group_load, not avg_load. With that fixed:

Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
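
For reference, the assignment with that fix applied would presumably read:

	env->imbalance = sds->busiest_stat.group_load;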


>
>  	return 1;
>  }
>
Vincent Guittot Dec. 20, 2018, 2:22 p.m. UTC | #2
On Thu, 20 Dec 2018 at 12:16, Valentin Schneider
<valentin.schneider@arm.com> wrote:
>
> On 20/12/2018 07:55, Vincent Guittot wrote:
> > When check_asym_packing() is triggered, the imbalance is set to:
> >   busiest_stat.avg_load * busiest_stat.group_capacity / SCHED_CAPACITY_SCALE
> > But busiest_stat.avg_load equals
> >   sgs->group_load * SCHED_CAPACITY_SCALE / sgs->group_capacity
> > These divisions can introduce a rounding error that makes the imbalance
> > slightly lower than the weighted load of the cfs_rq, which is enough for
> > find_busiest_queue() to skip the rq and prevent the asym migration from
> > happening.
> >
> > Directly set imbalance to sgs->group_load to remove the rounding error.
>                             ^^^^^^^^^^^^^^^
> I see where that's coming from, but using 'sgs' here is a tad confusing since
> 'sds->busiest_stat' is what's actually used.
>
> Maybe just something like 'the busiest's sgs->group_load' would be good
> enough to make things explicit.
>
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > ---
> >  kernel/sched/fair.c | 4 +---
> >  1 file changed, 1 insertion(+), 3 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index ca46964..9b31247 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -8476,9 +8476,7 @@ static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
> >       if (sched_asym_prefer(busiest_cpu, env->dst_cpu))
> >               return 0;
> >
> > -     env->imbalance = DIV_ROUND_CLOSEST(
> > -             sds->busiest_stat.avg_load * sds->busiest_stat.group_capacity,
> > -             SCHED_CAPACITY_SCALE);
> > +     env->imbalance = sds->busiest_stat.avg_load;
>
> That should be group_load, not avg_load. With that fixed:

ah yes... too quick at changing it

>
> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
>
> >
> >       return 1;
> >  }
> >

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ca46964..9b31247 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8476,9 +8476,7 @@ static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
 	if (sched_asym_prefer(busiest_cpu, env->dst_cpu))
 		return 0;
 
-	env->imbalance = DIV_ROUND_CLOSEST(
-		sds->busiest_stat.avg_load * sds->busiest_stat.group_capacity,
-		SCHED_CAPACITY_SCALE);
+	env->imbalance = sds->busiest_stat.avg_load;
 
 	return 1;
 }