diff mbox series

[v2,2/3] sched/fair: trigger asym_packing during idle load balance

Message ID 1544803317-7610-3-git-send-email-vincent.guittot@linaro.org
State Superseded
Series sched/fair: some fixes for asym_packing

Commit Message

Vincent Guittot Dec. 14, 2018, 4:01 p.m. UTC
Newly idle load balance is not always triggered when a cpu becomes idle.
This prevents the scheduler from getting a chance to migrate tasks for asym packing.
Enable active migration because of asym packing during idle load balance too.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

---
 kernel/sched/fair.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
2.7.4

Comments

Valentin Schneider Dec. 17, 2018, 4:59 p.m. UTC | #1
Hi Vincent,

On 14/12/2018 16:01, Vincent Guittot wrote:
> newly idle load balance is not always triggered when a cpu becomes idle.
> This prevent the scheduler to get a chance to migrate task for asym packing.
> Enable active migration because of asym packing during idle load balance too.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/fair.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c215f7a..9591e7a 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8861,7 +8861,7 @@ static int need_active_balance(struct lb_env *env)
>  {
>  	struct sched_domain *sd = env->sd;
>  
> -	if (env->idle == CPU_NEWLY_IDLE) {
> +	if (env->idle != CPU_NOT_IDLE) {
>  
>  		/*
>  		 * ASYM_PACKING needs to force migrate tasks from busy but
> 

That change looks fine. However, you're mentioning newidle load_balance()
not being triggered - you'd want to set root_domain->overload for any
newidle pull to happen, probably with something like this:

-----8<-----
@@ -8398,6 +8408,9 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
                sg = sg->next;
        } while (sg != env->sd->groups);
 
+       if (check_asym_packing(env, sds))
+               sg_status |= SG_OVERLOAD;
+
 #ifdef CONFIG_NO_HZ_COMMON
        if ((env->flags & LBF_NOHZ_AGAIN) &&
            cpumask_subset(nohz.idle_cpus_mask, sched_domain_span(env->sd))) {
----->8-----

It's similar to what is done for misfit, although that's yet another
'twisted' use of that flag which we might want to rename (I suggested
something like 'need_idle_balance' a while back but it wasn't really
popular).
Vincent Guittot Dec. 18, 2018, 8:17 a.m. UTC | #2
On Mon, 17 Dec 2018 at 17:59, Valentin Schneider
<valentin.schneider@arm.com> wrote:
>
> Hi Vincent,
>
> On 14/12/2018 16:01, Vincent Guittot wrote:
> > newly idle load balance is not always triggered when a cpu becomes idle.
> > This prevent the scheduler to get a chance to migrate task for asym packing.
> > Enable active migration because of asym packing during idle load balance too.
> >
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > ---
> >  kernel/sched/fair.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index c215f7a..9591e7a 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -8861,7 +8861,7 @@ static int need_active_balance(struct lb_env *env)
> >  {
> >       struct sched_domain *sd = env->sd;
> >
> > -     if (env->idle == CPU_NEWLY_IDLE) {
> > +     if (env->idle != CPU_NOT_IDLE) {
> >
> >               /*
> >                * ASYM_PACKING needs to force migrate tasks from busy but
> >
>
> That change looks fine. However, you're mentioning newidle load_balance()
> not being triggered - you'd want to set root_domain->overload for any
> newidle pull to happen, probably with something like this:

It's not needed in this case because the dst cpu is already the target
cpu and the migration will happen during this idle load balance.
Setting root_domain->overload is useful only if you want another cpu
to pull the task during another coming load_balance (newly or normal
idle ones) which is not the case here.

>
> -----8<-----
> @@ -8398,6 +8408,9 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
>                 sg = sg->next;
>         } while (sg != env->sd->groups);
>
> +       if (check_asym_packing(env, sds))
> +               sg_status |= SG_OVERLOAD;
> +
>  #ifdef CONFIG_NO_HZ_COMMON
>         if ((env->flags & LBF_NOHZ_AGAIN) &&
>             cpumask_subset(nohz.idle_cpus_mask, sched_domain_span(env->sd))) {
> ----->8-----
>
> It's similar to what is done for misfit, although that's yet another
> 'twisted' use of that flag which we might want to rename (I suggested
> something like 'need_idle_balance' a while back but it wasn't really
> popular).
Valentin Schneider Dec. 18, 2018, noon UTC | #3
On 18/12/2018 08:17, Vincent Guittot wrote:
[...]
>> That change looks fine. However, you're mentioning newidle load_balance()
>> not being triggered - you'd want to set root_domain->overload for any
>> newidle pull to happen, probably with something like this:
> 
> It's not needed in this case because the dst cpu is already the target
> cpu and the migration will happen during this idle load balance.
> Setting root_domain->overload is useful only if you want another cpu
> to pull the task during another coming load_balance (newly or normal
> idle ones) which is not the case here.
> 

Right, I got the check wrong. The thing is, if you want to go through a
newidle balance, you need to have that overload flag raised beforehand.

I was about to draw a diagram but I kinda already did in the log of
757ffdd705ee ("sched/fair: Set rq->rd->overload when misfit").

So you would first need to raise the flag, e.g. when updating the lb stats
on a low-priority CPU, and when a higher-priority CPU goes newly idle,
the flag is raised, gates to idle_balance() are opened, and a newidle pull
from low-priority to high-priority can happen.

>>
>> -----8<-----
>> @@ -8398,6 +8408,9 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
>>                 sg = sg->next;
>>         } while (sg != env->sd->groups);
>>
>> +       if (check_asym_packing(env, sds))
>> +               sg_status |= SG_OVERLOAD;
>> +
>>  #ifdef CONFIG_NO_HZ_COMMON
>>         if ((env->flags & LBF_NOHZ_AGAIN) &&
>>             cpumask_subset(nohz.idle_cpus_mask, sched_domain_span(env->sd))) {
>> ----->8-----
>>
>> It's similar to what is done for misfit, although that's yet another
>> 'twisted' use of that flag which we might want to rename (I suggested
>> something like 'need_idle_balance' a while back but it wasn't really
>> popular).

Patch

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c215f7a..9591e7a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8861,7 +8861,7 @@ static int need_active_balance(struct lb_env *env)
 {
 	struct sched_domain *sd = env->sd;
 
-	if (env->idle == CPU_NEWLY_IDLE) {
+	if (env->idle != CPU_NOT_IDLE) {
 
 		/*
 		 * ASYM_PACKING needs to force migrate tasks from busy but