[v2,0/8] sched/fair: rework the CFS load balance

Message ID 1564670424-26023-1-git-send-email-vincent.guittot@linaro.org

Message

Vincent Guittot Aug. 1, 2019, 2:40 p.m. UTC
Several wrong task placements have been reported with the current load
balance algorithm, but their fixes are not always straightforward and
end up using biased values to force migrations. A cleanup and rework
of the load balance will help handle such use cases and enable fine-grained
tuning of the scheduler's behavior for other cases.

Patch 1 has already been sent separately; it only consolidates the asym
packing policy in one place and helps the review of the changes in load_balance.

Patch 2 renames sum_nr_running to sum_h_nr_running in the statistics.

Patch 3 removes a meaningless imbalance computation to make the review of
patch 4 easier.

Patch 4 reworks the load_balance algorithm and fixes some wrong task
placements while trying to stay conservative.

Patch 5 adds the sum of nr_running to monitor non-CFS tasks and takes that
into account when pulling tasks.

Patch 6 replaces runnable_load with load now that the metric is only used
when overloaded.

Patch 7 improves the spread of tasks at the 1st scheduling level.

Patch 8 uses utilization instead of load in all steps of the misfit-task
path.
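
For readers skimming the series, the sketch below shows the general shape the
rework aims for: load_balance() classifies what should be migrated (an amount
of load, an amount of utilization, or a number of tasks) and computes
env->imbalance in the matching unit. This is an illustrative sketch only: of
these names, just env->balance_type and migrate_task are visible in the hunks
quoted later in this thread; the other names are assumptions, not the series'
exact code.

enum balance_type_sketch {
	migrate_load,	/* move a given amount of load (the classic case)      */
	migrate_util,	/* move a given amount of utilization                  */
	migrate_task,	/* move a number of tasks, e.g. to balance nr_running  */
};

struct lb_env_sketch {
	enum balance_type_sketch balance_type;	/* what kind of imbalance to fix              */
	unsigned long imbalance;		/* quantity, in the unit balance_type implies */
};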

Some benchmark results based on 8 iterations of each test:
- small arm64 dual quad-core system

           tip/sched/core        w/ this patchset    improvement
schedpipe      54981 +/-0.36%        55459 +/-0.31%   (+0.97%)

hackbench
1 groups       0.906 +/-2.34%        0.906 +/-2.88%   (+0.06%)

- large arm64 2-node / 224-core system

           tip/sched/core        w/ this patchset    improvement
schedpipe     125665 +/-0.61%       125455 +/-0.62%   (-0.17%)

hackbench -l (256000/#grp) -g #grp
1 groups      15.263 +/-3.53%       13.776 +/-3.30%   (+9.74%)
4 groups       5.852 +/-0.57%        5.340 +/-8.03%   (+8.75%)
16 groups      3.097 +/-1.08%        3.246 +/-0.97%   (-4.81%)
32 groups      2.882 +/-1.04%        2.845 +/-1.02%   (+1.29%)
64 groups      2.809 +/-1.30%        2.712 +/-1.17%   (+3.45%)
128 groups     3.129 +/-9.74%        2.813 +/-6.22%   (+9.11%)
256 groups     3.559 +/-11.07%       3.020 +/-1.75%  (+15.15%)

dbench
1 groups     330.897 +/-0.27%      330.612 +/-0.77%   (-0.09%)
4 groups     932.922 +/-0.54%      941.817 +/-1.10%   (+0.95%)
16 groups   1932.346 +/-1.37%     1962.944 +/-0.62%   (+1.58%)
32 groups   2251.079 +/-7.93%     2418.531 +/-0.69%   (+7.44%)
64 groups   2104.114 +/-9.67%     2348.698 +/-11.24% (+11.62%)
128 groups  2093.756 +/-7.26%     2278.156 +/-9.74%   (+8.81%)
256 groups  1216.736 +/-2.46%     1665.774 +/-4.68%  (+36.91%)

tip/sched/core sha1:
  a1dc0446d649 ('sched/core: Silence a warning in sched_init()')

Changes since v1:
- Fixed some bugs
- Used a switch/case statement
- Renamed env->src_grp_type to env->balance_type
- Split patches into smaller ones
- Added comments

Vincent Guittot (8):
  sched/fair: clean up asym packing
  sched/fair: rename sum_nr_running to sum_h_nr_running
  sched/fair: remove meaningless imbalance calculation
  sched/fair: rework load_balance
  sched/fair: use rq->nr_running when balancing load
  sched/fair: use load instead of runnable load
  sched/fair: evenly spread tasks when not overloaded
  sched/fair: use utilization to select misfit task

 kernel/sched/fair.c  | 769 ++++++++++++++++++++++++++++-----------------------
 kernel/sched/sched.h |   2 +-
 2 files changed, 419 insertions(+), 352 deletions(-)

-- 
2.7.4

Comments

Phil Auld Aug. 29, 2019, 7:23 p.m. UTC | #1
On Thu, Aug 01, 2019 at 04:40:16PM +0200 Vincent Guittot wrote:
> Several wrong task placement have been raised with the current load
> balance algorithm but their fixes are not always straight forward and
> end up with using biased values to force migrations. A cleanup and rework
> of the load balance will help to handle such UCs and enable to fine grain
> the behavior of the scheduler for other cases.
>
> [...]

I keep expecting a v3 so I have not dug into all the patches in detail. However, I've 
been working with them from Vincent's tree while they were under development so I thought 
I'd add some results.

The workload is a test our perf team came up with to illustrate the issues we were seeing
with imbalance in the presence of group scheduling. 

On a 4-NUMA-node x 20-CPU system (SMT on) we run a 76-thread lu.C benchmark from the NAS Parallel
Benchmarks suite and, at the same time, 2 CPU-burning stress processes.  The GROUP test puts the
benchmark and the stress processes each in its own cgroup.  The NORMAL case puts them all
in the first cgroup.  The results show first the average number of threads of each type
running on each of the NUMA nodes, based on sampling taken during the run.  This is followed
by the lu.C benchmark results. There are 3 runs of GROUP and 2 runs of NORMAL shown.

Before (linux-5.3-rc1+  @  a1dc0446d649)

lu.C.x_76_GROUP_1.stress.ps.numa.hist   Average    0.00  1.00  1.00
lu.C.x_76_GROUP_2.stress.ps.numa.hist   Average    0.00  1.00  1.00
lu.C.x_76_GROUP_3.stress.ps.numa.hist   Average    0.00  1.00  1.00
lu.C.x_76_NORMAL_1.stress.ps.numa.hist  Average    1.15  0.23  0.00  0.62
lu.C.x_76_NORMAL_2.stress.ps.numa.hist  Average    1.67  0.00  0.00  0.33

lu.C.x_76_GROUP_1.ps.numa.hist   Average    30.45  6.95  4.52  34.08
lu.C.x_76_GROUP_2.ps.numa.hist   Average    32.33  8.94  9.21  25.52
lu.C.x_76_GROUP_3.ps.numa.hist   Average    30.45  8.91  12.09  24.55
lu.C.x_76_NORMAL_1.ps.numa.hist  Average    18.54  19.23  19.69  18.54
lu.C.x_76_NORMAL_2.ps.numa.hist  Average    17.25  19.83  20.00  18.92

============76_GROUP========Mop/s===================================
min	q1	median	q3	max
2119.92	2418.1	2716.28	3147.82	3579.36
============76_GROUP========time====================================
min	q1	median	q3	max
569.65	660.155	750.66	856.245	961.83
============76_NORMAL========Mop/s===================================
min	q1	median	q3	max
30424.5	31486.4	31486.4	31486.4	32548.4
============76_NORMAL========time====================================
min	q1	median	q3	max
62.65	64.835	64.835	64.835	67.02


After (linux-5.3-rc1+  @  a1dc0446d649 + this v2 series pulled from 
Vincent's git on ~8/15)

lu.C.x_76_GROUP_1.stress.ps.numa.hist   Average    0.36  1.00  0.64
lu.C.x_76_GROUP_2.stress.ps.numa.hist   Average    1.00  1.00
lu.C.x_76_GROUP_3.stress.ps.numa.hist   Average    1.00  1.00
lu.C.x_76_NORMAL_1.stress.ps.numa.hist  Average    0.23  0.15  0.31  1.31
lu.C.x_76_NORMAL_2.stress.ps.numa.hist  Average    1.00  0.00  0.00  1.00

lu.C.x_76_GROUP_1.ps.numa.hist   Average    18.91  18.36  18.91  19.82
lu.C.x_76_GROUP_2.ps.numa.hist   Average    18.36  18.00  19.91  19.73
lu.C.x_76_GROUP_3.ps.numa.hist   Average    18.17  18.42  19.25  20.17
lu.C.x_76_NORMAL_1.ps.numa.hist  Average    19.08  20.00  18.62  18.31
lu.C.x_76_NORMAL_2.ps.numa.hist  Average    18.09  19.91  19.18  18.82

============76_GROUP========Mop/s===================================
min	q1	median	q3	max
32304.1	33176	34047.9	34166.8	34285.7
============76_GROUP========time====================================
min	q1	median	q3	max
59.47	59.68	59.89	61.505	63.12
============76_NORMAL========Mop/s===================================
min	q1	median	q3	max
29825.5	32454	32454	32454	35082.5
============76_NORMAL========time====================================
min	q1	median	q3	max
58.12	63.24	63.24	63.24	68.36


I had initially tracked this down to two issues. The first was picking the wrong
group in find_busiest_group() due to using the average load. The second was in
fix_small_imbalance(). The "load" of the lu.C tasks was so low it often failed
to move anything even when it did find a group that was overloaded (nr_running >
width). I have two small patches which fix this but since Vincent was embarking
on a rework which also addressed this I dropped them.

We've also run a series of performance tests we use to check for regressions and 
did not find any bad results on our workloads and systems.

So...

Tested-by: Phil Auld <pauld@redhat.com>



Cheers,
Phil
--
Vincent Guittot Aug. 30, 2019, 6:46 a.m. UTC | #2
Hi Phil,

On Thu, 29 Aug 2019 at 21:23, Phil Auld <pauld@redhat.com> wrote:
>
> On Thu, Aug 01, 2019 at 04:40:16PM +0200 Vincent Guittot wrote:
> > Several wrong task placement have been raised with the current load
> >
> > --
> > 2.7.4
> >
>
> I keep expecting a v3 so I have not dug into all the patches in detail. However, I've

v3 is under preparation

> been working with them from Vincent's tree while they were under development so I thought
> I'd add some results.

Yes, thanks for your help.

> The workload is a test our perf team came up with to illustrate the issues we were seeing
> with imbalance in the presence of group scheduling.
>
> [...]
>
> We've also run a series of performance tests we use to check for regressions and
> did not find any bad results on our workloads and systems.
>
> So...
>
> Tested-by: Phil Auld <pauld@redhat.com>

Thanks for testing

Vincent

> Cheers,
> Phil
> --
Vincent Guittot Sept. 2, 2019, 1:07 p.m. UTC | #3
Hi Hillf,

Sorry for the late reply.
I noticed that I didn't answer your question while preparing v3.

On Fri, 9 Aug 2019 at 07:21, Hillf Danton <hdanton@sina.com> wrote:
>
> On Thu,  1 Aug 2019 16:40:21 +0200 Vincent Guittot wrote:
> >
> > cfs load_balance only takes care of CFS tasks whereas CPUs can be used by
> > other scheduling class. Typically, a CFS task preempted by a RT or deadline
> > task will not get a chance to be pulled on another CPU because the
> > load_balance doesn't take into account tasks from classes.
>
> We can add something accordingly in RT to push cfs tasks to another cpu
> in this direction if the pulling above makes some sense missed long.

The RT class doesn't and can't touch CFS tasks, but the idle load balance (ilb)
will be kicked to check if another CPU can pull the CFS task.

> I doubt we can as we can not do too much about RT tasks on any cpu.
> Nor is busiest cpu selected for load balancing based on a fifo cpuhog.

This patch takes all task classes into account when checking the state
of a group and when trying to balance the number of tasks, but of
course we can only detach and move the CFS tasks at the end.

So if we have 1 RT task and 1 CFS task competing for the same CPU while
there is an idle CPU, the CFS task will be pulled during the
load_balance of the idle CPU, whereas that was not the case before.
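
To make that concrete, here is a small standalone sketch (plain userspace C,
not kernel code) of the example above. The counter values are just the
scenario's numbers, the shift mirrors the imbalance computation in the hunk
quoted below, and the variable names are illustrative.

#include <stdio.h>

int main(void)
{
	/*
	 * Hypothetical 2-CPU group: CPU0 runs 1 RT task + 1 CFS task,
	 * the local group (CPU1) is idle, so both of its counters are 0.
	 */
	unsigned int busiest_sum_h_nr_running = 1;	/* CFS tasks only */
	unsigned int busiest_sum_nr_running   = 2;	/* all classes, added by this patch */
	unsigned int local_sum_nr_running     = 0;	/* idle CPU */

	/* Old computation: (1 - 0) >> 1 == 0, so nothing is pulled. */
	printf("imbalance before: %u task(s)\n",
	       (busiest_sum_h_nr_running - local_sum_nr_running) >> 1);

	/* New computation: (2 - 0) >> 1 == 1, so the idle CPU pulls the CFS task. */
	printf("imbalance after:  %u task(s)\n",
	       (busiest_sum_nr_running - local_sum_nr_running) >> 1);

	return 0;
}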

>
> > Add sum of nr_running in the statistics and use it to detect such
> > situation.
> >
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > ---
> >  kernel/sched/fair.c | 11 +++++++----
> >  1 file changed, 7 insertions(+), 4 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index a8681c3..f05f1ad 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7774,6 +7774,7 @@ struct sg_lb_stats {
> >       unsigned long group_load; /* Total load over the CPUs of the group */
> >       unsigned long group_capacity;
> >       unsigned long group_util; /* Total utilization of the group */
> > +     unsigned int sum_nr_running; /* Nr tasks running in the group */
> >       unsigned int sum_h_nr_running; /* Nr tasks running in the group */
>
> A different comment is appreciated.

ok

>
> >       unsigned int idle_cpus;
> >       unsigned int group_weight;
> > @@ -8007,7 +8008,7 @@ static inline int sg_imbalanced(struct sched_group *group)
> >  static inline bool
> >  group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
> >  {
> > -     if (sgs->sum_h_nr_running < sgs->group_weight)
> > +     if (sgs->sum_nr_running < sgs->group_weight)
> >               return true;
> >
> >       if ((sgs->group_capacity * 100) >
> > @@ -8028,7 +8029,7 @@ group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
> >  static inline bool
> >  group_is_overloaded(struct lb_env *env, struct sg_lb_stats *sgs)
> >  {
> > -     if (sgs->sum_h_nr_running <= sgs->group_weight)
> > +     if (sgs->sum_nr_running <= sgs->group_weight)
> >               return false;
> >
> >       if ((sgs->group_capacity * 100) <
> > @@ -8132,6 +8133,8 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> >               sgs->sum_h_nr_running += rq->cfs.h_nr_running;
> >
> >               nr_running = rq->nr_running;
> > +             sgs->sum_nr_running += nr_running;
> > +
> >               if (nr_running > 1)
> >                       *sg_status |= SG_OVERLOAD;
> >
> > @@ -8480,7 +8483,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
> >                        * groups.
> >                        */
> >                       env->balance_type = migrate_task;
> > -                     env->imbalance = (busiest->sum_h_nr_running - local->sum_h_nr_running) >> 1;
> > +                     env->imbalance = (busiest->sum_nr_running - local->sum_nr_running) >> 1;
>
> Can we detach RR tasks?
>
> >                       return;
> >               }
> >
> > @@ -8643,7 +8646,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
> >
> >       /* Try to move all excess tasks to child's sibling domain */
> >       if (sds.prefer_sibling && local->group_type == group_has_spare &&
> > -         busiest->sum_h_nr_running > local->sum_h_nr_running + 1)
> > +         busiest->sum_nr_running > local->sum_nr_running + 1)
> >               goto force_balance;
> >
> >       if (busiest->group_type != group_overloaded &&
> > --
> > 2.7.4
>