[PATCH v3 0/10] sched/fair: rework the CFS load balance

From: Vincent Guittot
Date: Thu Sep 19 2019 - 03:34:00 EST


Several wrong task placements have been reported with the current load
balance algorithm, but their fixes are not always straightforward and
end up using biased values to force migrations. A cleanup and rework
of the load balance will help handle such use cases and enable fine-grained
tuning of the scheduler's behavior for other cases.

Patch 1 has already been sent separately; it only consolidates the asym packing
policy in one place and helps the review of the changes in load_balance().

Patch 2 renames sum_nr_running to sum_h_nr_running in the load balance stats.

Patch 3 removes a meaningless imbalance computation to make the review of
patch 4 easier.

Patch 4 reworks the load_balance() algorithm and fixes some wrong task
placements while trying to stay conservative.
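
To give a flavour of the rework, the new algorithm classifies each scheduling
group before deciding whether and what to migrate. The stand-alone sketch below
only illustrates that idea: the state names are loosely modelled on the series
(which distinguishes more cases, e.g. misfit and asym packing) and the
classification rule is deliberately simplified.

#include <stdio.h>

/* Simplified group states, loosely modelled on the series' group types. */
enum group_state { HAS_SPARE, FULLY_BUSY, OVERLOADED };

/* Illustrative rule: compare runnable tasks with the CPUs of the group. */
static enum group_state classify(unsigned int nr_running, unsigned int nr_cpus)
{
        if (nr_running < nr_cpus)
                return HAS_SPARE;
        if (nr_running == nr_cpus)
                return FULLY_BUSY;
        return OVERLOADED;
}

int main(void)
{
        static const char *name[] = { "has_spare", "fully_busy", "overloaded" };

        /* A 4-CPU group with 3, 4 and 7 runnable tasks respectively. */
        printf("%s %s %s\n", name[classify(3, 4)], name[classify(4, 4)],
               name[classify(7, 4)]);
        return 0;
}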

Patch 5 adds the sum of nr_running to monitor non-CFS tasks and takes that
into account when pulling tasks.
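
For context, h_nr_running only counts CFS tasks while rq->nr_running also
includes RT and DL tasks, so the difference between the two tells the balancer
that a CPU is busy with non-CFS work. The stand-alone snippet below only
illustrates that distinction; the struct and the numbers are made up, even if
the field names mirror the kernel ones.

#include <stdio.h>

/* Illustrative stand-in for the per-CPU runqueue counters. */
struct rq_sample {
        unsigned int nr_running;        /* all runnable tasks: CFS + RT + DL */
        unsigned int h_nr_running;      /* CFS tasks only */
};

int main(void)
{
        /* A CPU running one CFS task plus one RT task. */
        struct rq_sample rq = { .nr_running = 2, .h_nr_running = 1 };

        /*
         * Summing h_nr_running (the renamed sum_h_nr_running) says this CPU
         * holds a single CFS task, but nr_running reveals that a non-CFS
         * task is running too, so the CPU is less attractive as a pull
         * target than the CFS count alone suggests.
         */
        unsigned int non_cfs = rq.nr_running - rq.h_nr_running;

        printf("cfs tasks: %u, non-cfs tasks: %u\n", rq.h_nr_running, non_cfs);
        return 0;
}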

Patch 6 replaces runnable_load with load, now that the signal is only used
when a group is overloaded.
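
As a reminder of the difference between the two signals: load_avg also contains
the contribution of blocked (sleeping) tasks, whereas runnable_load_avg only
counts tasks currently on the runqueue. The toy computation below, with
invented numbers, just shows how the two diverge once tasks block.

#include <stdio.h>

int main(void)
{
        /* Invented load contributions, in load_avg units. */
        unsigned long runnable_contrib = 1500;  /* tasks on the runqueue */
        unsigned long blocked_contrib = 700;    /* tasks currently sleeping */

        /* runnable_load ignores blocked tasks, load does not. */
        unsigned long runnable_load = runnable_contrib;
        unsigned long load = runnable_contrib + blocked_contrib;

        printf("runnable_load=%lu load=%lu\n", runnable_load, load);
        return 0;
}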

Patch 7 improves the spread of tasks at the 1st scheduling level.

Patch 8 uses utilization instead of load in all steps of the misfit task
path.
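
A misfit task is one whose utilization no longer fits the capacity of the CPU
it runs on. The check below mimics a fits_capacity()-style test; the ~80%
margin matches the usual upstream value but should be read as an assumption of
this sketch, not as the exact code of the patch.

#include <stdbool.h>
#include <stdio.h>

/* A task "fits" a CPU if its utilization stays under ~80% of the CPU's
 * capacity (1024 == full capacity of the biggest CPU). */
static bool fits_capacity(unsigned long util, unsigned long capacity)
{
        return util * 1280 < capacity * 1024;
}

int main(void)
{
        unsigned long little_cap = 446;         /* e.g. a LITTLE CPU */
        unsigned long task_util = 400;          /* task utilization */

        /* Utilization, unlike load, does not depend on nice level or on
         * time spent waiting, which makes it a better misfit criterion. */
        printf("task %s on the little CPU\n",
               fits_capacity(task_util, little_cap) ? "fits" : "is misfit");
        return 0;
}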

Patch 9 replaces runnable_load_avg with load_avg in the wakeup path.

Patch 10 optimizes find_idlest_group(), which was using both runnable_load
and load. It has not been squashed with the previous patch to ease the
review.

Some benchmark results, based on 8 iterations of each test:
- small arm64 dual quad-core system

             tip/sched/core      w/ this patchset    improvement
schedpipe    54981 +/-0.36%      55459 +/-0.31%      (+0.97%)

hackbench
 1 groups    0.906 +/-2.34%      0.906 +/-2.88%      (+0.06%)

- large arm64 2-node / 224-core system

              tip/sched/core       w/ this patchset     improvement
schedpipe     125323 +/-0.98%      125624 +/-0.71%      (+0.24%)

hackbench -l (256000/#grp) -g #grp
  1 groups      15.360 +/-1.76%      14.206 +/-1.40%    (+8.69%)
  4 groups       5.822 +/-1.02%       5.508 +/-6.45%    (+5.38%)
 16 groups       3.103 +/-0.80%       3.244 +/-0.77%    (-4.52%)
 32 groups       2.892 +/-1.23%       2.850 +/-1.81%    (+1.47%)
 64 groups       2.825 +/-1.51%       2.725 +/-1.51%    (+3.54%)
128 groups       3.149 +/-8.46%       3.053 +/-13.15%   (+3.06%)
256 groups       3.511 +/-8.49%       3.019 +/-1.71%    (+14.03%)

dbench
  1 groups     329.677 +/-0.46%     329.771 +/-0.11%    (+0.03%)
  4 groups     931.499 +/-0.79%     947.118 +/-0.94%    (+1.68%)
 16 groups    1924.210 +/-0.89%    1947.849 +/-0.76%    (+1.23%)
 32 groups    2350.646 +/-5.75%    2351.549 +/-6.33%    (+0.04%)
 64 groups    2201.524 +/-3.35%    2192.749 +/-5.84%    (-0.40%)
128 groups    2206.858 +/-2.50%    2376.265 +/-7.44%    (+7.68%)
256 groups    1263.520 +/-3.34%    1633.143 +/-13.02%   (+29.25%)

tip/sched/core sha1:
0413d7f33e60 ('sched/uclamp: Always use 'enum uclamp_id' for clamp_id values')

Changes since v2:
- fix typos and reorder code
- some minor code fixes
- optimize find_idlest_group()

Not covered in this patchset:
- update find_idlest_group() to be more aligned with load_balance(). I didn't
want to delay this version because of this update, which is not ready yet.
- better detection of overloaded and fully busy states, especially for cases
where nr_running > number of CPUs.

Vincent Guittot (10):
sched/fair: clean up asym packing
sched/fair: rename sum_nr_running to sum_h_nr_running
sched/fair: remove meaningless imbalance calculation
sched/fair: rework load_balance
sched/fair: use rq->nr_running when balancing load
sched/fair: use load instead of runnable load in load_balance
sched/fair: evenly spread tasks when not overloaded
sched/fair: use utilization to select misfit task
sched/fair: use load instead of runnable load in wakeup path
sched/fair: optimize find_idlest_group

kernel/sched/fair.c | 805 +++++++++++++++++++++++++++-------------------------
1 file changed, 417 insertions(+), 388 deletions(-)

--
2.7.4