[PATCH V6 RESEND] sched/fair: Remove group imbalance from calculate_imbalance()

From: Jeffrey Hugo
Date: Thu Oct 05 2017 - 17:08:49 EST


The group_imbalance path in calculate_imbalance() made sense when it was
added back in 2007 with commit 908a7c1b9b80 ("sched: fix improper load
balance across sched domain") because busiest->load_per_task factored into
the amount of imbalance that was calculated. Beginning with commit
dd5feea14a7d ("sched: Fix SCHED_MC regression caused by change in sched
cpu_power"), busiest->load_per_task is not a factor in the imbalance
calculation, thus the group_imbalance path no longer makes sense.

The group_imbalance path can only affect the outcome of
calculate_imbalance() when the average load of the domain is less than the
original busiest->load_per_task. In this case, busiest->load_per_task is
overwritten with the scheduling domain load average. Thus
busiest->load_per_task no longer represents actual load that can be moved.
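
For a concrete, purely hypothetical illustration: if the original
busiest->load_per_task is 1024 and sds->avg_load is 700, this path
overwrites load_per_task with min(1024, 700) = 700, a value that no
runnable task on the busiest group necessarily has.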

At the final comparison between env->imbalance and busiest->load_per_task,
the imbalance may be larger than the new busiest->load_per_task, causing
the check to fail under the assumption that there is a task that could be
migrated to satisfy the imbalance. However, env->imbalance may still be
smaller than the original busiest->load_per_task, so it is unlikely that
there is a task that can be migrated to satisfy the imbalance. In that
case, calculate_imbalance() does not choose to run fix_small_imbalance()
when we expect it should. In the worst case, this can result in idle CPUs.
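
Continuing the hypothetical numbers above, suppose env->imbalance is 800.
The final check at the end of calculate_imbalance(), which in this version
of fair.c is roughly:

	if (env->imbalance < busiest->load_per_task)
		return fix_small_imbalance(env, sds);

evaluates 800 < 700, which is false, so fix_small_imbalance() is skipped,
even though 800 is still below the original load_per_task of 1024, i.e.
no single task is likely to be movable to satisfy the imbalance.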

Since the group imbalance path in calculate_imbalance() is at best a NOP
but otherwise harmful, remove it.

Co-authored-by: Austin Christ <austinwc@xxxxxxxxxxxxxx>
Signed-off-by: Jeffrey Hugo <jhugo@xxxxxxxxxxxxxx>
Tested-by: Tyler Baicar <tbaicar@xxxxxxxxxxxxxx>
Reviewed-by: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
---

Peter, we were hoping you'd take this fix. The discussion last time around
didn't seem to have a specific conclusion. Please lay out how we can move
forward on this. Thanks.

[v6]
- Added additional history clarification to commit text

kernel/sched/fair.c | 9 ---------
1 file changed, 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0107280..e92a0bf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8067,15 +8067,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
local = &sds->local_stat;
busiest = &sds->busiest_stat;

- if (busiest->group_type == group_imbalanced) {
- /*
- * In the group_imb case we cannot rely on group-wide averages
- * to ensure cpu-load equilibrium, look at wider averages. XXX
- */
- busiest->load_per_task =
- min(busiest->load_per_task, sds->avg_load);
- }
-
/*
* Avg load of busiest sg can be less and avg load of local sg can
* be greater than avg load across all sgs of sd because avg load
--
Qualcomm Datacenter Technologies as an affiliate of Qualcomm Technologies, Inc.
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.