[PATCH 2/4] sched/fair: Do not disqualify either runqueue of SMT sched groups

From: Ricardo Neri
Date: Thu Aug 25 2022 - 18:49:55 EST


find_busiest_queue() may be inspecting a busiest scheduling group that is
composed of SMT siblings, more than one of which is busy.

An idle CPU with lower priority can help the higher-priority busiest
scheduling group by pulling tasks from it. The tasks that remain in the
busiest group will run with higher performance.

This scenario is observed, for instance, on Intel hybrid processors. PCores
have two SMT siblings and have higher priority than the ECores, which do
not have SMT siblings.
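
For illustration, here is a small user-space sketch of the relaxed check
(not kernel code; the CPU priorities, the SMT flag and the helper names are
made-up values for this example, and the SD_ASYM_PACKING test is assumed to
have already passed): a candidate runqueue of higher priority with a single
running task is skipped only when its scheduling group is not made of SMT
siblings.

  #include <stdbool.h>
  #include <stdio.h>

  /* Stand-in for sched_asym_prefer(): true if prio_a outranks prio_b. */
  static bool asym_prefer(int prio_a, int prio_b)
  {
          return prio_a > prio_b;
  }

  /*
   * Mirror of the new condition in find_busiest_queue(): skip a candidate
   * runqueue only if it has higher priority than the destination CPU, runs
   * a single task, and does NOT belong to an SMT scheduling group
   * (i.e. SD_SHARE_CPUCAPACITY is not set in the group flags).
   */
  static bool skip_candidate(int src_prio, int dst_prio,
                             unsigned int nr_running, bool group_is_smt)
  {
          return asym_prefer(src_prio, dst_prio) &&
                 nr_running == 1 &&
                 !group_is_smt;
  }

  int main(void)
  {
          /* Busy PCore SMT sibling vs. an idle ECore: not skipped, can be helped. */
          printf("SMT sibling w/ 1 task: skip=%d\n",
                 skip_candidate(2, 1, 1, true));
          /* Higher-priority non-SMT CPU with one task: still left alone. */
          printf("non-SMT CPU w/ 1 task: skip=%d\n",
                 skip_candidate(2, 1, 1, false));
          return 0;
  }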

Cc: Ben Segall <bsegall@xxxxxxxxxx>
Cc: Daniel Bristot de Oliveira <bristot@xxxxxxxxxx>
Cc: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
Cc: Len Brown <len.brown@xxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>
Cc: Srinivas Pandruvada <srinivas.pandruvada@xxxxxxxxxxxxxxx>
Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
Cc: Tim C. Chen <tim.c.chen@xxxxxxxxx>
Cc: Valentin Schneider <vschneid@xxxxxxxxxx>
Cc: x86@xxxxxxxxxx
Cc: linux-kernel@xxxxxxxxxxxxxxx
Reviewed-by: Len Brown <len.brown@xxxxxxxxx>
Signed-off-by: Ricardo Neri <ricardo.neri-calderon@xxxxxxxxxxxxxxx>
---
kernel/sched/fair.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 91f271ea02d2..810645eb58ed 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9662,10 +9662,14 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		    nr_running == 1)
 			continue;
 
-		/* Make sure we only pull tasks from a CPU of lower priority */
+		/*
+		 * Make sure we only pull tasks from a CPU of lower priority.
+		 * Except for scheduling groups composed of SMT siblings.
+		 */
 		if ((env->sd->flags & SD_ASYM_PACKING) &&
 		    sched_asym_prefer(i, env->dst_cpu) &&
-		    nr_running == 1)
+		    nr_running == 1 &&
+		    !(group->flags & SD_SHARE_CPUCAPACITY))
 			continue;
 
 		switch (env->migration_type) {
--
2.25.1