[PATCH] sched/fair: Fix SMT4 group_smt_balance handling

From: Tim Chen
Date: Thu Sep 07 2023 - 14:20:02 EST


For SMT4, any group with more than 2 tasks will be marked as
group_smt_balance. Retain the behaviour of group_has_spare by picking
as busiest the group with the least number of idle_cpus.
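
As an illustration (not part of the patch, and the values below are
made up), the intended tie-break can be modelled in plain userspace C:
when two SMT4 groups are both group_smt_balance and at least one of
them still has an idle CPU, the one with fewer idle_cpus is preferred
as busiest, just as group_has_spare would do; only when both are fully
packed does the decision fall through to the fully-busy avg_load
comparison.

/* Userspace sketch of the tie-break; not scheduler code. */
#include <stdio.h>
#include <stdbool.h>

struct grp {
	const char *name;
	unsigned int idle_cpus;		/* idle CPUs in the SMT4 core */
	unsigned int nr_running;	/* > 2 tasks => group_smt_balance */
};

/* Mirror the group_has_spare rule: fewer idle CPUs => busier. */
static bool pick_as_busiest(const struct grp *cand, const struct grp *cur)
{
	if (cand->idle_cpus == 0 && cur->idle_cpus == 0)
		return false;	/* both packed: the patch falls through
				 * to the fully-busy avg_load check */
	return cand->idle_cpus < cur->idle_cpus;
}

int main(void)
{
	struct grp a = { "coreA", 1, 3 };	/* 3 tasks, 1 idle CPU */
	struct grp b = { "coreB", 0, 4 };	/* 4 tasks, fully packed */

	printf("busiest: %s\n", pick_as_busiest(&b, &a) ? b.name : a.name);
	return 0;
}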

Also, handle the rounding effect of adding (ncores_local +
ncores_busiest) when the local group is fully idle and the busiest
group's imbalance is less than 2 tasks. The local group should try to
pull at least 1 task in this case, so imbalance should be set to 2
instead.
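
For reference, the rounded-to-1 case can be reproduced with a small
standalone program.  It assumes the normalization upstream of the hunk
below is roughly (2 * imbalance + ncores_local + ncores_busiest) /
(ncores_local + ncores_busiest); the core and task counts used are
purely illustrative.

/* Standalone sketch of the rounding case; assumptions as noted above. */
#include <stdio.h>

int main(void)
{
	long ncores_local = 1, ncores_busiest = 4;
	long local_nr = 0;	/* local group fully idle */
	long busiest_nr = 2;	/* busiest group runs 2 tasks */

	/* difference of nr_running, each scaled by the peer's core count */
	long imbalance = ncores_local * busiest_nr -
			 ncores_busiest * local_nr;			/* 2 */

	/* assumed normalization with rounding (see lead-in above) */
	imbalance = 2 * imbalance + ncores_local + ncores_busiest;	/* 9 */
	imbalance /= ncores_local + ncores_busiest;			/* 1 */

	/* the old "== 0" test misses this rounded-to-1 case */
	if (imbalance <= 1 && local_nr == 0 && busiest_nr > 1)
		imbalance = 2;	/* idle local group should pull 1 task */

	printf("imbalance = %ld\n", imbalance);	/* prints 2 */
	return 0;
}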

Fixes: fee1759e4f04 ("sched/fair: Determine active load balance for SMT sched groups")
Acked-by: Shrikanth Hegde <sshegde@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
---
kernel/sched/fair.c | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0b7445cd5af9..fd9e594b5623 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9575,7 +9575,7 @@ static inline long sibling_imbalance(struct lb_env *env,
 	imbalance /= ncores_local + ncores_busiest;
 
 	/* Take advantage of resource in an empty sched group */
-	if (imbalance == 0 && local->sum_nr_running == 0 &&
+	if (imbalance <= 1 && local->sum_nr_running == 0 &&
 	    busiest->sum_nr_running > 1)
 		imbalance = 2;
 
@@ -9763,6 +9763,15 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 		break;
 
 	case group_smt_balance:
+		/*
+		 * Check if we have spare CPUs on either SMT group to
+		 * choose has spare or fully busy handling.
+		 */
+		if (sgs->idle_cpus != 0 || busiest->idle_cpus != 0)
+			goto has_spare;
+
+		fallthrough;
+
 	case group_fully_busy:
 		/*
 		 * Select the fully busy group with highest avg_load. In
@@ -9802,6 +9811,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 		else
 			return true;
 	}
+has_spare:
 
 	/*
 	 * Select not overloaded group with lowest number of idle cpus
-- 
2.32.0