[PATCH 09/19] sched/fair: Count tasks preferring each LLC in a sched group

From: Tim Chen

Date: Sat Oct 11 2025 - 14:20:09 EST


During LLC load balancing, tabulate the number of tasks on each runqueue
that prefer a given destination LLC in a sched group.

For example, consider a system with 4 LLC sched groups (LLC0 to LLC3)
balancing towards LLC3. LLC0 has 3 tasks preferring LLC3, LLC1 has
2, and LLC2 has 1. LLC0, having the most tasks preferring LLC3, is
selected as the busiest source to pick tasks from.

Within a source LLC, the total number of tasks preferring a destination
LLC is computed by summing the per-runqueue counts across all CPUs in
that LLC. For instance, if LLC0 has CPU0 with 2 tasks and CPU1 with
1 task preferring LLC3, the total for LLC0 is 3.

These statistics allow the load balancer to choose tasks from source
sched groups that best match their preferred LLCs.

Signed-off-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
---
kernel/sched/fair.c | 11 +++++++++++
1 file changed, 11 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b7a68fe7601b..cbd1e97bca4b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -10399,6 +10399,9 @@ struct sg_lb_stats {
 	unsigned int nr_numa_running;
 	unsigned int nr_preferred_running;
 #endif
+#ifdef CONFIG_SCHED_CACHE
+	unsigned int nr_pref_llc[NR_LLCS];
+#endif
 };
 
 /*
@@ -10891,6 +10894,14 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		if (cpu_overutilized(i))
 			*sg_overutilized = 1;
 
+#ifdef CONFIG_SCHED_CACHE
+		if (sched_cache_enabled()) {
+			int j;
+
+			for (j = 0; j < max_llcs; ++j)
+				sgs->nr_pref_llc[j] += rq->nr_pref_llc[j];
+		}
+#endif
 		/*
 		 * No need to call idle_cpu() if nr_running is not 0
 		 */
--
2.32.0