[patch] sched: fix newly idle load balance in case of SMT

From: Siddha, Suresh B
Date: Mon Jul 16 2007 - 19:57:46 EST


In the presence of SMT, newly idle load balancing was never happening for the
multi-core and SMP domains (even when both logical siblings are idle).

If thread 0 is already idle and thread 1 is about to go idle, the newly idle
load balance always thinks that one of the threads is not idle and skips the
newly idle load balancing for the multi-core and SMP domains.

This is because of the idle_cpu() check, which tests whether the current
process on a cpu is the idle process. That is not yet true for the thread
doing load_balance_newidle(): it is still running the task that is about to
leave the cpu.
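
As an illustration (a minimal userspace sketch, not kernel code; struct
toy_rq, toy_idle_cpu() and the pid values are hypothetical stand-ins for the
scheduler's per-cpu runqueue, idle_cpu() and nr_running), a current-task
style check misreads the cpu that is in the middle of going idle:

#include <stdio.h>

/* Toy model of a per-cpu runqueue -- made-up fields, not the kernel's struct rq. */
struct toy_rq {
	int nr_running;		/* runnable tasks queued on this cpu */
	int curr_pid;		/* pid of the task currently on the cpu */
	int idle_pid;		/* pid of this cpu's idle task */
};

/* Mimics the old check: "is the idle task the one currently running here?" */
static int toy_idle_cpu(const struct toy_rq *rq)
{
	return rq->curr_pid == rq->idle_pid;
}

int main(void)
{
	/* Thread 0: fully idle, its idle task (pid 0) is running. */
	struct toy_rq cpu0 = { .nr_running = 0, .curr_pid = 0, .idle_pid = 0 };

	/*
	 * Thread 1: its last task (pid 42) is blocking and is the one doing
	 * the newly idle balance from inside schedule(); the idle task has
	 * not been switched in yet, so the current-task check says "busy".
	 */
	struct toy_rq cpu1 = { .nr_running = 0, .curr_pid = 42, .idle_pid = 0 };

	printf("cpu0: idle_cpu-style check=%d nr_running=%d\n",
	       toy_idle_cpu(&cpu0), cpu0.nr_running);
	printf("cpu1: idle_cpu-style check=%d nr_running=%d\n",
	       toy_idle_cpu(&cpu1), cpu1.nr_running);

	/*
	 * cpu1 looks non-idle to the current-task check even though nothing
	 * is runnable on it, which is why the patch switches to nr_running.
	 */
	return 0;
}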

Fix this by using the runqueue's nr_running field instead of idle_cpu(). Also
skip the 'only one idle cpu in the group does the load balancing' logic in
the newly idle case.
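
As a rough model of the new gating (plain C; toy_skip_balance() and the TOY_*
idle states are made-up stand-ins for the condition added in the second hunk
below), only the periodic balancing paths may bail out on a non-designated
cpu of the local group, while the newly idle pass always proceeds:

#include <stdio.h>

/* Simplified stand-ins for the scheduler's idle types; values are arbitrary. */
enum toy_idle { TOY_NOT_IDLE, TOY_NEWLY_IDLE, TOY_IDLE };

/*
 * Models the patched check in find_busiest_group(): only outside the newly
 * idle case may a non-designated cpu of the local group skip the balancing.
 */
static int toy_skip_balance(enum toy_idle idle, int local_group,
			    int balance_cpu, int this_cpu, int have_balance)
{
	return idle != TOY_NEWLY_IDLE && local_group &&
	       balance_cpu != this_cpu && have_balance;
}

int main(void)
{
	/* this_cpu = 1 is not the group's designated balance_cpu = 0 ... */
	printf("periodic idle balance skipped: %d\n",
	       toy_skip_balance(TOY_IDLE, 1, 0, 1, 1));	/* 1: skip */
	/* ... but in the newly idle case every cpu may do the balance. */
	printf("newly idle balance skipped:   %d\n",
	       toy_skip_balance(TOY_NEWLY_IDLE, 1, 0, 1, 1));	/* 0: proceed */
	return 0;
}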

Signed-off-by: Suresh Siddha <suresh.b.siddha@xxxxxxxxx>
---

diff --git a/kernel/sched.c b/kernel/sched.c
index 3332bbb..623cee9 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2226,7 +2226,7 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,

 			rq = cpu_rq(i);

-			if (*sd_idle && !idle_cpu(i))
+			if (*sd_idle && rq->nr_running)
 				*sd_idle = 0;

 			/* Bias balancing toward cpus of our domain */
@@ -2248,9 +2248,11 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
 		/*
 		 * First idle cpu or the first cpu(busiest) in this sched group
 		 * is eligible for doing load balancing at this and above
-		 * domains.
+		 * domains. In the newly idle case, we will allow all the cpu's
+		 * to do the newly idle load balance.
 		 */
-		if (local_group && balance_cpu != this_cpu && balance) {
+		if (idle != CPU_NEWLY_IDLE && local_group &&
+		    balance_cpu != this_cpu && balance) {
 			*balance = 0;
 			goto ret;
 		}