[PATCH] sched/fair: fix idle balance when remaining tasks are all non-CFS tasks

From: Wanpeng Li
Date: Thu Nov 20 2014 - 21:37:33 EST


The overload indicator is used to know when we can completely avoid load
balancing to a cpu that is about to go idle. Load balancing can be avoided
when no cpu has more than one CFS task, since both the rt and deadline
classes have their own push/pull mechanisms to do their balancing.

However, rq->nr_running counts the tasks of all scheduling classes on the
cpu, so doing idle balance when the remaining tasks are all non-CFS tasks
does not make any sense.

This patch fixes it by setting the root domain's overload indicator only
when the overloaded rq still has CFS tasks, so idle balance is triggered
only when there are CFS tasks left to pull.

Signed-off-by: Wanpeng Li <wanpeng.li@xxxxxxxxxxxxxxx>
---
kernel/sched/fair.c | 2 +-
kernel/sched/sched.h | 3 ++-
2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index df2cdf7..90a74e7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6197,7 +6197,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
sgs->group_load += load;
sgs->sum_nr_running += rq->cfs.h_nr_running;

- if (rq->nr_running > 1)
+ if (rq->nr_running > 1 && rq->cfs.h_nr_running > 0)
*overload = true;

#ifdef CONFIG_NUMA_BALANCING
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 9a2a45c..98f2d8f 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1271,7 +1271,8 @@ static inline void add_nr_running(struct rq *rq, unsigned count)

rq->nr_running = prev_nr + count;

- if (prev_nr < 2 && rq->nr_running >= 2) {
+ if (prev_nr < 2 && rq->nr_running >= 2 &&
+ rq->cfs.h_nr_running > 0) {
#ifdef CONFIG_SMP
if (!rq->rd->overload)
rq->rd->overload = true;
--
1.9.1
