[PATCH] sched: make sds.avg_load update in time

From: Alex Shi
Date: Thu Apr 14 2011 - 22:55:35 EST


Commit 866ab43efd325fae causes an approximately 15% performance drop in
hackbench's process mode on our x86_64 machines. The patch works as
originally intended, but it nearly doubles the number of context switches
during a hackbench run. The root cause is that sds.avg_load was not
updated in time. Moving the sds.avg_load update before the group_imb
check recovers the performance completely.

Signed-off-by: Alex Shi <alex.shi@xxxxxxxxx>
---
kernel/sched_fair.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 7f00772..036b660 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -3127,6 +3127,8 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
if (!sds.busiest || sds.busiest_nr_running == 0)
goto out_balanced;

+ sds.avg_load = (SCHED_LOAD_SCALE * sds.total_load) / sds.total_pwr;
+
/*
* If the busiest group is imbalanced the below checks don't
* work because they assumes all things are equal, which typically
@@ -3151,7 +3153,6 @@ find_busiest_group(struct sched_domain *sd, int this_cpu,
* Don't pull any tasks if this group is already above the domain
* average load.
*/
- sds.avg_load = (SCHED_LOAD_SCALE * sds.total_load) / sds.total_pwr;
if (sds.this_load >= sds.avg_load)
goto out_balanced;

--
1.7.0
