[PATCH 05/12] sched: load tracking bug fix

From: Alex Shi
Date: Mon Dec 03 2012 - 04:30:39 EST


We need to initialize se.avg.{decay_count, load_avg_contrib} to zero
after a new task is forked.
Otherwise, the random values left in those fields produce incorrect
statistics when the new task is enqueued:
    enqueue_task_fair
        enqueue_entity
            enqueue_entity_load_avg

Signed-off-by: Alex Shi <alex.shi@xxxxxxxxx>
---
kernel/sched/core.c | 2 ++
1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5dae0d2..e6533e1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1534,6 +1534,8 @@ static void __sched_fork(struct task_struct *p)
 #if defined(CONFIG_SMP) && defined(CONFIG_FAIR_GROUP_SCHED)
 	p->se.avg.runnable_avg_period = 0;
 	p->se.avg.runnable_avg_sum = 0;
+	p->se.avg.decay_count = 0;
+	p->se.avg.load_avg_contrib = 0;
 #endif
 #ifdef CONFIG_SCHEDSTATS
 	memset(&p->se.statistics, 0, sizeof(p->se.statistics));
--
1.7.5.4
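
To see why the stale values matter: on enqueue, the per-entity load
tracking folds se->avg.load_avg_contrib into the runqueue's
runnable_load_avg (and decay_count similarly steers how much of that
contribution gets decayed), so whatever leftover memory a fresh fork
carries lands directly in the runqueue statistics. Below is a rough
userspace sketch of the effect; fake_sched_avg, fake_enqueue() and the
0xdeadbeef poison value are simplified stand-ins for illustration, not
the kernel's actual structures or code:

#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for the fields of struct sched_avg at issue. */
struct fake_sched_avg {
	long decay_count;
	unsigned long load_avg_contrib;
};

/* Stand-in for cfs_rq->runnable_load_avg. */
static unsigned long runnable_load_avg;

/* Stand-in for enqueue_entity_load_avg(): it trusts whatever is in
 * se->avg.load_avg_contrib, as the real path does for a fresh fork. */
static void fake_enqueue(struct fake_sched_avg *avg)
{
	runnable_load_avg += avg->load_avg_contrib;
}

int main(void)
{
	struct fake_sched_avg *avg = malloc(sizeof(*avg));

	if (!avg)
		return 1;

	/* Simulate the stale memory a new task may inherit. */
	avg->load_avg_contrib = 0xdeadbeef;
	fake_enqueue(avg);
	printf("without the fix: runnable_load_avg = %lu\n",
	       runnable_load_avg);

	/* What the patch does in __sched_fork(): zero the fields. */
	runnable_load_avg = 0;
	avg->decay_count = 0;
	avg->load_avg_contrib = 0;
	fake_enqueue(avg);
	printf("with the fix:    runnable_load_avg = %lu\n",
	       runnable_load_avg);

	free(avg);
	return 0;
}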


Yes, the runnable load will quickly become 100% once se->on_rq is set and
one tick has passed; the sketch below shows why.
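
A rough userspace sketch of that claim, assuming the geometric-series
weighting used by the per-entity load tracking (each ~1ms period weighted
by y^n with y^32 = 0.5), with floating point standing in for the kernel's
fixed-point arithmetic: a task that has been runnable for its whole
(short) life has runnable_avg_sum equal to runnable_avg_period, so its
runnable fraction is already 100% after the first period:

#include <stdio.h>

int main(void)
{
	/* y^32 = 0.5, so y = 0.5^(1/32) ~= 0.978572. */
	const double y = 0.97857206;
	double sum = 0.0, period = 0.0;
	int ms;

	/* For an always-runnable task, the runnable sum and the period
	 * accumulate (and decay) in lockstep, so the ratio is 100% from
	 * the very first 1ms period onward. */
	for (ms = 1; ms <= 4; ms++) {
		sum = sum * y + 1024;
		period = period * y + 1024;
		printf("after %d ms: runnable fraction = %.0f%%\n",
		       ms, 100.0 * sum / period);
	}
	return 0;
}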
If we fork many tasks within a single tick, I didn't find a useful
opportunity to update the load average for the new tasks.
So I guess we need the following patch:

==========