[PATCH] sched: Fix calc_cfs_shares() to also consider blocked_load_avg

From: Namhyung Kim
Date: Thu Feb 28 2013 - 01:26:52 EST


From: Namhyung Kim <namhyung.kim@xxxxxxx>

calc_tg_weight() and calc_cfs_shares() use cfs_rq->load.weight, but
this is no longer valid under per-entity load tracking since
cfs_rq->tg_load_contrib consists of runnable_load_avg and
blocked_load_avg. Simply using load.weight here drops the
blocked_load_avg part and so results in an inaccurate share.
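
Not part of the patch, but for illustration: a minimal user-space
sketch of the shares arithmetic (shares = tg->shares * load /
tg_weight, with this CPU's last contribution replaced by 'load').
The struct, helper name and all load numbers below are made up for
the example; only the formula mirrors the kernel code.

#include <stdio.h>

/* Made-up stand-ins for the kernel's cfs_rq fields (example only). */
struct cfs_rq_sample {
	long runnable_load_avg;
	long blocked_load_avg;
	long load_weight;		/* cfs_rq->load.weight */
};

/*
 * Same arithmetic as calc_tg_weight() + calc_cfs_shares():
 * swap this CPU's last tg_load_contrib for 'load', then
 * shares = tg_shares * load / tg_weight.
 */
static long calc_shares(long tg_shares, long tg_load_avg,
			long tg_load_contrib, long load)
{
	long tg_weight = tg_load_avg - tg_load_contrib + load;
	long shares = tg_shares * load;

	if (tg_weight)
		shares /= tg_weight;
	return shares;
}

int main(void)
{
	/*
	 * Hypothetical numbers: cpu0 contributes half the group load,
	 * but half of that contribution is currently blocked.
	 */
	struct cfs_rq_sample cpu0 = {
		.runnable_load_avg = 512,
		.blocked_load_avg  = 512,
		.load_weight       = 512,	/* blocked part invisible here */
	};
	long tg_shares = 1024;
	long tg_load_avg = 2048;	/* group total across CPUs */
	long tg_load_contrib = 1024;	/* cpu0's tg_load_contrib */

	printf("load.weight only:   %ld\n",
	       calc_shares(tg_shares, tg_load_avg, tg_load_contrib,
			   cpu0.load_weight));
	printf("runnable + blocked: %ld\n",
	       calc_shares(tg_shares, tg_load_avg, tg_load_contrib,
			   cpu0.runnable_load_avg + cpu0.blocked_load_avg));
	return 0;
}

With blocked load included, cpu0 is weighted at half the group (512
of 1024 shares), matching its tg_load_contrib; the load.weight-only
calculation undercounts it (341).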

Cc: Paul Turner <pjt@xxxxxxxxxx>
Signed-off-by: Namhyung Kim <namhyung@xxxxxxxxxx>
---
kernel/sched/fair.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7a33e5986fc5..add7440bd02f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1032,13 +1032,13 @@ static inline long calc_tg_weight(struct task_group *tg, struct cfs_rq *cfs_rq)
long tg_weight;

/*
- * Use this CPU's actual weight instead of the last load_contribution
- * to gain a more accurate current total weight. See
- * update_cfs_rq_load_contribution().
+ * Use this CPU's actual load instead of the last load_contribution
+ * to gain a more accurate current total load. See
+ * __update_cfs_rq_tg_load_contrib().
*/
tg_weight = atomic64_read(&tg->load_avg);
tg_weight -= cfs_rq->tg_load_contrib;
- tg_weight += cfs_rq->load.weight;
+ tg_weight += cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg;

return tg_weight;
}
@@ -1048,7 +1048,7 @@ static long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg)
long tg_weight, load, shares;

tg_weight = calc_tg_weight(tg, cfs_rq);
- load = cfs_rq->load.weight;
+ load = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg;

shares = (tg->shares * load);
if (tg_weight)
--
1.7.11.7
