[PATCH 1/2] sched/fair: Fix overly small weight for interactive group entity

From: Yuyang Du
Date: Tue Oct 13 2015 - 05:10:04 EST


Commit 9d89c257dfb9 ("sched/fair: Rewrite runnable load and
utilization average tracking") led to an overly small weight for
interactive group entities. The case is easily reproduced when a
number of CPU hogs compete for the CPUs at the same time (thanks
to Mike). This is largely because the task group's load average
tracking across CPUs lags behind the real changes.
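
To put rough numbers on the lag (a standalone sketch with made-up
values, not kernel code; the variable names merely mirror the
scheduler fields): a just-woken interactive task carries its full
load.weight immediately, while its load_avg is still ramping up, so
feeding the average into the share calculation yields a much smaller
group entity weight.

#include <stdio.h>

/* shares = tg->shares * load / tg_weight, with hypothetical inputs */
int main(void)
{
	long tg_shares = 1024;	/* group's total shares */
	long remote    = 4096;	/* hogs' load contribution on other CPUs */
	long load_avg  = 128;	/* just-woken task: average still ramping up */
	long weight    = 1024;	/* its instantaneous load.weight */

	/* averaged input (before this patch): prints 31 */
	printf("load_avg based:    %ld\n",
	       tg_shares * load_avg / (remote + load_avg));
	/* instantaneous input (after this patch): prints 204 */
	printf("load.weight based: %ld\n",
	       tg_shares * weight / (remote + weight));
	return 0;
}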

We accelerate the group share distribution by feeding the cfs_rq's
instantaneous load.weight into the calculation instead of its lagging
load average. This may increase the entire group's share, but we have
to do so to protect the (fragile) interactive tasks, especially from
CPU hogs.
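
For reference, with the patch applied calc_cfs_shares() reads roughly
as below; the MIN_SHARES/MAX_SHARES clamping is untouched by this
patch and is shown only for context:

static long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg)
{
	long tg_weight, load, shares;

	tg_weight = calc_tg_weight(tg, cfs_rq);
	load = cfs_rq->load.weight;	/* instantaneous, not averaged */

	shares = (tg->shares * load);
	if (tg_weight)
		shares /= tg_weight;

	if (shares < MIN_SHARES)
		shares = MIN_SHARES;
	if (shares > MAX_SHARES)
		shares = MAX_SHARES;

	return shares;
}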

Reported-by: Mike Galbraith <umgwanakikbuti@xxxxxxxxx>
Signed-off-by: Yuyang Du <yuyang.du@xxxxxxxxx>
---
kernel/sched/fair.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 700eb54..601a253 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2370,7 +2370,7 @@ static inline long calc_tg_weight(struct task_group *tg, struct cfs_rq *cfs_rq)
 	 */
 	tg_weight = atomic_long_read(&tg->load_avg);
 	tg_weight -= cfs_rq->tg_load_avg_contrib;
-	tg_weight += cfs_rq_load_avg(cfs_rq);
+	tg_weight += cfs_rq->load.weight;
 
 	return tg_weight;
 }
@@ -2380,7 +2380,7 @@ static long calc_cfs_shares(struct cfs_rq *cfs_rq, struct task_group *tg)
 	long tg_weight, load, shares;
 
 	tg_weight = calc_tg_weight(tg, cfs_rq);
-	load = cfs_rq_load_avg(cfs_rq);
+	load = cfs_rq->load.weight;
 
 	shares = (tg->shares * load);
 	if (tg_weight)
--
2.1.4
