Re: [PATCH 4/7 v3] sched: propagate load during synchronous attach/detach

From: Dietmar Eggemann
Date: Thu Sep 15 2016 - 13:20:37 EST


On 15/09/16 15:31, Vincent Guittot wrote:
> On 15 September 2016 at 15:11, Dietmar Eggemann
> <dietmar.eggemann@xxxxxxx> wrote:

[...]

>> Wasn't 'consuming <1' related to 'NICE_0_LOAD' and not
>> scale_load_down(gcfs_rq->tg->shares) before the rewrite of PELT (v4.2,
>> __update_group_entity_contrib())?
>
> Yes, before the rewrite, the condition (tg->runnable_avg < NICE_0_LOAD) was used.
>
> I have used the following examples to choose the condition:
>
> A task group with only one always-running task TA, whose weight equals
> tg->shares, will have a tg load (cfs_rq->tg->load_avg) equal to TA's
> weight == scale_load_down(tg->shares): the load of the CPU on which
> the task runs will be scale_load_down(task's weight) ==
> scale_load_down(tg->shares) and the load of the other CPUs will be
> zero. In this case, all shares will be given to the cfs_rq CFS1 on
> which TA runs, and the load of the sched_entity SB that represents
> CFS1 at the parent level will be scale_load_down(SB's weight) =
> scale_load_down(tg->shares).
>
> If TA is not an always-running task, its load will be less than its
> weight and less than scale_load_down(tg->shares), and as a result
> tg->load_avg will be less than scale_load_down(tg->shares).
> Nevertheless, the weight of SB is still scale_load_down(tg->shares)
> and its load should be the same as TA's. But the 1st part of the
> calculation gives a load of scale_load_down(gcfs_rq->tg->shares)
> because tg_load == gcfs_rq->tg_load_avg_contrib == load. So if
> tg_load < scale_load_down(gcfs_rq->tg->shares), we have to correct
> the load that we set for SB.

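So, to put numbers on it (assuming scale_load_down(tg->shares) = 1024
and TA being the only task in the group, running on a single CPU):

  TA always running: gcfs_rq load = tg_load = 1024
                     SB's load    = 1024 * 1024 / 1024 = 1024 == shares

  TA running ~50%:   gcfs_rq load = tg_load = 512
                     1st part     = 512 * 1024 / 512   = 1024 (too high)
                     corrected    = 512 == TA's load
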
Makes sense to me now. Thanks. Peter already pointed out that this math
can be made easier, so you will probably 'scale gcfs_rq's load into tg's
shares' only if 'tg_load >= shares'.
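
I guess the update would then look roughly like this (just a sketch to
check my understanding, not the actual patch; gcfs_rq/se as in your
propagation code, with the add of the delta to the parent cfs_rq left
out):

  long load = gcfs_rq->avg.load_avg;
  long tg_load, delta;

  if (load) {
          /* tg's load, updated with gcfs_rq's current contribution */
          tg_load = atomic_long_read(&gcfs_rq->tg->load_avg) + 1;
          tg_load -= gcfs_rq->tg_load_avg_contrib;
          tg_load += load;

          /*
           * Scale gcfs_rq's load into tg's shares only when the group
           * consumes more than an always-running task of weight
           * tg->shares would; otherwise SB's load is simply gcfs_rq's
           * load (the 'TA running ~50%' case above).
           */
          if (tg_load >= scale_load_down(gcfs_rq->tg->shares)) {
                  load *= scale_load_down(gcfs_rq->tg->shares);
                  load /= tg_load;
          }
  }

  delta = load - se->avg.load_avg;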

[...]