Re: [RFC PATCH] sched: fix the nonsense shares when load of cfs_rq is too small

From: Michael Wang
Date: Wed Mar 04 2020 - 20:24:00 EST




On 2020/3/4 5:43, Vincent Guittot wrote:
> On Wed, 4 Mar 2020 at 09:47, Vincent Guittot <vincent.guittot@xxxxxxxxxx> wrote:
>>
>> On Wed, 4 Mar 2020 at 02:19, Michael Wang <yun.wang@xxxxxxxxxxxxxxxxx> wrote:
>>>
>>>
>>>
>>> On 2020/3/4 3:52, Peter Zijlstra wrote:
>>> [snip]
>>>>> The reason is that we have group B with shares as 2, which makes
>>>>> group A's 'cfs_rq->load.weight' very small.
>>>>>
>>>>> And in calc_group_shares() we calculate shares as:
>>>>>
>>>>> load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
>>>>> shares = (tg_shares * load) / tg_weight;
>>>>>
>>>>> Since 'cfs_rq->load.weight' is too small, the load becomes 0
>>>>> here; although 'tg_shares' is 102400, the shares of the se which
>>>>> stands for group A on the root cfs_rq become 2.
>>>>
>>>> Argh, because A->cfs_rq.load.weight is B->se.load.weight which is
>>>> B->shares/nr_cpus.
>>>
>>> Yeah, that's exactly why it happens. Even with the share 2 scaled up
>>> to 2048, on a 96-CPU platform each CPU gets only 21 in the equal-split
>>> case.
>>>
>>>>
>>>>> Meanwhile the se of D on the root cfs_rq is far bigger than 2, so
>>>>> it wins the battle.
>>>>>
>>>>> This patch adds a check for zero load and raises it to MIN_SHARES
>>>>> to fix the nonsense shares; with it applied, group C wins as
>>>>> expected.
>>>>>
>>>>> Signed-off-by: Michael Wang <yun.wang@xxxxxxxxxxxxxxxxx>
>>>>> ---
>>>>> kernel/sched/fair.c | 2 ++
>>>>> 1 file changed, 2 insertions(+)
>>>>>
>>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>>> index 84594f8aeaf8..53d705f75fa4 100644
>>>>> --- a/kernel/sched/fair.c
>>>>> +++ b/kernel/sched/fair.c
>>>>> @@ -3182,6 +3182,8 @@ static long calc_group_shares(struct cfs_rq *cfs_rq)
>>>>> tg_shares = READ_ONCE(tg->shares);
>>>>>
>>>>> load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
>>>>> + if (!load && cfs_rq->load.weight)
>>>>> + load = MIN_SHARES;
>>>>>
>>>>> tg_weight = atomic_long_read(&tg->load_avg);
>>>>
>>>> Yeah, I suppose that'll do. Hurmph, wants a comment though.
>>>>
>>>> But that has me looking at other users of scale_load_down(), and doesn't
>>>> at least update_tg_cfs_load() suffer the same problem?
>>>
>>> Good point :-) I'm not sure, but is scale_load_down() supposed to scale a
>>> small value down to 0? If not, maybe we should fix the helper to make sure
>>> it at least returns some real load, like:
>>>
>>> # define scale_load_down(w) ((w + (1 << SCHED_FIXEDPOINT_SHIFT)) >> SCHED_FIXEDPOINT_SHIFT)
>>
>> you will add +1 of nice prio for each device
>
> Of course, it's not prio but only weight which is different

That's right, we should only handle the issue cases. Something like the
original check plus the comment Peter asked for (an untested sketch):
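
load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
/*
 * A group with a tiny weight (e.g. shares 2 spread across many CPUs)
 * can have a non-zero load.weight truncated to 0 by scale_load_down().
 * Clamp to MIN_SHARES in that case, so the group still gets a minimal
 * share instead of losing everything to rounding.
 */
if (!load && cfs_rq->load.weight)
	load = MIN_SHARES;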

Regards,
Michael Wang

>
>>
>> should we use instead
>> # define scale_load_down(w) \
>>	((w >> SCHED_FIXEDPOINT_SHIFT) ? (w >> SCHED_FIXEDPOINT_SHIFT) : MIN_SHARES)
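
For what it's worth, a quick check of this variant, assuming
SCHED_FIXEDPOINT_SHIFT is 10 and MIN_SHARES is 2:

	scale_load_down(2048) == 2  /* normal weights unchanged */
	scale_load_down(1024) == 1
	scale_load_down(21)   == 2  /* clamped to MIN_SHARES instead of 0 */
	scale_load_down(0)    == 2  /* note: a zero weight is clamped too */

The last case may or may not matter depending on the call sites.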
>>
>> Regards,
>> Vincent
>>
>>>
>>> Regards,
>>> Michael Wang
>>>
>>>>