Re: [bug-report] possible s64 overflow in max_vruntime()

From: Zhang Qiao
Date: Wed Jan 11 2023 - 22:02:39 EST

On 2022/12/23 21:57, Zhang Qiao wrote:
>
>
> On 2022/12/22 20:45, Peter Zijlstra wrote:
>> On Wed, Dec 21, 2022 at 11:19:31PM +0800, Zhang Qiao wrote:
>>> hi folks,
>>>
>>> I found a problem with s64 overflow in max_vruntime().
>>>
>>> I created a task group GROUPA (path: /system.slice/xxx/yyy/CGROUPA) and ran a task from this
>>> group on each cpu; each task is a busy loop eating 100% cpu.
>>>
>>> When net devices are unregistered, flush_all_backlogs() queues work on system_highpri_wq
>>> and wakes up a high-priority kworker thread on each cpu. However, the kworker thread kept
>>> waiting on the runqueue and was never scheduled.
>>>
>>> After parsing the vmcore, I found the kworker's vruntime is 0x918fdb05287da7c3 and
>>> cfs_rq->min_vruntime is 0x124b17fd59db8d02.
>>>
>>> Why is the difference between cfs_rq->min_vruntime and the kworker's vruntime so large?
>>> 1) The system_highpri_wq kworker slept for a very long time (about 300 days).
>>> 2) cfs_rq->curr is an ancestor of GROUPA and cfs_rq->curr->load.weight is 2494, so while
>>> the GROUPA task runs, its vruntime advances about NICE_0_LOAD / 2494 ~= 420 times faster
>>> than wall time, and cfs_rq->min_vruntime grows just as rapidly.
>>> 3) When the kworker thread is woken up, its vruntime is set to the maximum of its old
>>> vruntime and cfs_rq->min_vruntime. But in max_vruntime() there is an s64 overflow issue,
>>> as follows:
>>>
>>> ---------
>>>
>>> static inline u64 max_vruntime(u64 max_vruntime, u64 vruntime)
>>> {
>>> 	/*
>>> 	 * vruntime = 0x124b17fd59db8d02 (cfs_rq->min_vruntime)
>>> 	 * max_vruntime = 0x918fdb05287da7c3 (the kworker's old vruntime)
>>> 	 * vruntime - max_vruntime = 9276074894177461567 > S64_MAX, so the
>>> 	 * s64 cast overflows and delta becomes negative
>>> 	 */
>>> 	s64 delta = (s64)(vruntime - max_vruntime);
>>> 	if (delta > 0)
>>> 		max_vruntime = vruntime;
>>>
>>> 	return max_vruntime;
>>> }
>>>
>>> ----------
>>>
>>> So max_vruntime() returns the kworker's old vruntime, which is incorrect; the correct
>>> result should be cfs_rq->min_vruntime. This incorrect result is far greater than
>>> cfs_rq->min_vruntime and causes the kworker thread to starve.
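>>>
>>> As a sanity check, the wraparound can be reproduced in userspace with the two values
>>> from the vmcore (a minimal standalone sketch, not kernel code; the variable names are
>>> mine):
>>>
>>> ---------
>>>
>>> #include <stdio.h>
>>> #include <stdint.h>
>>>
>>> int main(void)
>>> {
>>> 	uint64_t kworker_vruntime = 0x918fdb05287da7c3ULL; /* sleeper's stale vruntime */
>>> 	uint64_t min_vruntime = 0x124b17fd59db8d02ULL;     /* cfs_rq->min_vruntime */
>>>
>>> 	/* the same computation as max_vruntime(kworker_vruntime, min_vruntime) */
>>> 	int64_t delta = (int64_t)(min_vruntime - kworker_vruntime);
>>>
>>> 	/* prints a negative delta, so max_vruntime() keeps the stale value */
>>> 	printf("delta = %lld\n", (long long)delta);
>>> 	return 0;
>>> }
>>>
>>> ----------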
>>>
>>> Does anyone have a good suggestion for solving this problem, or a bugfix patch?
>>
>> I don't understand what you think the problem is. Signed overflow is
>> perfectly fine and works as designed here.
>
> hi, Peter and Waiman,
>
> This problem occurred in a production environment running some dpdk services. When it
> occurs, the system becomes unavailable (for example, many network-related commands get
> stuck), so I think it is a real problem.
>
> Most network commands (such as "ip") require rtnl_mutex, but the rtnl_mutex owner is
> blocked in flush_all_backlogs(), waiting for the system_highpri_wq kworker to finish
> flushing the network packets.
>
> However, this highpri kworker had been sleeping for so long that the difference between
> its vruntime and cfs_rq->min_vruntime became huge. On wakeup it keeps its old vruntime
> because of the s64 overflow in max_vruntime(). With this incorrect vruntime, the kworker
> might never be scheduled.
>
> Is it necessary to deal with this problem in the kernel?
> If so, to fix it we could set a task's vruntime to cfs_rq->min_vruntime on wakeup when it
> has been sleeping long enough, avoiding the s64 overflow in max_vruntime(), as follows:
>

hi,

Gentle ping. Please let me know if you have any comments on this issue.

Thanks,

Zhang Qiao.

>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e16e9f0124b0..89df8d7bae66 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4336,10 +4336,14 @@ static void check_spread(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  #endif
>  }
>
> +/* When a task sleeps over 200 days, set its vruntime to cfs_rq->min_vruntime on wakeup. */
> +#define WAKEUP_REINIT_THRESHOLD_NS (200LL * 24 * 3600 * NSEC_PER_SEC)
> +
>  static void
>  place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
>  {
>  	u64 vruntime = cfs_rq->min_vruntime;
> +	struct rq *rq = rq_of(cfs_rq);
>
>  	/*
>  	 * The 'current' period is already promised to the current tasks,
> @@ -4364,8 +4368,11 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
>  		vruntime -= thresh;
>  	}
>
> -	/* ensure we never gain time by being placed backwards. */
> -	se->vruntime = max_vruntime(se->vruntime, vruntime);
> +	if (unlikely(!initial && (s64)(rq_clock_task(rq) - se->exec_start) > WAKEUP_REINIT_THRESHOLD_NS))
> +		se->vruntime = vruntime;
> +	else
> +		/* ensure we never gain time by being placed backwards. */
> +		se->vruntime = max_vruntime(se->vruntime, vruntime);
>  }
>
>  static void check_enqueue_throttle(struct cfs_rq *cfs_rq);
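>
> For intuition on the 200-day threshold: with cfs_rq->curr->load.weight == 2494, vruntime
> advances about NICE_0_LOAD / 2494 ~= 420 ns per ns of wall time, so the s64 range of
> 2^63 ns is exhausted after roughly 2^63 / 420 ns ~= 254 days of sleep. A back-of-the-
> envelope sketch (userspace, assuming the weights above):
>
> ---------
>
> #include <stdio.h>
>
> int main(void)
> {
> 	const double nice_0_load = 1048576.0; /* NICE_0_LOAD, 1 << 20 */
> 	const double weight = 2494.0;         /* cfs_rq->curr->load.weight from the vmcore */
> 	const double factor = nice_0_load / weight;      /* ~420x vruntime inflation */
> 	const double s64_max_ns = 9223372036854775807.0; /* 2^63 - 1 ns */
>
> 	/* prints ~253.9: sleeping longer than this can overflow the s64 delta */
> 	printf("overflow after ~%.1f days of sleep\n",
> 	       s64_max_ns / factor / 1e9 / 3600.0 / 24.0);
> 	return 0;
> }
>
> ----------
>
> This is consistent with the observed ~300-day sleep, and the 200-day threshold sits
> safely below the ~254-day bound.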
>
>
>