Re: [PATCH] sched: prevent getting too much vruntime

From: Peter Zijlstra
Date: Wed Nov 11 2015 - 06:50:54 EST


On Wed, Nov 11, 2015 at 06:48:49PM +0900, Byungchul Park wrote:
> On Wed, Nov 11, 2015 at 10:26:32AM +0100, Peter Zijlstra wrote:
> > On Wed, Nov 11, 2015 at 05:50:27PM +0900, byungchul.park@xxxxxxx wrote:
> >
> > I've not actually read anything; my brain isn't working right today.
> >
> > > +static inline void vruntime_unnormalize(struct cfs_rq *cfs_rq, struct sched_entity *se)
> > > +{
> > > +	se->vruntime += cfs_rq->min_vruntime;
> > > +	if (unlikely((s64)se->vruntime < 0))
> > > +		se->vruntime = 0;
> > > +}
> >
> > But this is broken. This simply _cannot_ be right.
> >
> > vruntime very much needs to wrap in u64 space. While regular time in ns
> > takes some 584 years to wrap, vruntime is scaled. The fastest vruntime
> > is scaled by 2/1024, i.e. it runs 512 times faster than normal time,
> > making it take just over a year to wrap around. This will happen.
>
> Then, do you mean there is no problem even if we compare a vruntime
> that has not yet wrapped against another that has already wrapped?
> I really wonder about that.

It should be; we were really careful with this back when we wrote all
that. All vruntime comparisons should be of the form (s64)(a - b),
which gives the correct order as long as the two values haven't
drifted more than 2^63 apart.
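
A minimal sketch of why the signed-difference form survives the wrap
(plain userspace C for illustration; vruntime_before() is a made-up
name here, mirroring the style of entity_before() in the fair
scheduler):

#include <stdio.h>
#include <stdint.h>

/*
 * Wrap-safe ordering: cast the u64 difference to s64. The result is
 * correct even after one value has wrapped past 2^64, provided the
 * two values are less than 2^63 apart.
 */
static int vruntime_before(uint64_t a, uint64_t b)
{
	return (int64_t)(a - b) < 0;
}

int main(void)
{
	uint64_t a = UINT64_MAX - 10;	/* just short of the wrap */
	uint64_t b = a + 20;		/* wrapped around: b == 9 */

	/* The naive u64 comparison inverts the order across the wrap... */
	printf("naive:     a < b        -> %d\n", (int)(a < b));	/* 0: wrong */

	/* ...while the (s64)(a - b) form keeps the intended order. */
	printf("wrap-safe: before(a, b) -> %d\n", vruntime_before(a, b));	/* 1 */

	return 0;
}

Given that invariant, forcing a wrapped vruntime back to 0, as the
clamp in the patch above does, destroys exactly the relative order
the signed difference relies on.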