Re: Bug in scheduler when using rt_mutex

From: Yong Zhang
Date: Fri Jan 21 2011 - 07:24:36 EST


On Fri, Jan 21, 2011 at 12:08:56PM +0100, Peter Zijlstra wrote:
> > That's ok, we don't and aren't supposed to care what happens while he's
> > gone. But we do have to make sure that vruntime is sane either when he
> > leaves, or when he comes back. Seems to me the easiest is to clip when he
> > leaves to cover him having slept a long time before leaving, then coming
> > back on us as a runner. If he comes back as a sleeper, he'll be clipped
> > again anyway, so all is well.
> >
> > sched_fork() should probably zero child's vruntime too, so non-fair
> > children can't enter fair_class with some bogus lag they never had.
>
> Something like so?
>
> Index: linux-2.6/kernel/sched.c
> ===================================================================
> --- linux-2.6.orig/kernel/sched.c
> +++ linux-2.6/kernel/sched.c
> @@ -2624,6 +2624,8 @@ void sched_fork(struct task_struct *p, i
>
>  	if (!rt_prio(p->prio))
>  		p->sched_class = &fair_sched_class;
> +	else
> +		p->se.vruntime = 0;

This can be moved to __sched_fork()
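
For illustration, a rough sketch of what that would look like (untested;
the neighbouring se fields are from memory and vary by version):

static void __sched_fork(struct task_struct *p)
{
	p->se.exec_start		= 0;
	p->se.sum_exec_runtime		= 0;
	p->se.prev_sum_exec_runtime	= 0;
	p->se.nr_migrations		= 0;
	p->se.vruntime			= 0;	/* no stale lag for !fair children */
}

Then the zeroing is unconditional and the else branch above goes away.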

>
>  	if (p->sched_class->task_fork)
>  		p->sched_class->task_fork(p);
> Index: linux-2.6/kernel/sched_fair.c
> ===================================================================
> --- linux-2.6.orig/kernel/sched_fair.c
> +++ linux-2.6/kernel/sched_fair.c
> @@ -4086,8 +4086,14 @@ static void switched_from_fair(struct rq
>  	 * have normalized the vruntime, if it was !on_rq, then only when
>  	 * the task is sleeping will it still have non-normalized vruntime.
>  	 */
> -	if (!se->on_rq && p->state != TASK_RUNNING)
> +	if (!se->on_rq && p->state != TASK_RUNNING) {
> +		/*
> +		 * Fix up our vruntime so that the current sleep doesn't
> +		 * cause 'unlimited' sleep bonus.
> +		 */
> +		place_entity(cfs_rq, se, 0);
>  		se->vruntime -= cfs_rq->min_vruntime;
> +	}

Now I will say yes.
Though it's the same as my suggestion, which I rejected myself :)
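
(To spell out why the clip works: place_entity() never moves vruntime
backwards, it only pulls a long sleeper up toward min_vruntime. Roughly,
simplified from sched_fair.c of this era, with feature checks like
GENTLE_FAIR_SLEEPERS dropped:

static void
place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
{
	u64 vruntime = cfs_rq->min_vruntime;

	if (initial)
		vruntime += sched_vslice(cfs_rq, se);	/* debit new tasks */
	else
		vruntime -= sysctl_sched_latency >> 1;	/* bounded sleeper bonus */

	/* ensure we never gain time by being placed backwards */
	se->vruntime = max_vruntime(se->vruntime, vruntime);
}

So with initial=0 the sleep bonus is capped at about half a latency
period, and the following 'se->vruntime -= cfs_rq->min_vruntime' can't
carry unlimited credit out of the fair class.)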

Thanks,
Yong