Re: [patch] sched: fix b5d9d734 blunder in task_new_fair()

From: Peter Zijlstra
Date: Fri Nov 27 2009 - 07:38:55 EST


On Fri, 2009-11-27 at 13:21 +0100, Peter Zijlstra wrote:
> +static struct rq *
> +balance_task(struct task_struct *p, int sd_flags, int wake_flags)
> +{
> +	struct rq *rq, *old_rq;
> +	u64 vdelta;
> +	int cpu;
> +
> +	rq = old_rq = task_rq(p);
> +
> +	/* snapshot the old cfs_rq's min_vruntime before dropping the lock */
> +	if (p->sched_class == &fair_sched_class)
> +		vdelta = task_cfs_rq(p)->min_vruntime;
> +
> +	__task_rq_unlock(old_rq);
> +
> +	cpu = select_task_rq(p, sd_flags, wake_flags);
> +
> +	rq = cpu_rq(cpu);
> +	spin_lock(&rq->lock);
> +	if (rq == old_rq)
> +		return rq;
> +
> +	update_rq_clock(rq);
> +
> +	set_task_cpu_all(p, task_cpu(p), cpu);
> +
> +	/* carry the entity's lag over to the new cfs_rq's min_vruntime */
> +	if (p->sched_class == &fair_sched_class) {
> +		vdelta -= task_cfs_rq(p)->min_vruntime;
> +		p->se.vruntime -= vdelta;
> +	}
> +
> +	return rq;
> +}

Feh, there's a much easier way to deal with that min_vruntime crap.

Do se->vruntime -= cfs_rq->min_vruntime on dequeue, and
se->vruntime += cfs_rq->min_vruntime on enqueue.

That leaves the whole thing invariant to the cfs_rq when it's not
enqueued, so we don't have to fix it up when moving it around.
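
To see why that's enough, here's a throw-away userspace toy (not a
patch; the struct and function names are made up for illustration):
the entity's lag against min_vruntime survives the move untouched,
however far apart the two cfs_rq clocks are.

/*
 * Toy model, plain C, not kernel code: keep se->vruntime relative to
 * min_vruntime while the entity is dequeued, so migrating it between
 * cfs_rqs needs no explicit delta fixup.
 */
#include <stdio.h>

typedef unsigned long long u64;

struct cfs_rq { u64 min_vruntime; };
struct sched_entity { u64 vruntime; };

static void dequeue(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	/* make vruntime cfs_rq-relative while not queued */
	se->vruntime -= cfs_rq->min_vruntime;
}

static void enqueue(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	/* re-base onto whatever cfs_rq we land on */
	se->vruntime += cfs_rq->min_vruntime;
}

int main(void)
{
	struct cfs_rq rq0 = { .min_vruntime = 1000000 };
	struct cfs_rq rq1 = { .min_vruntime =    5000 };
	struct sched_entity se = { .vruntime = 1002000 }; /* 2000 behind of nothing, 2000 ahead of rq0 */

	dequeue(&rq0, &se);	/* se.vruntime == 2000, no cfs_rq state left */
	enqueue(&rq1, &se);	/* se.vruntime == 7000, still 2000 ahead */

	printf("lag on rq1: %llu\n", se.vruntime - rq1.min_vruntime);
	return 0;
}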

Also, note that I ripped out the clock_offset thingy, because with the
current sched_clock.c stuff clocks should get synchronized when we do a
remote clock update (or at least appear monotonic).

Getting these two things sorted returns set_task_cpu() to sanity.

/me tosses patch and starts over.
