Re: [PATCH RFC 1/3] sched: introduce distinct per-cpu load average

From: Peter Zijlstra
Date: Thu Oct 04 2012 - 05:00:25 EST


On Thu, 2012-10-04 at 01:05 +0200, Andrea Righi wrote:
> +++ b/kernel/sched/core.c
> @@ -727,15 +727,17 @@ static void dequeue_task(struct rq *rq, struct task_struct *p, int flags)
>  void activate_task(struct rq *rq, struct task_struct *p, int flags)
>  {
>          if (task_contributes_to_load(p))
> -                rq->nr_uninterruptible--;
> +                cpu_rq(p->on_cpu_uninterruptible)->nr_uninterruptible--;
>
>          enqueue_task(rq, p, flags);
>  }

That's completely broken; you cannot do non-atomic cross-CPU
modifications like that. Also, adding an atomic op to the wakeup/sleep
paths isn't going to be popular at all.
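
To illustrate (a hypothetical sketch, not something the patch proposes): the
plain -- is a load/modify/store on another CPU's runqueue counter without that
runqueue's lock held, so two concurrent wakeups of tasks that went to sleep on
the same CPU can lose an update. Making the remote decrement safe would mean
something like

        /* hypothetical: assumes nr_uninterruptible is made an atomic_long_t */
        if (task_contributes_to_load(p))
                atomic_long_dec(&cpu_rq(p->on_cpu_uninterruptible)->nr_uninterruptible);

in activate_task(), which is exactly the atomic op in the wakeup path that
won't be popular.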

>  void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
>  {
> -        if (task_contributes_to_load(p))
> -                rq->nr_uninterruptible++;
> +        if (task_contributes_to_load(p)) {
> +                task_rq(p)->nr_uninterruptible++;
> +                p->on_cpu_uninterruptible = task_cpu(p);
> +        }
>
>          dequeue_task(rq, p, flags);
>  }

This looks pointless; at deactivate time task_rq() had better be rq, or
something is terribly broken.
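
Put differently (a sketch only, not something that needs adding):
deactivate_task() is always called with rq being the sleeping task's own
runqueue, lock held, so

        void deactivate_task(struct rq *rq, struct task_struct *p, int flags)
        {
                /* hypothetical check: rq is the task's own runqueue here */
                WARN_ON_ONCE(task_rq(p) != rq);

                if (task_contributes_to_load(p)) {
                        rq->nr_uninterruptible++;
                        p->on_cpu_uninterruptible = cpu_of(rq);
                }

                dequeue_task(rq, p, flags);
        }

would behave identically; task_rq(p) is just a more expensive way to spell rq,
and task_cpu(p) a more expensive way to spell cpu_of(rq).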