Re: [patch] CFS scheduler, -v6

From: Srivatsa Vaddagiri
Date: Sat Apr 28 2007 - 11:16:38 EST


On Sat, Apr 28, 2007 at 03:53:38PM +0200, Ingo Molnar wrote:
> > Won't it help if you update rq->rb_leftmost above from the value
> > returned by rb_first(), so that subsequent calls to first_fair will be
> > sped up?
>
> yeah, indeed. Would you like to do a patch for that?

My pleasure :)

With the patch below applied, I ran a "time -p make -s -j10 bzImage"
test.

2.6.20 + cfs-v6 -> 186.45 (real)
2.6.20 + cfs-v6 + this_patch -> 184.55 (real)

or about a 1% improvement in real wall-clock time. This was with the default
sched_granularity_ns of 6000000. I suspect that the larger the value of
sched_granularity_ns and the larger the number of (SCHED_NORMAL) tasks in the
system, the greater the benefit from this caching, since the cached leftmost
pointer survives longer between tree modifications.


Cache the value returned by rb_first(), for faster subsequent lookups.

Signed-off-by: Srivatsa Vaddagiri <vatsa@xxxxxxxxxx>


---


diff -puN kernel/sched_fair.c~speedup kernel/sched_fair.c
--- linux-2.6.21/kernel/sched_fair.c~speedup 2007-04-28 19:28:08.000000000 +0530
+++ linux-2.6.21-vatsa/kernel/sched_fair.c 2007-04-28 19:34:55.000000000 +0530
@@ -86,7 +86,9 @@ static inline struct rb_node * first_fai
{
if (rq->rb_leftmost)
return rq->rb_leftmost;
- return rb_first(&rq->tasks_timeline);
+ /* Cache the value returned by rb_first() */
+ rq->rb_leftmost = rb_first(&rq->tasks_timeline);
+ return rq->rb_leftmost;
}

static struct task_struct * __pick_next_task_fair(struct rq *rq)
_
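
For completeness: the cached pointer is only safe because the paths that
modify the tree invalidate it. Below is a minimal sketch (hypothetical, not
part of the patch above) of what the dequeue side has to do, assuming the
task's rbtree node is p->run_node as in cfs-v6; the enqueue path similarly
needs to update rq->rb_leftmost when a newly inserted task becomes the new
leftmost:

/*
 * Sketch only: if the node being erased is the cached leftmost,
 * clear the cache so that the next first_fair() call falls back
 * to a real rb_first() lookup instead of returning a stale pointer.
 */
static void __dequeue_task_fair(struct rq *rq, struct task_struct *p)
{
	if (rq->rb_leftmost == &p->run_node)
		rq->rb_leftmost = NULL;

	rb_erase(&p->run_node, &rq->tasks_timeline);
}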

--
Regards,
vatsa