Re: Context switch times

From: Michailidis, Dimitrios (dm@desanasystems.com)
Date: Fri Oct 05 2001 - 01:31:02 EST


> I believe the 'quick and dirty' patches we came up with substantially
> increased context switch times for this benchmark (doubled them).
> The reason is that you needed to add IPI time to the context switch
> time. Therefore, I did not actively pursue getting these accepted. :)
> It appears that something in the 2.2 scheduler did a much better
> job of handling this situation. I haven't looked at the 2.2 code.
> Does anyone know what feature of the 2.2 scheduler was more successful
> in keeping tasks on the CPUs on which they previously executed?
> Also, why was that code removed from 2.4? I can research, but I
> suspect someone here has firsthand knowledge.

The reason 2.2 does better is that, under some conditions, if a woken-up
process's preferred CPU is busy, the scheduler will refrain from moving the
process to another CPU even if there are many idle CPUs, in the hope that
the preferred CPU will become available soon. This can leave processes
sitting on the run queue while CPUs idle, but it works great for lmbench. OTOH
2.4 assigns processes to CPUs as soon as possible. IIRC this change
happened in one of the early 2.3.4x kernels.
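To make the difference concrete, here is a minimal user-space sketch of the
two wakeup policies. This is not the actual 2.2/2.4 scheduler code; the names
(pick_cpu_22, pick_cpu_24, toy_task) and the fixed CPU array are made up for
the illustration:

/*
 * Toy illustration of the two wakeup policies described above.
 * NOT the real kernel code; everything here is simplified.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4
#define NO_CPU  (-1)

struct toy_task {
    int last_cpu;   /* CPU the task last ran on (its "preferred" CPU) */
};

static bool cpu_idle[NR_CPUS];

static int first_idle_cpu(void)
{
    for (int cpu = 0; cpu < NR_CPUS; cpu++)
        if (cpu_idle[cpu])
            return cpu;
    return NO_CPU;
}

/*
 * 2.2-style policy: if the preferred CPU is busy, leave the task on
 * the run queue (return NO_CPU) and hope the preferred CPU frees up
 * soon, even though other CPUs may be idle.  Great for lmbench-style
 * ping-pong tests (warm cache, no IPI), but it can leave CPUs idle
 * while runnable tasks wait.
 */
static int pick_cpu_22(const struct toy_task *t)
{
    if (cpu_idle[t->last_cpu])
        return t->last_cpu;
    return NO_CPU;            /* wait for the preferred CPU */
}

/*
 * 2.4-style policy: run the task as soon as possible, taking the
 * preferred CPU if it is idle and any idle CPU otherwise.  No CPU
 * idles while tasks are runnable, but a migration means a cold cache
 * and an IPI to the target CPU.
 */
static int pick_cpu_24(const struct toy_task *t)
{
    if (cpu_idle[t->last_cpu])
        return t->last_cpu;
    return first_idle_cpu();  /* migrate rather than wait */
}

int main(void)
{
    struct toy_task t = { .last_cpu = 0 };

    cpu_idle[0] = false;      /* preferred CPU is busy */
    cpu_idle[1] = true;       /* another CPU sits idle */

    printf("2.2-style choice: %d\n", pick_cpu_22(&t));  /* -1: wait    */
    printf("2.4-style choice: %d\n", pick_cpu_24(&t));  /*  1: migrate */
    return 0;
}

With the preferred CPU busy and CPU 1 idle, the 2.2-style helper returns -1
(wait) and the 2.4-style helper returns 1 (migrate), which is exactly the
tradeoff described above: warm cache and no IPI versus no idle CPUs while
work is queued.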

---
Dimitris Michailidis				 dm@desanasystems.com


