Context switch times

From: Mike Kravetz (kravetz@us.ibm.com)
Date: Thu Oct 04 2001 - 16:04:17 EST


I've been working on a rewrite of our Multi-Queue scheduler
and am using the lat_ctx program of LMbench as a benchmark.
I'm lucky enough to have access to an 8-CPU system for use
during development. At one point I accidentally booted the
kernel that came with the distribution installed on this
machine. That kernel level is '2.2.16-22'. The results of
running lat_ctx on this kernel, compared to 2.4.10, really
surprised me. Here is an example:

2.4.10 on 8 CPUs: lat_ctx -s 0 -r 2 results
"size=0k ovr=2.27
2 3.86

2.2.16-22 on 8 CPUs: lat_ctx -s 0 -r 2 results
"size=0k ovr=1.99
2 1.44

As you can see, the context switch times for 2.4.10 are more
than double what they were for 2.2.16-22 in this example.

Comments?

One observation I did make is that this may be related to CPU
affinity/cache warmth. If you increase the number of 'TRIPS'
to a very large number, you can run 'top' and observe per-CPU
utilization. On 2.2.16-22, the 2-task benchmark seemed to
stay on 3 of the 8 CPUs. On 2.4.10, the 2 tasks ran across
all 8 CPUs, with roughly equal utilization on each.

-- 
Mike Kravetz                                  kravetz@us.ibm.com
IBM Peace, Love and Linux Technology Center



This archive was generated by hypermail 2b29 : Sun Oct 07 2001 - 21:00:34 EST