Re: [PATCH 0/6] Per-processor private data areas for i386

From: Rusty Russell
Date: Thu Sep 28 2006 - 20:22:48 EST

On Wed, 2006-09-27 at 13:28 -0700, Jeremy Fitzhardinge wrote:
> Pavel Machek wrote:
> > So we have 4% slowdown...
> >
> Yes, that would be the worst-case slowdown in the hot-cache case.
> Rearranging the layout of the GDT would remove any theoretical
> cold-cache slowdown (I haven't measured if there's any impact in practice).
> Rusty has also done more comprehensive benchmarks with his variant of
> this patch series, and found no statistically interesting performance
> difference. Which is pretty much what I would expect, since it doesn't
> increase cache-misses at all.

OK, here are my null-syscall results. This is on an Intel(R) Pentium(R)
4 CPU 3.00GHz (stepping 9), single processor, running an SMP kernel.
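
For anyone reproducing these: "Simple syscall" is lmbench's null-syscall
latency. A minimal sketch of the same idea (lmbench does more careful
timing than this) is to spin a cheap syscall like getppid() in a loop:

#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
        const long iters = 10 * 1000 * 1000;
        struct timeval start, end;
        long i;

        gettimeofday(&start, NULL);
        for (i = 0; i < iters; i++)
                getppid();              /* near-trivial trip into the kernel */
        gettimeofday(&end, NULL);

        double usec = (end.tv_sec - start.tv_sec) * 1e6
                    + (end.tv_usec - start.tv_usec);
        printf("Simple syscall: %.4f microseconds\n", usec / iters);
        return 0;
}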

I ran three configurations: the unmodified kernel, one that saves and
restores %gs across the syscall, and one that also uses %gs for per-cpu
vars, current, and smp_processor_id().
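
For concreteness, the third configuration works roughly like this: each
CPU's GDT gets a segment whose base points at that CPU's per-cpu area,
the kernel keeps that segment's selector in %gs, and a per-cpu access
becomes a single %gs-relative load. A minimal sketch, not the actual
patch (it assumes the CPU number sits at offset 0 of the per-cpu area;
the real patches generate per-variable offsets):

/* Sketch: read the CPU number as a %gs-relative load at offset 0. */
static inline int sketch_smp_processor_id(void)
{
        int cpu;
        asm("movl %%gs:0, %0" : "=r" (cpu));
        return cpu;
}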

Before (unmodified kernel):
swarm5.0:Simple syscall: 0.3734 microseconds
swarm5.1:Simple syscall: 0.3734 microseconds
swarm5.2:Simple syscall: 0.3734 microseconds
swarm5.3:Simple syscall: 0.3734 microseconds

With saving/restoring %gs (the per-cpu configuration measured the same):
swarm5.4:Simple syscall: 0.3801 microseconds
swarm5.5:Simple syscall: 0.3801 microseconds
swarm5.6:Simple syscall: 0.3804 microseconds
swarm5.7:Simple syscall: 0.3801 microseconds

That's roughly a 6.7ns cost (0.3801 - 0.3734 usec) for saving and
restoring %gs, and the other lmbench syscall benchmarks reflected
similar differences where the noise didn't overwhelm them.
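
That cost is basically the extra segment-register traffic on the syscall
path; schematically something like the below (the selector value and
helper names are made up for illustration, and the real changes live in
entry.S, not C helpers):

#define PERCPU_SELECTOR_SKETCH 0xd8     /* assumed GDT selector */

/* On entry: stash the user's %gs, load the kernel's per-cpu selector. */
static inline unsigned long save_and_switch_gs(void)
{
        unsigned long saved;

        asm volatile("movl %%gs, %0\n\t"
                     "movl %1, %%gs"
                     : "=r" (saved)
                     : "r" (PERCPU_SELECTOR_SKETCH));
        return saved;
}

/* On exit: put the user's %gs back. */
static inline void restore_gs(unsigned long saved)
{
        asm volatile("movl %0, %%gs" : : "r" (saved));
}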

On kernbench, the differences were in the noise.

Strangely, I see a ~4% slowdown on fork+exec when %gs is used for
per-cpu vars, which I am now investigating (71.0831 usec before, 71.1725
usec with saving, 73.7458 usec with per-cpu!).
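
If anyone wants to poke at that fork+exec number without lmbench, a
rough harness along the same lines is to time fork()+execve() of a
trivial binary (here /bin/true) plus wait() per iteration:

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
        const int iters = 1000;
        struct timeval start, end;
        int i;

        gettimeofday(&start, NULL);
        for (i = 0; i < iters; i++) {
                pid_t pid = fork();
                if (pid == 0) {
                        execl("/bin/true", "true", (char *)NULL);
                        _exit(1);       /* exec failed */
                } else if (pid > 0) {
                        waitpid(pid, NULL, 0);
                } else {
                        perror("fork");
                        exit(1);
                }
        }
        gettimeofday(&end, NULL);

        double usec = (end.tv_sec - start.tv_sec) * 1e6
                    + (end.tv_usec - start.tv_usec);
        printf("fork+exec: %.4f usec\n", usec / iters);
        return 0;
}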
