Balancing near the locking cliff, with some numbers

From: Andi Kleen
Date: Fri Nov 04 2005 - 14:57:05 EST



I recently generated some data on how many locks/semaphores/atomics/memory
barriers simple system calls use. The original idea for this is from Dipankar,
who did similar things for an older kernel.

I thought some folks might find these numbers interesting.

Making the kernel more scalable requires more locks, atomic counts etc.,
but each lock adds overhead even when it is uncontended (on x86 everything
with a LOCK prefix is costly, with the exact cost depending on the CPU). The
locking cliff is the point where you have added so many locks that (a) nobody
can understand or debug all the dependencies anymore and (b) basic performance
gets worse and worse just from the locking overhead.
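
To get a feel for how costly a LOCK prefix is even without any contention,
here is a minimal user space sketch (not part of the measurements below; the
loop count and the rdtsc based timing are just illustrative) that times a
plain incl against a lock incl:

#include <stdio.h>
#include <stdint.h>

/* read the CPU timestamp counter (x86 only) */
static inline uint64_t rdtsc(void)
{
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
        int counter = 0;
        uint64_t t0, t1, t2;
        int i;

        /* plain, non-atomic increment */
        t0 = rdtsc();
        for (i = 0; i < 1000000; i++)
                __asm__ __volatile__("incl %0" : "+m"(counter));
        t1 = rdtsc();

        /* same increment, but with a LOCK prefix (uncontended) */
        for (i = 0; i < 1000000; i++)
                __asm__ __volatile__("lock; incl %0" : "+m"(counter));
        t2 = rdtsc();

        printf("plain incl: %llu cycles/op\n",
               (unsigned long long)((t1 - t0) / 1000000));
        printf("lock incl:  %llu cycles/op\n",
               (unsigned long long)((t2 - t1) / 1000000));
        return 0;
}

The gap between the two numbers varies a lot by CPU generation, which is why
the cost of sprinkling more atomics around is hard to reason about in general.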

Here are some results:

open:
locks: 106 sems: 32 atomics: 50 rwlocks: 30 irqsaves: 89 barriers: 47
read:
locks: 52 sems: 0 atomics: 16 rwlocks: 12 irqsaves: 69 barriers: 11
write:
locks: 38 sems: 4 atomics: 20 rwlocks: 8 irqsaves: 42 barriers: 12
page fault:
locks: 4 sems: 2 atomics: 2 rwlocks: 0 irqsaves: 9 barriers: 0

open: open a file on ext3
read: read a cache cold file from disk
write: write a new file
page fault: fault in an empty page

Notes:
I generated the numbers by running an instrumented SMP kernel
on my P-M laptop. It's not from a real multi processor machine.
The numbers always count lock and unlock separately.
atomics means any atomic_t operation.
barriers includes mb/wmb/rmb.
irqsaves don't have a LOCK prefix, but at least on some CPUs they tend to be
quite slow too.
A rough sketch of how this kind of counting can be wired up is below.
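
This is not the actual instrumentation patch; all the names here are made up,
and the real thing hooks the kernel spinlock/atomic/barrier primitives
directly rather than user space ones. But the idea is the same: wrap each
primitive so it bumps a counter, then look at the counters around the code
being measured.

#include <stdio.h>
#include <pthread.h>

/* per-category event counters, mirroring the columns above */
struct lockstat_counts {
        unsigned long locks;     /* lock + unlock, counted separately */
        unsigned long atomics;   /* any atomic_t style operation */
        unsigned long barriers;  /* mb/wmb/rmb */
};

static struct lockstat_counts stats;
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

/* each wrapper bumps its counter, then calls the real primitive */
#define counted_lock(l)        do { stats.locks++;    pthread_mutex_lock(l);      } while (0)
#define counted_unlock(l)      do { stats.locks++;    pthread_mutex_unlock(l);    } while (0)
#define counted_barrier()      do { stats.barriers++; __sync_synchronize();       } while (0)
#define counted_atomic_inc(p)  do { stats.atomics++;  __sync_fetch_and_add(p, 1); } while (0)

int main(void)
{
        int refcount = 0;

        /* a toy critical section standing in for one syscall's work */
        counted_lock(&m);
        counted_atomic_inc(&refcount);
        counted_barrier();
        counted_unlock(&m);

        /* prints: locks: 2 atomics: 1 barriers: 1 */
        printf("locks: %lu atomics: %lu barriers: %lu\n",
               stats.locks, stats.atomics, stats.barriers);
        return 0;
}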

Summary:
I don't think Linux has fallen over the locking cliff yet. But it's getting
dangerously near. Before you add new locking or atomics, always think twice
and only add them when there is really a very good reason for it.
Even on MP systems, using fewer locks and atomics can be faster.

-Andi