[PATCH 0/6] Optimize the cpu hotplug locking -v2

From: Peter Zijlstra
Date: Tue Oct 08 2013 - 06:43:13 EST


The current cpu hotplug lock is a single global lock; therefore excluding
hotplug is a very expensive proposition even though hotplug itself is a rare
occurrence under normal operation.

There is a desire for a more lightweight implementation of
{get,put}_online_cpus() from both the NUMA scheduling and the -RT side.
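
For context, the read side in question looks roughly like this (a trivial
illustration, not code from the patches): anything that must keep the set of
online CPUs stable brackets itself with these calls, which is why making them
cheap matters.

  #include <linux/cpu.h>
  #include <linux/printk.h>

  /* Illustration only: a read-side section that must exclude hotplug
   * while it iterates the online mask. */
  static void count_online(void)
  {
  	int cpu, n = 0;

  	get_online_cpus();		/* hold off hotplug */
  	for_each_online_cpu(cpu)
  		n++;
  	put_online_cpus();		/* allow hotplug again */

  	pr_info("%d cpus online\n", n);
  }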

The current hotplug lock is a full reader-preference lock -- and thus supports
reader recursion. However, since we're making the read-side lock much cheaper,
the expectation is that it will also be used far more often, which in turn
would lead to writer starvation.

Therefore the newly proposed lock is completely fair, albeit somewhat expensive
on the write side. This in turn means that we need a per-task nesting count to
support reader recursion.
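
To see why the count is needed: with a fair lock, a nested read acquisition
would queue behind a pending writer and deadlock against its own outer hold.
A sketch of the shape of the fix (cpuhp_ref and the *_slow() helpers are
illustrative names, not patch 1 verbatim):

  #include <linux/sched.h>

  /* Provided elsewhere in this sketch: the fair, writer-aware slow path. */
  extern void cpuhp_read_lock_slow(void);
  extern void cpuhp_read_unlock_slow(void);

  void get_online_cpus(void)
  {
  	if (current->cpuhp_ref++)	/* nested: outer hold still active */
  		return;
  	cpuhp_read_lock_slow();		/* outermost: take the fair lock */
  }

  void put_online_cpus(void)
  {
  	if (--current->cpuhp_ref)	/* leaving a nested section */
  		return;
  	cpuhp_read_unlock_slow();	/* outermost: release the fair lock */
  }

Only the outermost acquisition ever touches the fair lock, so a recursing
reader never queues behind a writer it is itself blocking.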

This patch-set is in 3 parts:

1) The new hotplug lock implementation, very fast on the read side,
very expensive on the write side; patch 1

2) A new rcu_sync primitive by Oleg (sketched below); patches 2,4-6

3) Employment of the rcu_sync thingy to optimize the write side of the
new hotplug lock; patch 3
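
For those who haven't looked at the rcu_sync bits: the idea is that readers
replace heavyweight synchronization with a plain "is a writer around?" test
whose answer is stabilized by RCU grace periods. Roughly (modulo the exact
names and signatures in patches 2,4-6):

  #include <linux/rcusync.h>

  static struct rcu_sync cpuhp_rss;	/* set up with rcu_sync_init() */

  /* Reader fast path: while the structure is "idle" no writer exists,
   * and none can complete rcu_sync_enter() without first waiting a
   * grace period. The caller must keep a grace period from elapsing
   * across the check (e.g. by disabling preemption). */
  static bool hotplug_reader_fast_path(void)
  {
  	return rcu_sync_is_idle(&cpuhp_rss);
  }

  /* Writer side: enter() waits a grace period so every new reader sees
   * "not idle" and falls back to the slow path; exit() reverses it. */
  static void hotplug_writer(void)
  {
  	rcu_sync_enter(&cpuhp_rss);
  	/* ... exclusive section: readers take the slow path ... */
  	rcu_sync_exit(&cpuhp_rss);
  }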


Barring further comments/objections I'll stuff this lot into -tip and go look
at getting the NUMA bits in there too.

Changes since -v1:
- Addressed all comments from Paul
- Actually made sure to send the version that builds
- Added a few patches from Oleg extending the rcu_sync primitive
- Added Reviewed-by for Oleg to patches 1,3 -- please holler if you disagree!


---
 include/linux/rcusync.h |  57 +++++++++++
 kernel/rcusync.c        | 152 +++++++++++++++++++++++++++++++
 include/linux/cpu.h     |  68 +++++++++++++-
 include/linux/sched.h   |   3
 kernel/Makefile         |   3
 kernel/cpu.c            | 223 +++++++++++++++++++++++++++++++++-------------
 kernel/sched/core.c     |   2
 7 files changed, 444 insertions(+), 64 deletions(-)
