[tip:core/locking] x86/smp: Move waiting on contended ticket lock out of line

From: tip-bot for Rik van Riel
Date: Wed Feb 13 2013 - 07:07:42 EST


Commit-ID: 4aef331850b637169ff036ed231f0d236874f310
Gitweb: http://git.kernel.org/tip/4aef331850b637169ff036ed231f0d236874f310
Author: Rik van Riel <riel@xxxxxxxxxx>
AuthorDate: Wed, 6 Feb 2013 15:04:03 -0500
Committer: Ingo Molnar <mingo@xxxxxxxxxx>
CommitDate: Wed, 13 Feb 2013 09:06:28 +0100

x86/smp: Move waiting on contended ticket lock out of line

Moving the wait loop for contended ticket locks into its own function
allows us to add things to that wait loop without growing the kernel
text size appreciably.

Signed-off-by: Rik van Riel <riel@xxxxxxxxxx>
Reviewed-by: Steven Rostedt <rostedt@xxxxxxxxxxxx>
Reviewed-by: Michel Lespinasse <walken@xxxxxxxxxx>
Reviewed-by: Rafael Aquini <aquini@xxxxxxxxxx>
Cc: eric.dumazet@xxxxxxxxx
Cc: lwoodman@xxxxxxxxxx
Cc: knoel@xxxxxxxxxx
Cc: chegu_vinod@xxxxxx
Cc: raghavendra.kt@xxxxxxxxxxxxxxxxxx
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Peter Zijlstra <a.p.zijlstra@xxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Link: http://lkml.kernel.org/r/20130206150403.006e5294@xxxxxxxxxxxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
 arch/x86/include/asm/spinlock.h | 11 +++++------
 arch/x86/kernel/smp.c           | 14 ++++++++++++++
 2 files changed, 19 insertions(+), 6 deletions(-)
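
Not part of the patch, but for readers following along outside the x86 tree:
below is a minimal user-space sketch of the same structure, with the
ticket-lock fast path kept inline and only the contended wait loop out of
line. C11 atomics stand in for the kernel's xadd()/ACCESS_ONCE() primitives,
and all names (my_ticket_lock, my_ticket_acquire, my_ticket_wait,
my_ticket_release) are made up for illustration.

/* Illustrative user-space sketch only; not part of the patch. */
#include <stdatomic.h>
#include <stdint.h>

struct my_ticket_lock {
        atomic_ushort head;     /* ticket currently allowed to run */
        atomic_ushort tail;     /* next ticket to hand out */
};

/* Out of line: only contended acquisitions ever execute this code. */
static void my_ticket_wait(struct my_ticket_lock *lock, uint16_t ticket)
{
        while (atomic_load_explicit(&lock->head, memory_order_acquire) != ticket)
                ;       /* the kernel spins with cpu_relax() here */
}

/* Inline fast path: take a ticket, call out only if we have to wait. */
static inline void my_ticket_acquire(struct my_ticket_lock *lock)
{
        uint16_t ticket = atomic_fetch_add_explicit(&lock->tail, 1,
                                                    memory_order_relaxed);

        if (atomic_load_explicit(&lock->head, memory_order_acquire) != ticket)
                my_ticket_wait(lock, ticket);
}

/* Release: advance head so the next ticket in the queue may proceed. */
static inline void my_ticket_release(struct my_ticket_lock *lock)
{
        atomic_fetch_add_explicit(&lock->head, 1, memory_order_release);
}

The uncontended acquire stays a handful of inlined instructions at every
call site, while the wait loop is emitted exactly once and can be extended
without bloating callers, which is the point of the patch.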

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 33692ea..dc492f6 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -34,6 +34,8 @@
# define UNLOCK_LOCK_PREFIX
#endif

+extern void ticket_spin_lock_wait(arch_spinlock_t *, struct __raw_tickets);
+
/*
* Ticket locks are conceptually two parts, one indicating the current head of
* the queue, and the other indicating the current tail. The lock is acquired
@@ -53,12 +55,9 @@ static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)

inc = xadd(&lock->tickets, inc);

- for (;;) {
- if (inc.head == inc.tail)
- break;
- cpu_relax();
- inc.head = ACCESS_ONCE(lock->tickets.head);
- }
+ if (inc.head != inc.tail)
+ ticket_spin_lock_wait(lock, inc);
+
barrier(); /* make sure nothing creeps before the lock is taken */
}

diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
index 48d2b7d..20da354 100644
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -113,6 +113,20 @@ static atomic_t stopping_cpu = ATOMIC_INIT(-1);
static bool smp_no_nmi_ipi = false;

/*
+ * Wait on a contended ticket spinlock.
+ */
+void ticket_spin_lock_wait(arch_spinlock_t *lock, struct __raw_tickets inc)
+{
+ for (;;) {
+ cpu_relax();
+ inc.head = ACCESS_ONCE(lock->tickets.head);
+
+ if (inc.head == inc.tail)
+ break;
+ }
+}
+
+/*
* this function sends a 'reschedule' IPI to another CPU.
* it goes straight through and wastes no time serializing
* anything. Worst case is that we lose a reschedule ...
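
Also not part of the patch: as one illustration of why an out-of-line wait
loop is convenient to extend ("add things to that wait loop" above), here is
a hypothetical variant of the sketch's wait function with a crude
proportional backoff, polling the lock less often the further back in the
ticket queue the caller is. The backoff policy and all names are, again,
made up for illustration.

/* Hypothetical extension of the earlier sketch; not in this patch. */
#include <stdatomic.h>
#include <stdint.h>

struct my_ticket_lock {                 /* same layout as the sketch above */
        atomic_ushort head;
        atomic_ushort tail;
};

static void my_ticket_wait_backoff(struct my_ticket_lock *lock, uint16_t ticket)
{
        for (;;) {
                uint16_t head = atomic_load_explicit(&lock->head,
                                                     memory_order_acquire);
                if (head == ticket)
                        return;

                /*
                 * Delay roughly in proportion to our distance from the
                 * head of the queue, so far-away waiters touch the lock's
                 * cache line less often.  The volatile counter keeps the
                 * compiler from optimizing the delay loop away.
                 */
                for (volatile uint16_t i = (uint16_t)(ticket - head); i; i--)
                        ;
        }
}

Because __ticket_spin_lock() only calls out of line on contention, extra
logic like this would never be paid for by the uncontended fast path.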
--