Re: [RFC PATCH 1/3] x86,smp: move waiting on contended lock out of line

From: Rik van Riel
Date: Fri Dec 21 2012 - 23:44:39 EST


On 12/21/2012 11:40 PM, Michel Lespinasse wrote:
On Fri, Dec 21, 2012 at 3:50 PM, Rik van Riel <riel@xxxxxxxxxx> wrote:

@@ -53,12 +55,11 @@ static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)

 	inc = xadd(&lock->tickets, inc);

+	if (inc.head == inc.tail)
+		goto out;
+
+	ticket_spin_lock_wait(lock, inc);
+ out:

Why not just:

	if (inc.head != inc.tail)
		ticket_spin_lock_wait(lock, inc);

That makes the code nicer, thank you. Applied.
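
For reference, a minimal sketch of how the inline fast path reads with that
simplification folded in. The ticket initialisation and the trailing barrier()
are assumed from the current arch/x86/include/asm/spinlock.h, so treat this as
an illustration rather than the exact applied code:

static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
{
	register struct __raw_tickets inc = { .tail = 1 };

	/* Take a ticket and read the current head in one atomic xadd. */
	inc = xadd(&lock->tickets, inc);

	/* Uncontended case: our ticket is already being served. */
	if (inc.head != inc.tail)
		ticket_spin_lock_wait(lock, inc);	/* contended: spin out of line */

	barrier();	/* make sure nothing creeps in before the lock is taken */
}

The uncontended acquisition stays inline; only the spinning on a contended
lock moves out of line.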

+++ b/arch/x86/kernel/smp.c
@@ -113,6 +113,20 @@ static atomic_t stopping_cpu = ATOMIC_INIT(-1);
 static bool smp_no_nmi_ipi = false;

 /*
+ * Wait on a congested ticket spinlock.
+ */
+void ticket_spin_lock_wait(arch_spinlock_t *lock, struct __raw_tickets inc)
+{
+	for (;;) {
+		cpu_relax();
+		inc.head = ACCESS_ONCE(lock->tickets.head);
+
+		if (inc.head == inc.tail)
+			break;
+	}

Why not just:

do {
	cpu_relax();
	inc.head = ...
} while (inc.head != inc.tail);


Other than that, no problems with the principle of it.

In patch #3 I do something else inside the head == tail
conditional block, so this one is best left alone.
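
To make that concrete, here is a hedged sketch of the shape the later patch
relies on; tune_spinlock_delay() below is a hypothetical placeholder for that
extra work, not code from the actual series:

void ticket_spin_lock_wait(arch_spinlock_t *lock, struct __raw_tickets inc)
{
	for (;;) {
		cpu_relax();
		inc.head = ACCESS_ONCE(lock->tickets.head);

		if (inc.head == inc.tail) {
			/* The conditional block leaves room for per-acquisition
			 * work; tune_spinlock_delay() is purely illustrative. */
			tune_spinlock_delay();
			break;
		}
	}
}

With a plain do/while the loop exits straight from the condition, leaving no
block in which to hang that extra work.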

Thank you for the comments.
