Re: [PATCH v7 5/6] MCS Lock: allow architectures to hook in to contended paths

From: Paul E. McKenney
Date: Sun Jan 19 2014 - 21:35:04 EST


On Thu, Jan 16, 2014 at 04:08:31PM -0800, Tim Chen wrote:
> When contended, architectures may be able to reduce the polling overhead
> in ways which aren't expressible using a simple relax() primitive.
>
> This patch allows architectures to hook into the mcs_{lock,unlock}
> functions for the contended cases only.
>
> From: Will Deacon <will.deacon@xxxxxxx>
> Signed-off-by: Will Deacon <will.deacon@xxxxxxx>

Reviewed-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>

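For readers wondering what an architecture-specific override might look like: an
architecture with wait-for-event style instructions could hook in along the lines
below. This is only an illustrative sketch, not part of this series; wfe() and
sev() stand in for whatever low-power wait and wake primitives the architecture
actually provides, and a real implementation has to respect that architecture's
wakeup guarantees.

#define arch_mcs_spin_lock_contended(l)					\
do {									\
	/* Acquire semantics still come from smp_load_acquire(). */	\
	while (!(smp_load_acquire(l)))					\
		wfe();	/* sleep until an event (or interrupt) */	\
} while (0)

#define arch_mcs_spin_unlock_contended(l)				\
do {									\
	/* Publish the critical section, then wake the waiter. */	\
	smp_store_release((l), 1);					\
	sev();	/* send an event to any CPU sitting in wfe() */		\
} while (0)
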
> ---
> kernel/locking/mcs_spinlock.c | 47 +++++++++++++++++++++++++------------------
> 1 file changed, 27 insertions(+), 20 deletions(-)
>
> diff --git a/kernel/locking/mcs_spinlock.c b/kernel/locking/mcs_spinlock.c
> index 6cdc730..66d8883 100644
> --- a/kernel/locking/mcs_spinlock.c
> +++ b/kernel/locking/mcs_spinlock.c
> @@ -7,19 +7,34 @@
> * It avoids expensive cache bouncings that common test-and-set spin-lock
> * implementations incur.
> */
> -/*
> - * asm/processor.h may define arch_mutex_cpu_relax().
> - * If it is not defined, cpu_relax() will be used.
> - */
> +
> #include <asm/barrier.h>
> #include <asm/cmpxchg.h>
> #include <asm/processor.h>
> #include <linux/compiler.h>
> #include <linux/mcs_spinlock.h>
> +#include <linux/mutex.h>
> #include <linux/export.h>
>
> -#ifndef arch_mutex_cpu_relax
> -# define arch_mutex_cpu_relax() cpu_relax()
> +#ifndef arch_mcs_spin_lock_contended
> +/*
> + * Using smp_load_acquire() provides a memory barrier that ensures
> + * subsequent operations happen after the lock is acquired.
> + */
> +#define arch_mcs_spin_lock_contended(l)				\
> +	while (!(smp_load_acquire(l))) {				\
> +		arch_mutex_cpu_relax();					\
> +	}
> +#endif
> +
> +#ifndef arch_mcs_spin_unlock_contended
> +/*
> + * smp_store_release() provides a memory barrier to ensure all
> + * operations in the critical section have been completed before
> + * unlocking.
> + */
> +#define arch_mcs_spin_unlock_contended(l)				\
> +	smp_store_release((l), 1)
> #endif
>
> /*
> @@ -43,13 +58,9 @@ void mcs_spin_lock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> 		return;
> 	}
> 	ACCESS_ONCE(prev->next) = node;
> -	/*
> -	 * Wait until the lock holder passes the lock down.
> -	 * Using smp_load_acquire() provides a memory barrier that
> -	 * ensures subsequent operations happen after the lock is acquired.
> -	 */
> -	while (!(smp_load_acquire(&node->locked)))
> -		arch_mutex_cpu_relax();
> +
> +	/* Wait until the lock holder passes the lock down. */
> +	arch_mcs_spin_lock_contended(&node->locked);
> }
> EXPORT_SYMBOL_GPL(mcs_spin_lock);
>
> @@ -71,12 +82,8 @@ void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
> 		while (!(next = ACCESS_ONCE(node->next)))
> 			arch_mutex_cpu_relax();
> 	}
> -	/*
> -	 * Pass lock to next waiter.
> -	 * smp_store_release() provides a memory barrier to ensure
> -	 * all operations in the critical section has been completed
> -	 * before unlocking.
> -	 */
> -	smp_store_release(&next->locked, 1);
> +
> +	/* Pass lock to next waiter. */
> +	arch_mcs_spin_unlock_contended(&next->locked);
> }
> EXPORT_SYMBOL_GPL(mcs_spin_unlock);
> --
> 1.7.11.7
>
>
>
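For context, the calling convention is unchanged by this patch: each contender
supplies its own queue node (typically on the stack), and the lock variable
itself is just a pointer to the tail of the queue. A rough usage sketch,
assuming the declarations from <linux/mcs_spinlock.h>:

#include <linux/mcs_spinlock.h>

static struct mcs_spinlock *example_lock;	/* NULL when unlocked */

static void example_critical_section(void)
{
	struct mcs_spinlock node;		/* this CPU's queue entry */

	mcs_spin_lock(&example_lock, &node);
	/* ... critical section ... */
	mcs_spin_unlock(&example_lock, &node);
}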
