[PATCH v5 2/3] locking/mutex: Enable optimistic spinning of woken task in wait queue

From: Waiman Long
Date: Wed Aug 10 2016 - 14:26:01 EST


Ding Tianhong reported a live-lock situation where a constant stream
of incoming optimistic spinners blocked a task in the wait list from
getting the mutex.

This patch attempts to alleviate this live-lock condition by letting
the woken task in the wait queue enter an optimistic spinning loop of
its own, in parallel with the regular spinners in the OSQ. This helps
to reduce the chance of live-locking.
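
For reference, here is a condensed sketch of the reworked wait loop.
This is only an illustration, not the patch itself (see the diff
further down); the signal handling and ww-mutex bookkeeping are
elided, and got_lock() is a hypothetical stand-in for the existing
lock-acquisition retry:

        bool acquired = false;          /* true once we own the mutex */

        while (!acquired) {
                if (got_lock(lock))     /* stand-in: retry the lock */
                        break;
                /* ... signal / ww-mutex deadlock checks elided ... */
                spin_unlock_mutex(&lock->wait_lock, flags);
                schedule_preempt_disabled();    /* sleep until woken */

                /*
                 * New: after wakeup, spin for the mutex alongside the
                 * OSQ spinners instead of going straight back to sleep.
                 */
                acquired = mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx,
                                                 true);
                spin_lock_mutex(&lock->wait_lock, flags);
        }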

In AIM7 benchmark runs on a 4-socket E7-4820 v3 system (with an ext4
filesystem), the additional waiter spinning improved performance for
the following workloads at high user counts:

Workload       % Improvement
--------       -------------
alltests            3.9%
disk                3.4%
fserver             2.0%
long                3.8%
new_fserver        10.5%

The other workloads were about the same as before.

Signed-off-by: Waiman Long <Waiman.Long@xxxxxxx>
---
kernel/locking/mutex.c | 13 ++++++++++++-
1 files changed, 12 insertions(+), 1 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 3bcbbd1..15b521a 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -531,6 +531,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
         struct task_struct *task = current;
         struct mutex_waiter waiter;
         unsigned long flags;
+        bool acquired = false;  /* True if the lock is acquired */
         int ret;
 
         if (use_ww_ctx) {
@@ -567,7 +568,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 
         lock_contended(&lock->dep_map, ip);
 
-        for (;;) {
+        while (!acquired) {
                 /*
                  * Lets try to take the lock again - this is needed even if
                  * we get here for the first time (shortly after failing to
@@ -602,6 +603,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
                 /* didn't get the lock, go to sleep: */
                 spin_unlock_mutex(&lock->wait_lock, flags);
                 schedule_preempt_disabled();
+
+                /*
+                 * Optimistically spinning on the mutex without the wait lock.
+                 */
+                acquired = mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx,
+                                                 true);
                 spin_lock_mutex(&lock->wait_lock, flags);
         }
         __set_task_state(task, TASK_RUNNING);
@@ -612,6 +619,9 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
                 atomic_set(&lock->count, 0);
         debug_mutex_free_waiter(&waiter);
 
+        if (acquired)
+                goto unlock;
+
 skip_wait:
         /* got the lock - cleanup and rejoice! */
         lock_acquired(&lock->dep_map, ip);
@@ -622,6 +632,7 @@ skip_wait:
                 ww_mutex_set_context_slowpath(ww, ww_ctx);
         }
 
+unlock:
         spin_unlock_mutex(&lock->wait_lock, flags);
         preempt_enable();
         return 0;
--
1.7.1