Re: [PATCH v2] locking/mutex: Prevent lock starvation when spinning is enabled

From: Waiman Long
Date: Thu Aug 11 2016 - 11:40:41 EST


On 08/10/2016 02:44 PM, Jason Low wrote:
Imre reported an issue where threads are getting starved when trying
to acquire a mutex. A thread sleeping on a mutex can be arbitrarily
delayed because other threads can continually steal the lock in the
fastpath and/or through optimistic spinning.

Waiman has developed patches that allow waiters to return to optimistic
spinning, thus reducing the probability that starvation occurs. However,
Imre still sees this starvation problem in workloads where optimistic
spinning is disabled.

This patch adds an additional boolean to the mutex that gets used in
the CONFIG_SMP && !CONFIG_MUTEX_SPIN_ON_OWNER case. The flag signifies
whether or not other threads need to yield to a waiter and gets set
when a waiter spends too much time waiting for the mutex. The threshold
is currently set to 16 wakeups, and once the wakeup threshold is exceeded,
other threads must yield to the top waiter. The flag gets cleared
immediately after the top waiter acquires the mutex.

This prevents waiters from getting starved without sacrificing much
performance, as lock stealing is still allowed and only
temporarily disabled when it is detected that a waiter has been waiting
for too long.

Reported-by: Imre Deak <imre.deak@xxxxxxxxx>
Signed-off-by: Jason Low <jason.low2@xxxxxxx>
---
v1->v2:
- Addressed Waiman's suggestions of needing the yield_to_waiter
flag only in the CONFIG_SMP case.

- Make sure to only clear the flag if the thread is the top waiter.

- Refactor code to clear flag into an inline function.

- Rename 'loops' variable name to 'wakeups'.

include/linux/mutex.h | 2 ++
kernel/locking/mutex.c | 83 +++++++++++++++++++++++++++++++++++++++++++-------
2 files changed, 74 insertions(+), 11 deletions(-)

diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 2cb7531..5643a233 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -57,6 +57,8 @@ struct mutex {
#endif
#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
struct optimistic_spin_queue osq; /* Spinner MCS lock */
+#elif defined(CONFIG_SMP)
+ bool yield_to_waiter; /* Prevent starvation when spinning disabled */
#endif
#ifdef CONFIG_DEBUG_MUTEXES
void *magic;
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index a70b90d..faf31a0 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -55,6 +55,8 @@ __mutex_init(struct mutex *lock, const char *name, struct lock_class_key *key)
mutex_clear_owner(lock);
#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
osq_lock_init(&lock->osq);
+#elif defined(CONFIG_SMP)
+ lock->yield_to_waiter = false;
#endif

debug_mutex_init(lock, name, key);
@@ -71,6 +73,9 @@ EXPORT_SYMBOL(__mutex_init);
*/
__visible void __sched __mutex_lock_slowpath(atomic_t *lock_count);

+
+static inline bool need_yield_to_waiter(struct mutex *lock);
+
/**
* mutex_lock - acquire the mutex
* @lock: the mutex to be acquired
@@ -99,7 +104,10 @@ void __sched mutex_lock(struct mutex *lock)
* The locking fastpath is the 1->0 transition from
* 'unlocked' into 'locked' state.
*/
- __mutex_fastpath_lock(&lock->count, __mutex_lock_slowpath);
+ if (!need_yield_to_waiter(lock))
+ __mutex_fastpath_lock(&lock->count, __mutex_lock_slowpath);
+ else
+ __mutex_lock_slowpath(&lock->count);
mutex_set_owner(lock);
}

@@ -398,12 +406,51 @@ done:

return false;
}
+
+static inline void do_yield_to_waiter(struct mutex *lock, int *wakeups)
+{
+ return;
+}
+
+static inline void clear_yield_to_waiter(struct mutex *lock)
+{
+ return;
+}
+
+static inline bool need_yield_to_waiter(struct mutex *lock)
+{
+ return false;
+}
+
#else
static bool mutex_optimistic_spin(struct mutex *lock,
struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
{
return false;
}
+
+#define MUTEX_WAKEUP_THRESHOLD 16
+
+static inline void do_yield_to_waiter(struct mutex *lock, int *wakeups)
+{
+ *wakeups += 1;
+
+ if (*wakeups < MUTEX_WAKEUP_THRESHOLD)
+ return;
+
+ if (!lock->yield_to_waiter)
+ lock->yield_to_waiter = true;
+}
+
+static inline void clear_yield_to_waiter(struct mutex *lock)
+{
+ lock->yield_to_waiter = false;
+}
+
+static inline bool need_yield_to_waiter(struct mutex *lock)
+{
+ return lock->yield_to_waiter;
+}
#endif


The *yield* helper functions should be placed in a separate conditional compilation block, because the declaration of yield_to_waiter may not match the helper functions under certain combinations of config options. For example, with !CONFIG_MUTEX_SPIN_ON_OWNER && !CONFIG_SMP, the non-stub helpers are compiled but the yield_to_waiter field does not exist in struct mutex.

Something like

#if !defined(CONFIG_MUTEX_SPIN_ON_OWNER) && defined(CONFIG_SMP)
...
#else
...
#endif

Cheers,
Longman