Re: [PATCH 1/2] mutex: In mutex_spin_on_owner(), return true when owner changes

From: Davidlohr Bueso
Date: Wed Feb 04 2015 - 23:36:29 EST


On Mon, 2015-02-02 at 13:59 -0800, Jason Low wrote:
> In mutex_spin_on_owner(), we return true only if lock->owner == NULL.
> This was beneficial in situations where there were multiple threads
> simultaneously spinning for the mutex. If another thread got the lock
> while other spinner(s) were also doing mutex_spin_on_owner(), then the
> other spinners would stop spinning. This workaround reduced the chance
> of many spinners simultaneously spinning for the mutex, which helped
> limit contention in highly contended cases.
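
For reference, the old exit path was essentially the following (a
paraphrased sketch, not a verbatim copy of kernel/locking/mutex.c):

        rcu_read_lock();
        while (owner_running(lock, owner)) {
                if (need_resched())
                        break;
                cpu_relax_lowlatency();
        }
        rcu_read_unlock();

        /*
         * We stopped spinning because we need to reschedule or because
         * the owner changed. Only report success if the lock was freed.
         */
        return lock->owner == NULL;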
>
> However, recent changes were made to the optimistic spinning code:
> instead of having all spinners simultaneously spin for the mutex, we
> now queue the spinners on an MCS lock so that only one thread spins
> for the mutex at a time. Furthermore, the OSQ optimizations ensure
> that spinners in the queue will stop waiting if they need to reschedule.
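
Right, and the queueing means the owner-spin is already serialized;
mutex_optimistic_spin() is conceptually structured like this
(simplified sketch, with the acquisition details elided):

        if (!osq_lock(&lock->osq))
                return false;   /* at most one task spins on the owner */

        while (true) {
                struct task_struct *owner = READ_ONCE(lock->owner);

                if (owner && !mutex_spin_on_owner(lock, owner))
                        break;  /* owner slept or we must reschedule */

                /* ... try to take the lock, otherwise relax and retry ... */
        }

        osq_unlock(&lock->osq);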
>
> Now, we don't have to worry about multiple threads spinning on owner
> at the same time, and if lock->owner is not NULL at this point, it likely
> means another thread happened to acquire the lock in the fastpath. In this
> case, it would make sense for the spinner to continue spinning as long
> as the spinner doesn't need to schedule and the mutex owner is running.
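
Makes sense: with the count-based fastpath, a non-NULL owner at that
point just means another task won the atomic race. Conceptually (very
simplified, assuming a cmpxchg-style fastpath; the per-arch fastpaths
differ in detail):

        /* fastpath: the 1 -> 0 count transition is an uncontended acquire */
        if (atomic_cmpxchg(&lock->count, 1, 0) == 1) {
                mutex_set_owner(lock);  /* lock->owner goes non-NULL */
                return;
        }
        /* otherwise fall through to the slowpath / optimistic spinning */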
>
> This patch changes mutex_spin_on_owner() to return true when the lock
> owner changes, which means a thread will only stop spinning if it
> either needs to reschedule or if the lock owner is not running.
>
> We saw up to a 5% performance improvement in the fserver workload with
> this patch.
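
So after this change the loop essentially becomes (sketch of the
reworked mutex_spin_on_owner(), modulo the exact memory barriers):

        bool ret = true;

        rcu_read_lock();
        while (lock->owner == owner) {
                /* same owner: keep spinning unless we must stop */
                if (!owner->on_cpu || need_resched()) {
                        ret = false;
                        break;
                }
                cpu_relax_lowlatency();
        }
        rcu_read_unlock();

        return ret;     /* true whenever the owner changed */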

As pointed out, this fits in nicely with what we're doing with rwsems.
Based on our recent discussions:

> Signed-off-by: Jason Low <jason.low2@xxxxxx>

Acked-by: Davidlohr Bueso <dave@xxxxxxxxxxxx>
