Re: [PATCH 12/13] locking/qspinlock: separate pv_wait_node from the non-paravirt path

From: Peter Zijlstra
Date: Tue Jul 05 2022 - 13:34:45 EST


On Tue, Jul 05, 2022 at 12:38:19AM +1000, Nicholas Piggin wrote:
> pv_wait_node waits until node->locked is non-zero, no need for the
> pv case to wait again by also executing the !pv code path.
>
> Signed-off-by: Nicholas Piggin <npiggin@xxxxxxxxx>
> ---
> kernel/locking/qspinlock.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 9db168753124..19e2f286be0a 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -862,10 +862,11 @@ static inline void queued_spin_lock_mcs_queue(struct qspinlock *lock, bool parav
> /* Link @node into the waitqueue. */
> WRITE_ONCE(prev->next, node);
>
> + /* Wait for mcs node lock to be released */
> if (paravirt)
> pv_wait_node(node, prev);
> - /* Wait for mcs node lock to be released */
> - smp_cond_load_acquire(&node->locked, VAL);
> + else
> + smp_cond_load_acquire(&node->locked, VAL);
>

(from patch #6):

+static void pv_wait_node(struct qnode *node, struct qnode *prev)
+{
+ int loop;
+ bool wait_early;
+
...
+
+ /*
+ * By now our node->locked should be 1 and our caller will not actually
+ * spin-wait for it. We do however rely on our caller to do a
+ * load-acquire for us.
+ */
+}

That is, pv_wait_node() itself does not provide acquire ordering; the comment above explicitly relies on the caller's smp_cond_load_acquire(). With the change in this patch the paravirt path no longer performs that load-acquire, so either pv_wait_node() needs to gain the acquire itself or the ordering (and the comment) is now broken.