Re: [PATCH 11/13] locking/qspinlock: Use queued_spin_trylock in pv_hybrid_queued_unfair_trylock

From: Waiman Long
Date: Tue Jul 05 2022 - 16:15:28 EST


On 7/4/22 10:38, Nicholas Piggin wrote:
> Rather than open-code it as necessitated by the old function-renaming
> code generation that rendered queued_spin_trylock unavailable to use
> here.
>
> Signed-off-by: Nicholas Piggin <npiggin@xxxxxxxxx>
> ---
>  kernel/locking/qspinlock.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index cef0ca7d94e1..9db168753124 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -357,7 +357,7 @@ static inline bool pv_hybrid_queued_unfair_trylock(struct qspinlock *lock)
>  	int val = atomic_read(&lock->val);
>
>  	if (!(val & _Q_LOCKED_PENDING_MASK) &&
> -	    (cmpxchg_acquire(&lock->locked, 0, _Q_LOCKED_VAL) == 0)) {
> +	    queued_spin_trylock(lock)) {
>  		lockevent_inc(pv_lock_stealing);
>  		return true;
>  	}

I am not sure the compiler will eliminate the duplicated atomic_read() in queued_spin_trylock(), since the caller has already read lock->val. So unless it can generate the same code as the open-coded cmpxchg, I would prefer to leave this alone.

Cheers,
Longman