[PATCH 2/4] locking/rwsem: Drop superfluous waiter refcount

From: Davidlohr Bueso
Date: Mon May 09 2016 - 01:00:55 EST


A read waiter currently holds a reference to its task from the time it
enters the slowpath until the lock is released and the waiter is awoken.
This is fragile and superfluous considering that everything occurs within
down_read() without returning to the caller, and the very nature of the
primitive does not suggest that the task can disappear from underneath us.
In addition, spurious wakeups can render the whole refcount useless, as
get_task_struct() is only called when setting up the waiter.
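
To illustrate the lifetime argument, below is a rough userspace analogue of
the on-stack waiter handoff (not kernel code; none of these names exist in
the tree, they are made up for the example). The waiter lives on the blocked
thread's stack, and that thread cannot return -- and therefore cannot release
the waiter -- until the waker has completed the handoff under w->lock, so no
reference counting on the sleeping thread is needed. The locked handoff in
wake_reader() is the loose analogue of the smp_mb() + waiter->task = NULL
ordering in the kernel version. Builds with gcc -std=c11 -pthread.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct waiter {
	pthread_mutex_t lock;
	pthread_cond_t cond;
	atomic_bool pending;		/* plays the role of waiter->task */
};

static struct waiter *_Atomic queued;	/* stands in for sem->wait_list */

static void *blocked_reader(void *arg)
{
	struct waiter w;		/* on-stack, like struct rwsem_waiter */

	(void)arg;
	pthread_mutex_init(&w.lock, NULL);
	pthread_cond_init(&w.cond, NULL);
	atomic_store(&w.pending, 1);
	atomic_store(&queued, &w);	/* "enqueue" ourselves */

	pthread_mutex_lock(&w.lock);
	while (atomic_load(&w.pending))	/* tolerate spurious wakeups */
		pthread_cond_wait(&w.cond, &w.lock);
	pthread_mutex_unlock(&w.lock);

	printf("reader: lock granted\n");
	return NULL;			/* only now can &w go away */
}

static void wake_reader(struct waiter *w)
{
	pthread_mutex_lock(&w->lock);
	atomic_store(&w->pending, 0);	/* like waiter->task = NULL  */
	pthread_cond_signal(&w->cond);	/* like wake_up_process(tsk) */
	pthread_mutex_unlock(&w->lock);
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, blocked_reader, NULL);
	while (!atomic_load(&queued))
		;			/* crude: wait for the reader to queue */
	wake_reader(atomic_load(&queued));
	pthread_join(t, NULL);
	return 0;
}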

Signed-off-by: Davidlohr Bueso <dave@xxxxxxxxxxxx>
---
kernel/locking/rwsem-xadd.c | 2 --
1 file changed, 2 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 7d62772600cf..b592bb48d880 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -197,7 +197,6 @@ __rwsem_do_wake(struct rw_semaphore *sem, enum rwsem_wake_type wake_type)
 		smp_mb();
 		waiter->task = NULL;
 		wake_up_process(tsk);
-		put_task_struct(tsk);
 	} while (--loop);
 
 	sem->wait_list.next = next;
@@ -220,7 +219,6 @@ struct rw_semaphore __sched *rwsem_down_read_failed(struct rw_semaphore *sem)
 	/* set up my own style of waitqueue */
 	waiter.task = tsk;
 	waiter.type = RWSEM_WAITING_FOR_READ;
-	get_task_struct(tsk);
 
 	raw_spin_lock_irq(&sem->wait_lock);
 	if (list_empty(&sem->wait_list))
--
2.8.1