[PATCH 04/26] staging/lustre/ldlm: Fix a race during FLock handling

From: Peng Tao
Date: Thu Nov 14 2013 - 11:50:50 EST


From: Bruno Faccini <bruno.faccini@xxxxxxxxx>

Protect against a race where the lock could have just been destroyed
due to overlap in ldlm_process_flock_lock(): the LDLM_FL_DESTROYED check
is now done after lock_res_and_lock() rather than before it.
An easy reproducer is BULL's NFS Locktests in pthread mode
(http://nfsv4.bullopensource.org/tools/tests/locktest.php).

Lustre-change: http://review.whamcloud.com/7134
Intel-bug-id: https://jira.hpdd.intel.com/browse/LU-1126
Signed-off-by: Bruno Faccini <bruno.faccini@xxxxxxxxx>
Reviewed-by: Oleg Drokin <oleg.drokin@xxxxxxxxx>
Reviewed-by: John L. Hammond <john.hammond@xxxxxxxxx>
Signed-off-by: Peng Tao <bergwolf@xxxxxxxxx>
Signed-off-by: Andreas Dilger <andreas.dilger@xxxxxxxxx>
---
drivers/staging/lustre/lustre/ldlm/ldlm_flock.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
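
Not part of the patch, only background for reviewers: below is a minimal
standalone sketch of the check-under-lock pattern the fix applies. It uses
plain pthreads and invented names (fake_lock, FL_DESTROYED, completion_ast);
none of this is Lustre's actual API. It only illustrates why the destroyed
test has to be redone after taking the resource lock rather than before it.

#include <pthread.h>
#include <stdio.h>

#define FL_DESTROYED 0x1	/* stand-in for LDLM_FL_DESTROYED */

struct fake_lock {
	pthread_mutex_t res_lock;	/* stand-in for the resource lock */
	unsigned long flags;		/* may be set by a concurrent destroyer */
};

/*
 * Stand-in for the completion AST waking up after enqueue.  The destroyed
 * check is done only while res_lock is held, so it cannot race with a
 * destroyer that sets the flag under the same lock.
 */
static int completion_ast(struct fake_lock *lk)
{
	pthread_mutex_lock(&lk->res_lock);

	if (lk->flags & FL_DESTROYED) {
		pthread_mutex_unlock(&lk->res_lock);
		printf("waking up: destroyed, nothing left to do\n");
		return 0;
	}

	/* ... normal processing would continue here, still under res_lock ... */
	pthread_mutex_unlock(&lk->res_lock);
	return 0;
}

int main(void)
{
	/* Pretend a concurrent overlap already destroyed the lock. */
	struct fake_lock lk = {
		.res_lock = PTHREAD_MUTEX_INITIALIZER,
		.flags = FL_DESTROYED,
	};

	return completion_ast(&lk);
}

Build with "gcc -pthread sketch.c" if you want to run it; the only point is
the ordering of the flag test relative to taking the mutex.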

diff --git a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
index d4598b6..37ebd2a 100644
--- a/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
+++ b/drivers/staging/lustre/lustre/ldlm/ldlm_flock.c
@@ -653,11 +653,6 @@ ldlm_flock_completion_ast(struct ldlm_lock *lock, __u64 flags, void *data)
granted:
OBD_FAIL_TIMEOUT(OBD_FAIL_LDLM_CP_CB_WAIT, 10);

- if (lock->l_flags & LDLM_FL_DESTROYED) {
- LDLM_DEBUG(lock, "client-side enqueue waking up: destroyed");
- return 0;
- }
-
if (lock->l_flags & LDLM_FL_FAILED) {
LDLM_DEBUG(lock, "client-side enqueue waking up: failed");
return -EIO;
@@ -673,6 +668,16 @@ granted:

lock_res_and_lock(lock);

+
+ /* Protect against race where lock could have been just destroyed
+ * due to overlap in ldlm_process_flock_lock().
+ */
+ if (lock->l_flags & LDLM_FL_DESTROYED) {
+ unlock_res_and_lock(lock);
+ LDLM_DEBUG(lock, "client-side enqueue waking up: destroyed");
+ return 0;
+ }
+
/* take lock off the deadlock detection hash list. */
ldlm_flock_blocking_unlink(lock);

--
1.7.9.5
