[039/251] x25: Fix broken locking in ioctl error paths.

From: Steven Rostedt
Date: Wed Sep 11 2013 - 00:38:55 EST


3.6.11.9-rc1 stable review patch.
If anyone has any objections, please let me know.

------------------

From: Dave Jones <davej@xxxxxxxxxx>

[ Upstream commit 4ccb93ce7439b63c31bc7597bfffd13567fa483d ]

Two of the x25 ioctl cases have error paths that break out of the switch
and return to user space without unlocking the socket, leading to this
warning:

================================================
[ BUG: lock held when returning to user space! ]
3.10.0-rc7+ #36 Not tainted
------------------------------------------------
trinity-child2/31407 is leaving the kernel with locks still held!
1 lock held by trinity-child2/31407:
#0: (sk_lock-AF_X25){+.+.+.}, at: [<ffffffffa024b6da>] x25_ioctl+0x8a/0x740 [x25]
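
For readers outside the kernel, the failure mode is easy to reproduce in
user space. The sketch below is only an illustration, not the kernel code:
a pthread mutex stands in for lock_sock()/release_sock(), and the names
ioctl_broken(), ioctl_fixed(), sock_state and STATE_ESTABLISHED are all
made up for the example. Build with "gcc -pthread".

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sock_lock = PTHREAD_MUTEX_INITIALIZER;
static int sock_state;  /* stand-in for sk->sk_state; 0 != ESTABLISHED */
#define STATE_ESTABLISHED 1

/* Broken shape: 'break' exits the switch with the lock still held. */
static int ioctl_broken(int cmd)
{
    int rc = -1;

    switch (cmd) {
    case 0:
        pthread_mutex_lock(&sock_lock);
        if (sock_state != STATE_ESTABLISHED)
            break;  /* BUG: returns with sock_lock held */
        rc = 0;
        pthread_mutex_unlock(&sock_lock);
        break;
    }
    return rc;
}

/* Fixed shape: every exit path funnels through the unlock, as the
 * patch below does with its out_sendcallaccpt_release label. */
static int ioctl_fixed(int cmd)
{
    int rc = -1;

    switch (cmd) {
    case 0:
        pthread_mutex_lock(&sock_lock);
        if (sock_state != STATE_ESTABLISHED)
            goto out_release;  /* error path still unlocks */
        rc = 0;
out_release:
        pthread_mutex_unlock(&sock_lock);
        break;
    }
    return rc;
}

int main(void)
{
    ioctl_broken(0);  /* sock_state is 0, so the error path runs */
    if (pthread_mutex_trylock(&sock_lock) != 0)
        printf("bug reproduced: lock still held on return\n");
    pthread_mutex_unlock(&sock_lock);  /* balance whichever hold remains */

    printf("ioctl_fixed rc: %d (lock released on the same error path)\n",
           ioctl_fixed(0));
    return 0;
}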

Signed-off-by: Dave Jones <davej@xxxxxxxxxx>
Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
Signed-off-by: Steven Rostedt <rostedt@xxxxxxxxxxx>
---
net/x25/af_x25.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
index a306bc6..b943e3e 100644
--- a/net/x25/af_x25.c
+++ b/net/x25/af_x25.c
@@ -1586,11 +1586,11 @@ out_cud_release:
case SIOCX25CALLACCPTAPPRV: {
rc = -EINVAL;
lock_sock(sk);
- if (sk->sk_state != TCP_CLOSE)
- break;
- clear_bit(X25_ACCPT_APPRV_FLAG, &x25->flags);
+ if (sk->sk_state == TCP_CLOSE) {
+ clear_bit(X25_ACCPT_APPRV_FLAG, &x25->flags);
+ rc = 0;
+ }
release_sock(sk);
- rc = 0;
break;
}

@@ -1598,14 +1598,15 @@ out_cud_release:
rc = -EINVAL;
lock_sock(sk);
if (sk->sk_state != TCP_ESTABLISHED)
- break;
+ goto out_sendcallaccpt_release;
/* must call accptapprv above */
if (test_bit(X25_ACCPT_APPRV_FLAG, &x25->flags))
- break;
+ goto out_sendcallaccpt_release;
x25_write_internal(sk, X25_CALL_ACCEPTED);
x25->state = X25_STATE_3;
- release_sock(sk);
rc = 0;
+out_sendcallaccpt_release:
+ release_sock(sk);
break;
}
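
Worth noting: the two hunks fix the same class of bug in two common styles.
The SIOCX25CALLACCPTAPPRV case inverts the test so that release_sock() is
reached unconditionally (and rc is only set to 0 inside the success branch),
while the SIOCX25SENDCALLACCPT case routes its early exits through the new
out_sendcallaccpt_release label so the lock is dropped on every path.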

--
1.7.10.4

