[ 04/48] libceph: must hold mutex for reset_changed_osds()

From: Greg Kroah-Hartman
Date: Tue Jun 18 2013 - 12:33:57 EST


From: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

3.9-stable review patch. If anyone has any objections, please let me know.

------------------

From: Alex Elder <elder@xxxxxxxxxxx>

commit 14d2f38df67fadee34625fcbd282ee22514c4846 upstream.

An osd client has a red-black tree describing its osds, and
occasionally we would get crashes due to one of these trees
becoming corrupt somehow.

The problem turned out to be that reset_changed_osds() was being
called without protection of the osd client request mutex. That
function would call __reset_osd() for any osd that had changed, and
__reset_osd() would call __remove_osd() for any osd with no
outstanding requests, and finally __remove_osd() would remove the
corresponding entry from the red-black tree. Thus, the tree was
being modified without any lock protection, and was vulnerable to
corruption from concurrent updates.
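
As an aside for readers of this series (not part of the patch): the
pattern can be illustrated with a minimal userspace C sketch, using a
POSIX tsearch() tree in place of the osd red-black tree and a pthread
mutex in place of osdc->request_mutex. All names in it (osd_tree,
remove_osd, kick_requests_buggy/fixed) are invented for the example.

#include <pthread.h>
#include <search.h>	/* tsearch()/tdelete(): simple binary tree */
#include <stdio.h>

static pthread_mutex_t request_mutex = PTHREAD_MUTEX_INITIALIZER;
static void *osd_tree;	/* shared tree, standing in for osdc->osds */

static int cmp_osd(const void *a, const void *b)
{
	int x = *(const int *)a, y = *(const int *)b;

	return (x > y) - (x < y);
}

/* stand-in for __remove_osd(): mutates the shared tree */
static void remove_osd(const int *id)
{
	tdelete(id, &osd_tree, cmp_osd);
}

/* buggy shape: the tree is mutated after the mutex is dropped */
static void kick_requests_buggy(const int *changed_osd)
{
	pthread_mutex_lock(&request_mutex);
	/* ... requeue requests ... */
	pthread_mutex_unlock(&request_mutex);
	remove_osd(changed_osd);	/* races with other tree updates */
}

/* fixed shape, as in the patch below: mutate before unlocking */
static void kick_requests_fixed(const int *changed_osd)
{
	pthread_mutex_lock(&request_mutex);
	/* ... requeue requests ... */
	remove_osd(changed_osd);	/* tree only touched under the lock */
	pthread_mutex_unlock(&request_mutex);
}

int main(void)
{
	static int ids[] = { 0, 1, 2 };
	int i;

	for (i = 0; i < 3; i++)
		tsearch(&ids[i], &osd_tree, cmp_osd);
	kick_requests_fixed(&ids[1]);
	kick_requests_buggy(&ids[2]);
	printf("osd 1 %s\n",
	       tfind(&ids[1], &osd_tree, cmp_osd) ? "still present" : "removed");
	return 0;
}

(Compile with something like "cc -pthread sketch.c". In this
single-threaded demo both variants behave the same; the difference only
matters once another thread updates the tree under request_mutex
concurrently, which is exactly the situation kick_requests() is in.)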

This appears to be the only osd tree updating path that has this
problem. It can be fairly easily fixed by moving the call up
a few lines, to just before the request mutex gets dropped
in kick_requests().

This resolves:
http://tracker.ceph.com/issues/5043

Signed-off-by: Alex Elder <elder@xxxxxxxxxxx>
Reviewed-by: Sage Weil <sage@xxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
net/ceph/osd_client.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/net/ceph/osd_client.c
+++ b/net/ceph/osd_client.c
@@ -1399,13 +1399,13 @@ static void kick_requests(struct ceph_os
 			__register_request(osdc, req);
 			__unregister_linger_request(osdc, req);
 		}
+	reset_changed_osds(osdc);
 	mutex_unlock(&osdc->request_mutex);
 
 	if (needmap) {
 		dout("%d requests for down osds, need new map\n", needmap);
 		ceph_monc_request_next_osdmap(&osdc->client->monc);
 	}
-	reset_changed_osds(osdc);
 }



