2.6.36-rc7: inconsistent lock state: inconsistent {IN-RECLAIM_FS-R} -> {RECLAIM_FS-ON-W} usage.

From: Dave Jones
Date: Mon Oct 11 2010 - 16:10:21 EST


We got this report from a Fedora user (cc'd) today..

Dave

> Noticed this when I tried to 'unmount/safely remove' a LUKS encrypted USB hard drive
>
> =================================
> [ 4960.442217] [ INFO: inconsistent lock state ]
> [ 4960.442238] 2.6.36-0.35.rc7.git0.fc15.x86_64 #1
> [ 4960.442258] ---------------------------------
> [ 4960.442278] inconsistent {IN-RECLAIM_FS-R} -> {RECLAIM_FS-ON-W} usage.
> [ 4960.442323] umount/8522 [HC0[0]:SC0[0]:HE1:SE1] takes:
> [ 4960.442350] (iprune_sem){+++++-}, at: [<ffffffff8113ee55>] invalidate_inodes+0x3e/0x12c
> [ 4960.442402] {IN-RECLAIM_FS-R} state was registered at:
> [ 4960.442427] [<ffffffff8107f089>] __lock_acquire+0x3f2/0xd79
> [ 4960.442464] [<ffffffff8107feeb>] lock_acquire+0xd2/0xfd
> [ 4960.442499] [<ffffffff8149cfce>] down_read+0x51/0x84
> [ 4960.442533] [<ffffffff8113f197>] shrink_icache_memory+0x4e/0x1ee
> [ 4960.442571] [<ffffffff810ef69d>] shrink_slab+0xe0/0x166
> [ 4960.442606] [<ffffffff810f1b5b>] balance_pgdat+0x257/0x3fe
> [ 4960.442641] [<ffffffff810f1f38>] kswapd+0x236/0x24c
> [ 4960.442671] [<ffffffff8106c61c>] kthread+0x9d/0xa5
> [ 4960.442699] [<ffffffff8100ab64>] kernel_thread_helper+0x4/0x10
> [ 4960.442733] irq event stamp: 5383
> [ 4960.442747] hardirqs last enabled at (5383): [<ffffffff8149e5d8>] _raw_spin_unlock_irq+0x30/0x3a
> [ 4960.442801] hardirqs last disabled at (5382): [<ffffffff8149ddaa>] _raw_spin_lock_irq+0x21/0x75
> [ 4960.442855] softirqs last enabled at (5358): [<ffffffff81148edd>] bdi_queue_work+0xc1/0xca
> [ 4960.442907] softirqs last disabled at (5354): [<ffffffff8149df8d>] _raw_spin_lock_bh+0x1c/0x6e
> [ 4960.442959]
> [ 4960.442959] other info that might help us debug this:
> [ 4960.442991] 3 locks held by umount/8522:
> [ 4960.443011] #0: (&type->s_umount_key#34){+++++.}, at: [<ffffffff8112dae5>] deactivate_super+0x32/0x3e
> [ 4960.443063] #1: (iprune_sem){+++++-}, at: [<ffffffff8113ee55>] invalidate_inodes+0x3e/0x12c
> [ 4960.443063] #2: (&fsnotify_mark_srcu){.+.+.+}, at: [<ffffffff8115896d>] fsnotify+0x7e/0x3d7
> [ 4960.443063]
> [ 4960.443063] stack backtrace:
> [ 4960.443063] Pid: 8522, comm: umount Not tainted 2.6.36-0.35.rc7.git0.fc15.x86_64 #1
> [ 4960.443063] Call Trace:
> [ 4960.443063] [<ffffffff8107eaad>] valid_state+0x17c/0x18e
> [ 4960.443063] [<ffffffff8107e3f9>] ? check_usage_backwards+0x0/0x81
> [ 4960.443063] [<ffffffff8107ebdc>] mark_lock+0x11d/0x1d8
> [ 4960.443063] [<ffffffff810801dc>] mark_held_locks+0x50/0x72
> [ 4960.443063] [<ffffffff8108079e>] lockdep_trace_alloc+0xa2/0xc5
> [ 4960.443063] [<ffffffff8111cbe7>] kmem_cache_alloc+0x36/0x159
> [ 4960.443063] [<ffffffff8115968f>] ? fsnotify_create_event+0x3f/0x1bc
> [ 4960.444138] [<ffffffff8115968f>] fsnotify_create_event+0x3f/0x1bc
> [ 4960.444138] [<ffffffff81158817>] send_to_group+0x159/0x1fc
> [ 4960.444138] [<ffffffff8107ff07>] ? lock_acquire+0xee/0xfd
> [ 4960.444138] [<ffffffff8115896d>] ? fsnotify+0x7e/0x3d7
> [ 4960.444138] [<ffffffff81158b68>] fsnotify+0x279/0x3d7
> [ 4960.449291] [<ffffffff81159e81>] fsnotify_unmount_inodes+0xbb/0xfe
> [ 4960.449291] [<ffffffff8113ee69>] invalidate_inodes+0x52/0x12c
> [ 4960.449291] [<ffffffff81154992>] ? __sync_blockdev+0x32/0x34
> [ 4960.449291] [<ffffffff8112cc66>] generic_shutdown_super+0x4c/0xe1
> [ 4960.449291] [<ffffffff8112cd22>] kill_block_super+0x27/0x3f
> [ 4960.449291] [<ffffffff8112cf9d>] deactivate_locked_super+0x26/0x46
> [ 4960.449291] [<ffffffff8112daed>] deactivate_super+0x3a/0x3e
> [ 4960.449291] [<ffffffff81142724>] mntput_no_expire+0xa7/0xd3
> [ 4960.449291] [<ffffffff81143265>] sys_umount+0x2cf/0x301
> [ 4960.449291] [<ffffffff81080309>] ? trace_hardirqs_on_caller+0x10b/0x12f
> [ 4960.459556] [<ffffffff81009cf2>] system_call_fastpath+0x16/0x1b
