Re: 2.6.32-rc5-mmotm1101 - lockdep whinge during early boot

From: Rusty Russell
Date: Thu Nov 05 2009 - 04:11:16 EST

On Thu, 5 Nov 2009 02:41:24 am Valdis.Kletnieks@xxxxxx wrote:
> [ 0.344147] swapper/1 is trying to acquire lock:
> [ 0.344154] (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff8103c222>] cpu_maps_update_begin+0x12/0x14
> [ 0.344174]
> [ 0.344175] but task is already holding lock:
> [ 0.344183] (setup_lock){+.+.+.}, at: [<ffffffff81078755>] stop_machine_create+0x12/0x9b
> [ 0.344200]
> [ 0.344201] which lock already depends on the new lock.

Hi Valdis!

Sigh. I always find reading these a complete mindfuck.

stop_machine_create: setup_lock then cpu_add_remove_lock
(in create_workqueue_key() -> cpu_maps_update_begin())
clocksource_done_booting: clocksource_mutex then setup_lock
(in stop_machine_create(), as above)
cpu_up: cpu_add_remove_lock then clocksource_mutex
(in mark_tsc_unstable() -> clocksource_change_rating())

AFAICT this is our circular dependency. But I'm no closer to knowing how to
solve it.

Oleg (CC'd) made workqueues use cpu_maps_update_begin() instead of the
more obvious get_online_cpus() in 3da1c84c00c7e5f. Reverting that seems like
a bad idea.

Or, if the clocksource list wasn't ordered, we could change the rating without
a lock.

Either way, the locking shark is well and truly jumped...