Re: [RFC][PATCH] CPUSets: Move most calls to rebuild_sched_domains() to the workqueue

From: Paul Menage
Date: Thu Jun 26 2008 - 05:50:39 EST


On Thu, Jun 26, 2008 at 2:34 AM, Vegard Nossum <vegard.nossum@xxxxxxxxx> wrote:
> On Thu, Jun 26, 2008 at 9:56 AM, Paul Menage <menage@xxxxxxxxxx> wrote:
>> CPUsets: Move most calls to rebuild_sched_domains() to the workqueue
>>
>> In the current cpusets code the lock nesting between cgroup_mutex and
>> cpu_hotplug.lock when calling rebuild_sched_domains() is inconsistent:
>> in the CPU hotplug path cpu_hotplug.lock nests outside cgroup_mutex,
>> while in all other paths that call rebuild_sched_domains() it nests
>> inside.
>>
>> This patch makes most calls to rebuild_sched_domains() asynchronous
>> via the workqueue, which removes the nesting of the two locks in
>> those paths. In the case of an actual hotplug event, cpu_hotplug.lock
>> continues to nest outside cgroup_mutex, as it does today.
>>
>> Signed-off-by: Paul Menage <menage@xxxxxxxxxx>
>>
>> ---
>>
>> Note that all I've done with this patch is verify that it compiles
>> without warnings; I'm not sure how to trigger a hotplug event to test
>> the lock dependencies or verify that scheduler domain support is still
>> behaving correctly. Vegard, does this fix the problems that you were
>> seeing? Paul/Max, does this still seem sane with regard to scheduler
>> domains?
>
> Nope, sorry :-(
>
> =======================================================
> [ INFO: possible circular locking dependency detected ]
> 2.6.26-rc8-dirty #39
> -------------------------------------------------------
> bash/3510 is trying to acquire lock:
> (events){--..}, at: [<c0145690>] cleanup_workqueue_thread+0x10/0x70
>
> but task is already holding lock:
> (&cpu_hotplug.lock){--..}, at: [<c015d9da>] cpu_hotplug_begin+0x1a/0x50
>
> which lock already depends on the new lock.
>
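
To make the intended pattern concrete, here's a minimal sketch of the
idea (illustrative only, not the patch itself; the helper names below
are made up, and it assumes the code sits in kernel/cpuset.c next to
rebuild_sched_domains()):

#include <linux/cgroup.h>
#include <linux/cpu.h>
#include <linux/workqueue.h>

/* Runs from keventd, so whoever queued the work is no longer holding
 * cgroup_mutex here; take the locks in the hotplug order,
 * cpu_hotplug.lock (via get_online_cpus()) outside cgroup_mutex. */
static void rebuild_domains_workfn(struct work_struct *unused)
{
        get_online_cpus();
        cgroup_lock();
        rebuild_sched_domains();
        cgroup_unlock();
        put_online_cpus();
}

static DECLARE_WORK(rebuild_domains_work, rebuild_domains_workfn);

/* Callers that used to invoke rebuild_sched_domains() directly while
 * holding cgroup_mutex just queue the work and return. */
static void async_rebuild_sched_domains(void)
{
        schedule_work(&rebuild_domains_work);
}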

Does that mean that you can't ever call get_online_cpus() from a
workqueue thread?
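
If so, the scenario lockdep seems to be warning about is something
like this (my reading of the trace above, taking the work function
from the sketch as the queued work):

        hotplug writer (bash in the trace):   keventd:
          cpu_hotplug_begin()
            takes cpu_hotplug.lock
                                                rebuild_domains_workfn()
                                                  get_online_cpus()
                                                    blocks on cpu_hotplug.lock
          CPU_DEAD notifier chain:
            cleanup_workqueue_thread()
              waits for that CPU's keventd to
              finish its pending work, i.e. for
              the item blocked on cpu_hotplug.lock

so each side ends up waiting on the other.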

Paul.