Re: [RESEND][PATCH] cpuset: fix possible deadlock in async_rebuild_sched_domains

From: Andrew Morton
Date: Mon Jan 26 2009 - 23:04:51 EST


On Tue, 20 Jan 2009 11:10:54 +0800 Miao Xie <miaox@xxxxxxxxxxxxxx> wrote:

> Lockdep reported a possible circular locking dependency when we tested cpuset
> on a NUMA (and fake-NUMA) box.
>
> =======================================================
> [ INFO: possible circular locking dependency detected ]
> 2.6.29-rc1-00224-ga652504 #111
> -------------------------------------------------------
> bash/2968 is trying to acquire lock:
> (events){--..}, at: [<ffffffff8024c8cd>] flush_work+0x24/0xd8
>
> but task is already holding lock:
> (cgroup_mutex){--..}, at: [<ffffffff8026ad1e>] cgroup_lock_live_group+0x12/0x29
>
> which lock already depends on the new lock.
> ......
> -------------------------------------------------------
>
> Steps to reproduce:
> # mkdir /dev/cpuset
> # mount -t cpuset xxx /dev/cpuset
> # mkdir /dev/cpuset/0
> # echo 0 > /dev/cpuset/0/cpus
> # echo 0 > /dev/cpuset/0/mems
> # echo 1 > /dev/cpuset/0/memory_migrate
> # cat /dev/zero > /dev/null &
> # echo $! > /dev/cpuset/0/tasks
>
> This is because async_rebuild_sched_domains has the following lock sequence:
> run_workqueue(async_rebuild_sched_domains)
> -> do_rebuild_sched_domains -> cgroup_lock
>
> But attaching a task while memory_migrate is set has the following sequence:
> cgroup_lock_live_group(cgroup_tasks_write)
> -> do_migrate_pages -> flush_work
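To make the inversion concrete: the two paths take the same pair of "locks"
in opposite order. A minimal userspace model of it (illustrative only --
pthread mutexes standing in for cgroup_mutex and for the workqueue
pseudo-lock that lockdep attributes to run_workqueue()/flush_work()):

#include <pthread.h>
#include <unistd.h>

/* Stand-ins: lockdep treats the "events" workqueue as a pseudo-lock that
 * run_workqueue() "holds" while a work item runs and that flush_work()
 * "acquires" while waiting for the item to finish. */
static pthread_mutex_t events_wq = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cgroup_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Path A: run_workqueue() -> do_rebuild_sched_domains() -> cgroup_lock() */
static void *worker(void *unused)
{
	pthread_mutex_lock(&events_wq);		/* work item is running */
	sleep(1);				/* widen the race window */
	pthread_mutex_lock(&cgroup_mutex);	/* cgroup_lock() */
	pthread_mutex_unlock(&cgroup_mutex);
	pthread_mutex_unlock(&events_wq);
	return NULL;
}

/* Path B: cgroup_tasks_write() -> do_migrate_pages() -> flush_work() */
static void *attacher(void *unused)
{
	pthread_mutex_lock(&cgroup_mutex);	/* cgroup_lock_live_group() */
	sleep(1);
	pthread_mutex_lock(&events_wq);		/* flush_work() waits here */
	pthread_mutex_unlock(&events_wq);
	pthread_mutex_unlock(&cgroup_mutex);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, worker, NULL);
	pthread_create(&b, NULL, attacher, NULL);
	pthread_join(a, NULL);	/* never returns: classic AB-BA deadlock */
	pthread_join(b, NULL);
	return 0;
}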

Where is this flush_work() call? lru_add_drain_all()->schedule_on_each_cpu()?

If so, and if that is the only such callsite, then we could/should
rework this code to use work_on_cpu(), if we manage to fix that thing.

It would be somewhat inefficient. It would be better if work_on_cpu()
were to take a cpumask argument, and avoid blocking behind each CPU one
at a time. But first things first.
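
For concreteness, a hypothetical work_on_cpu_mask() along those lines --
the name and signature are made up here, and the body is essentially what
schedule_on_each_cpu() already does, restricted to a mask (CPU-hotplug
locking elided for brevity):

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/cpumask.h>
#include <linux/percpu.h>
#include <linux/errno.h>

struct wocm_work {
	struct work_struct work;
	void (*fn)(void *);
	void *arg;
};

static void wocm_thunk(struct work_struct *work)
{
	struct wocm_work *w = container_of(work, struct wocm_work, work);

	w->fn(w->arg);
}

/* Queue everything first, flush afterwards: the per-CPU waits overlap
 * instead of serializing as N back-to-back work_on_cpu() calls would. */
int work_on_cpu_mask(const struct cpumask *mask, void (*fn)(void *), void *arg)
{
	struct wocm_work *works;
	int cpu;

	works = alloc_percpu(struct wocm_work);
	if (!works)
		return -ENOMEM;

	for_each_cpu(cpu, mask) {
		struct wocm_work *w = per_cpu_ptr(works, cpu);

		w->fn = fn;
		w->arg = arg;
		INIT_WORK(&w->work, wocm_thunk);
		schedule_work_on(cpu, &w->work);	/* queue, don't wait yet */
	}
	for_each_cpu(cpu, mask)
		flush_work(&per_cpu_ptr(works, cpu)->work);

	free_percpu(works);
	return 0;
}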

> This patch fixes it by using a separate workqueue thread.
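
The shape of that fix, roughly (a sketch of the approach the changelog
describes, not the verbatim diff): give cpuset a private single-threaded
workqueue, so the asynchronous rebuild no longer runs on -- and is no
longer flushed through -- the shared "events" queue:

static struct workqueue_struct *cpuset_wq;

static DECLARE_WORK(rebuild_sched_domains_work, do_rebuild_sched_domains);

static void async_rebuild_sched_domains(void)
{
	/* queued on cpuset_wq, not on the shared "events" workqueue */
	queue_work(cpuset_wq, &rebuild_sched_domains_work);
}

void __init cpuset_init_smp(void)
{
	/* ... */
	cpuset_wq = create_singlethread_workqueue("cpuset");
	BUG_ON(!cpuset_wq);
}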

<wonders when RESERVED_PIDS became a logarithm>
