Re: [RFC PATCH] cifs: Fix possible deadlock with cifs and work queues

From: Tejun Heo
Date: Wed Mar 19 2014 - 16:28:54 EST


Hello, Steven, Peter.

On Wed, Mar 19, 2014 at 08:34:07PM +0100, Peter Zijlstra wrote:
> The way I understand workqueues is that we cannot guarantee concurrency
> like this. It tries, but there's no guarantee.

So, the guarantee is that if a workqueue has WQ_MEM_RECLAIM, it'll
always have at least one worker thread working on it, so workqueues
which may be depended upon during memory reclaim should have the flag
set and must not require more than a single level of concurrency to
make forward progress. Workqueues without WQ_MEM_RECLAIM depend on
the fact that memory will eventually be reclaimed and that enough
workers to make forward progress will then become available.
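
To make that concrete, a workqueue on the reclaim side would be set
up along these lines (a minimal sketch with invented names, not code
from any actual driver):

  #include <linux/workqueue.h>

  /*
   * A workqueue that sits in the memory reclaim path.  WQ_MEM_RECLAIM
   * guarantees a rescuer thread, so at least one work item can always
   * execute even when no new workers can be created; in exchange, each
   * work item must be able to make forward progress on its own,
   * without waiting on another item queued on the same workqueue.
   */
  static struct workqueue_struct *reclaim_wq;

  static int __init example_init(void)
  {
          reclaim_wq = alloc_workqueue("example-reclaim",
                                       WQ_MEM_RECLAIM, 0);
          if (!reclaim_wq)
                  return -ENOMEM;
          return 0;
  }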

> WQ_MAX_ACTIVE seems to be a hard upper limit of concurrent workers. So
> given 511 other blocked works, the described problem will always happen.

That actually is a per-workqueue limit, and the workqueue core will
try to create as many workers as necessary to satisfy the demanded
concurrency; i.e. having two workqueues with the same max_active
means that the total number of workers may reach 2 * max_active.
However, this is not a guarantee. If the system is under memory
pressure and the workqueues don't have WQ_MEM_RECLAIM set, they may
not get any concurrency until more memory is made available.
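
For example (illustration only, with made-up names), two workqueues
created with the same max_active cap their concurrency independently:

  #include <linux/workqueue.h>

  static struct workqueue_struct *wq_a, *wq_b;

  static int __init example_init(void)
  {
          /* each queue may run up to 4 work items concurrently */
          wq_a = alloc_workqueue("example-a", 0, 4);
          if (!wq_a)
                  return -ENOMEM;
          wq_b = alloc_workqueue("example-b", 0, 4);
          if (!wq_b) {
                  destroy_workqueue(wq_a);
                  return -ENOMEM;
          }

          /*
           * Up to 8 workers in total may be running for the two
           * queues, but without WQ_MEM_RECLAIM none of that is
           * guaranteed under memory pressure.
           */
          return 0;
  }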

> Creating another workqueue doesn't actually create more threads.

It looks like the issue Steven is describing is caused by having a
dependency chain longer than 1 through an rwsem in a WQ_MEM_RECLAIM
workqueue. Moving the write work items to a separate workqueue
breaks the r-w-r chain and ensures that forward progress can be made
with a single level of concurrency on each workqueue, so, yeah, it
looks like the correct fix to me. It is scarily subtle though, and
quite likely to be present in other code paths too. :(
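
In code, the shape of the fix would be roughly this (a sketch with
invented names, not the actual cifs patch):

  #include <linux/workqueue.h>

  static struct workqueue_struct *read_wq, *write_wq;
  static struct work_struct read_work, write_work;

  static void read_fn(struct work_struct *work)
  {
          /* acquires the rwsem for read, may block behind a writer */
  }

  static void write_fn(struct work_struct *work)
  {
          /* acquires the rwsem for write */
  }

  static int __init example_init(void)
  {
          read_wq = alloc_workqueue("example-read", WQ_MEM_RECLAIM, 0);
          if (!read_wq)
                  return -ENOMEM;
          write_wq = alloc_workqueue("example-write", WQ_MEM_RECLAIM, 0);
          if (!write_wq) {
                  destroy_workqueue(read_wq);
                  return -ENOMEM;
          }

          INIT_WORK(&read_work, read_fn);
          INIT_WORK(&write_work, write_fn);

          /*
           * Each queue has its own rescuer, so a blocked read item can
           * no longer starve the write item it is waiting on: the
           * write runs on write_wq's worker regardless of read_wq's
           * state.
           */
          queue_work(read_wq, &read_work);
          queue_work(write_wq, &write_work);
          return 0;
  }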

Thanks.

--
tejun