Re: Deadlocks due to per-process plugging

From: Mike Galbraith
Date: Sun Jul 22 2012 - 14:43:11 EST


On Sat, 2012-07-21 at 09:47 +0200, Mike Galbraith wrote:
> On Wed, 2012-07-18 at 07:30 +0200, Mike Galbraith wrote:
> > On Wed, 2012-07-18 at 06:44 +0200, Mike Galbraith wrote:
> >
> > > The patch in question for missing Cc. Maybe should be only mutex, but I
> > > see no reason why IO dependency can only possibly exist for mutexes...
> >
> > Well that was easy, box quickly said "nope, mutex only does NOT cut it".
>
> And I also learned (ouch) that both doesn't cut it either. Ksoftirqd
> (or sirq-blk) being nailed by q->lock in blk_done_softirq() is.. not
> particularly wonderful. As long as that doesn't happen, IO deadlock
> doesn't happen, troublesome filesystems just work. If it does happen
> though, you've instantly got a problem.

That problem being slab_lock in practice, btw, though I suppose it could
do the same with any number of others. In the case I encountered,
ksoftirqd (or sirq-blk) blocks on slab_lock while holding q->queue_lock,
while a userspace task (dbench) blocks on q->queue_lock while holding
slab_lock on the same CPU. Game over.

Odd is that it doesn't seem to materialize if you have the rt_mutex
deadlock detector enabled, not that that matters. My 64 core box beat on
ext3 for 35 hours without ever hitting it with no deadlock detector (this
time.. other long runs on top thereof, totaling lots of hours), and my
x3550 beat the crap out of several filesystems for a very long week
without hitting it with the deadlock detector, but hits it fairly easily
without.

Hohum, regardless of the fickle timing gods' mood of the moment, deadlocks
are most definitely possible, and will happen, which leaves us with at
least two filesystems needing strategically placed -rt unplug points,
with no guarantee that this is really solving anything at all (other
than empirical evidence that the bad thing ain't happening, 'course).

-Mike
