Re: block layer softlockup

From: Dave Jones
Date: Tue Jul 02 2013 - 02:02:13 EST


On Tue, Jul 02, 2013 at 12:07:41PM +1000, Dave Chinner wrote:
> On Mon, Jul 01, 2013 at 01:57:34PM -0400, Dave Jones wrote:
> > On Fri, Jun 28, 2013 at 01:54:37PM +1000, Dave Chinner wrote:
> > > On Thu, Jun 27, 2013 at 04:54:53PM -1000, Linus Torvalds wrote:
> > > > On Thu, Jun 27, 2013 at 3:18 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > > > >
> > > > > Right, that will be what is happening - the entire system will go
> > > > > unresponsive when a sync call happens, so it's entirely possible
> > > > > to see the soft lockups on inode_sb_list_add()/inode_sb_list_del()
> > > > > trying to get the lock because of the way ticket spinlocks work...
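For context, the list operations showing up in those lockups are tiny
critical sections on the global inode_sb_list_lock; roughly this (a sketch
from memory of that era's fs/inode.c, not a verbatim copy):

    void inode_sb_list_add(struct inode *inode)
    {
            /* one global lock covers every superblock's s_inodes list */
            spin_lock(&inode_sb_list_lock);
            list_add(&inode->i_sb_list, &inode->i_sb->s_inodes);
            spin_unlock(&inode_sb_list_lock);
    }

With FIFO ticket spinlocks, every CPU doing one of these short add/remove
operations queues behind whoever keeps re-taking the lock, so a long sync
walk shows up as soft lockups in the add/del paths.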
> > > >
> > > > So what made it all start happening now? I don't recall us having had
> > > > these kinds of issues before..
> > >
> > > Not sure - it's a sudden surprise for me, too. Then again, I haven't
> > > been looking at sync from a performance or lock contention point of
> > > view any time recently. The algorithm that wait_sb_inodes() is
> > > effectively unchanged since at least 2009, so it's probably a case
> > > of it having been protected from contention by some external factor
> > > we've fixed/removed recently. Perhaps the bdi-flusher thread
> > > replacement in -rc1 has changed the timing sufficiently that it no
> > > longer serialises concurrent sync calls as much....
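(For anyone following along, the walk in question looks roughly like the
sketch below -- a simplified rendering of that era's fs/fs-writeback.c, with
the i_lock and I_FREEING/I_NEW checks elided:

    static void wait_sb_inodes(struct super_block *sb)
    {
            struct inode *inode, *old_inode = NULL;

            spin_lock(&inode_sb_list_lock);
            list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
                    struct address_space *mapping = inode->i_mapping;

                    if (mapping->nrpages == 0)
                            continue;
                    __iget(inode);
                    spin_unlock(&inode_sb_list_lock);

                    /* can't iput under the list lock; defer to next pass */
                    iput(old_inode);
                    old_inode = inode;

                    /* wait for writeback already under way on this inode */
                    filemap_fdatawait(mapping);

                    cond_resched();
                    spin_lock(&inode_sb_list_lock);
            }
            spin_unlock(&inode_sb_list_lock);
            iput(old_inode);
    }

The global lock is dropped and re-taken once per inode across the whole
superblock, so several concurrent sync() callers hammering it at once would
explain the contention.)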
> >
> > This morning's new trace reminded me of this last sentence. Related?
>
> Was this running the last patch I posted, or a vanilla kernel?

Yeah, this had v2 of your patch (the one posted after the lockdep warnings).

> That's doing IO completion processing in softirq time, and the lock
> it just dropped was the q->queue_lock. But that lock is held over
> end IO processing, so it is possible that the way my POC patch handles the
> page writeback transition caused this.
>
> FWIW, I've attached a simple patch you might like to try to see if
> it *minimises* the inode_sb_list_lock contention problems. All it
> does is try to prevent concurrent entry in wait_sb_inodes() for a
> given superblock and hence only have one walker on the contending
> filesystem at a time. Replace the previous one I sent with it. If
> that doesn't work, I have another simple patch that makes the
> inode_sb_list_lock per-sb to take this isolation even further....
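The "prevent concurrent entry" bit presumably amounts to something like the
sketch below -- a hypothetical per-superblock mutex (the field name is made
up here, and the attached patch may well differ):

    static void wait_sb_inodes(struct super_block *sb)
    {
            /*
             * Only allow one walker of sb->s_inodes per superblock, so
             * concurrent sync() calls stop piling onto inode_sb_list_lock
             * for the same filesystem.
             */
            mutex_lock(&sb->s_sync_wait_mutex);     /* hypothetical new field */
            __wait_sb_inodes(sb);                   /* the existing walk, unchanged */
            mutex_unlock(&sb->s_sync_wait_mutex);
    }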

I can try it, though as always, proving a negative....

Dave