Re: [PATCH 3.16 102/305] xfs: xfs_iflush_cluster fails to abort on error

From: Dave Chinner
Date: Tue Aug 16 2016 - 22:03:13 EST


On Tue, Aug 16, 2016 at 08:45:02PM +0100, Ben Hutchings wrote:
> On Sun, 2016-08-14 at 09:36 +1000, Dave Chinner wrote:
> > On Sat, Aug 13, 2016 at 06:42:51PM +0100, Ben Hutchings wrote:
> > >
> > > 3.16.37-rc1 review patch.  If anyone has any objections, please let me know.
> > >
> > > ------------------
> > >
> > > From: Dave Chinner <dchinner@xxxxxxxxxx>
> > >
> > > commit b1438f477934f5a4d5a44df26f3079a7575d5946 upstream.
> > >
> > > When a failure due to an inode buffer occurs, the error handling
> > > fails to abort the inode writeback correctly. This can result in the
> > > inode being reclaimed whilst still in the AIL, leading to
> > > use-after-free situations as well as filesystems that cannot be
> > > unmounted as the inode log items left in the AIL never get removed.
> > >
> > > Fix this by ensuring fatal errors from xfs_imap_to_bp() result in
> > > the inode flush being aborted correctly.
> > ....
> > >
> > >  
> > >   /*
> > > -  * Get the buffer containing the on-disk inode.
> > > +  * Get the buffer containing the on-disk inode. We are doing a try-lock
> > > +  * operation here, so we may get an EAGAIN error. In that case, we
> > > +  * simply want to return with the inode still dirty.
> > > +  *
> > > +  * If we get any other error, we effectively have a corruption situation
> > > +  * and we cannot flush the inode, so we treat it the same as failing
> > > +  * xfs_iflush_int().
> > >    */
> > >   error = xfs_imap_to_bp(mp, NULL, &ip->i_imap, &dip, &bp, XBF_TRYLOCK,
> > >          0);
> > > - if (error || !bp) {
> > > + if (error == -EAGAIN) {
> >
> > Wrong. As was pointed out for other -stable trees after users
> > reported regressions, the error signs in XFS changed from positive
> > to negative in 3.17-rc1.
>
> OK, so do I just need to delete the minus sign there?

Yes.
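
i.e. with the positive errno convention that 3.16 XFS still uses, the
backported check ends up looking like this (a sketch against the quoted
hunk, not a tested patch; the xfs_ifunlock()/return branch body follows
the upstream commit b1438f477934):

	error = xfs_imap_to_bp(mp, NULL, &ip->i_imap, &dip, &bp, XBF_TRYLOCK,
			       0);
	if (error == EAGAIN) {
		/* try-lock failed; leave the inode dirty and retry later */
		xfs_ifunlock(ip);
		return error;
	}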

-Dave.
--
Dave Chinner
dchinner@xxxxxxxxxx