Re: [PATCH v2 6/9] mm: set mapping error when launder_pages fails

From: Jeff Layton
Date: Wed Mar 08 2017 - 13:47:38 EST


On Wed, 2017-03-08 at 18:01 +0000, Trond Myklebust wrote:
> On Wed, 2017-03-08 at 11:29 -0500, Jeff Layton wrote:
> > If launder_page fails, then we hit a problem writing back some inode
> > data. Ensure that we communicate that fact in a subsequent fsync
> > since
> > another task could still have it open for write.
> >
> > Signed-off-by: Jeff Layton <jlayton@xxxxxxxxxx>
> > ---
> >  mm/truncate.c | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/truncate.c b/mm/truncate.c
> > index 6263affdef88..29ae420a5bf9 100644
> > --- a/mm/truncate.c
> > +++ b/mm/truncate.c
> > @@ -594,11 +594,15 @@ invalidate_complete_page2(struct address_space *mapping, struct page *page)
> >  
> >  static int do_launder_page(struct address_space *mapping, struct page *page)
> >  {
> > +	int ret;
> > +
> >  	if (!PageDirty(page))
> >  		return 0;
> >  	if (page->mapping != mapping || mapping->a_ops->launder_page == NULL)
> >  		return 0;
> > -	return mapping->a_ops->launder_page(page);
> > +	ret = mapping->a_ops->launder_page(page);
> > +	mapping_set_error(mapping, ret);
> > +	return ret;
> >  }
> >  
> >  /**
>
> No. At that layer, you don't know that this is a page error. In the NFS
> case, it could, for instance, just as well be a fatal signal.
>

Ok...don't we have the same problem with writepage then? Most of the
writepage callers set an error in the mapping when writepage returns
any sort of error, so a fatal signal in that codepath could cause the
same problem, it seems. We don't dip into direct reclaim so much
anymore, so maybe signals aren't an issue there?
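
For reference, the generic path already records whatever ->writepage()
returns, signal or not; __writepage() in mm/page-writeback.c is
(roughly, going from memory) just:

static int __writepage(struct page *page, struct writeback_control *wbc,
		       void *data)
{
	struct address_space *mapping = data;
	int ret = mapping->a_ops->writepage(page, wbc);

	/* record any ->writepage error against the mapping for fsync */
	mapping_set_error(mapping, ret);
	return ret;
}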

The alternative here would be to push this down into the callers, but I
worry a bit about getting that right consistently across filesystems.
It'd be preferable if we could keep the mapping_set_error call in
generic VFS code instead; if not, I'll just plan to do that.
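
If we can keep it generic, one possibility (just an untested sketch on
my part) would be to screen out the "we got interrupted" cases before
recording anything, something like:

static int do_launder_page(struct address_space *mapping, struct page *page)
{
	int ret;

	if (!PageDirty(page))
		return 0;
	if (page->mapping != mapping || mapping->a_ops->launder_page == NULL)
		return 0;
	ret = mapping->a_ops->launder_page(page);
	/*
	 * Sketch: only record genuine writeback failures. An error that
	 * just means we were interrupted (e.g. -ERESTARTSYS/-EINTR from
	 * NFS on a fatal signal) isn't a mapping error.
	 */
	if (ret && ret != -ERESTARTSYS && ret != -EINTR)
		mapping_set_error(mapping, ret);
	return ret;
}

...though that only helps if the "not really an I/O error" set is
small and well-defined, which may be the real question here.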

Thanks,
--
Jeff Layton <jlayton@xxxxxxxxxx>