Re: [patch 1/3] mm: protect set_page_dirty() from ongoing truncation

From: alexander.levin
Date: Mon Apr 10 2017 - 11:13:51 EST


On Mon, Apr 10, 2017 at 02:06:38PM +0200, Jan Kara wrote:
> On Mon 10-04-17 02:22:33, alexander.levin@xxxxxxxxxxx wrote:
> > On Fri, Dec 05, 2014 at 09:52:44AM -0500, Johannes Weiner wrote:
> > > Tejun, while reviewing the code, spotted the following race condition
> > > between the dirtying and truncation of a page:
> > >
> > > __set_page_dirty_nobuffers()       __delete_from_page_cache()
> > >   if (TestSetPageDirty(page))
> > >                                      page->mapping = NULL
> > >                                      if (PageDirty())
> > >                                        dec_zone_page_state(page, NR_FILE_DIRTY);
> > >                                        dec_bdi_stat(mapping->backing_dev_info, BDI_RECLAIMABLE);
> > >     if (page->mapping)
> > >       account_page_dirtied(page)
> > >         __inc_zone_page_state(page, NR_FILE_DIRTY);
> > >         __inc_bdi_stat(mapping->backing_dev_info, BDI_RECLAIMABLE);
> > >
> > > which results in an imbalance of NR_FILE_DIRTY and BDI_RECLAIMABLE.
> > >
> > > Dirtiers usually lock out truncation, either by holding the page lock
> > > directly, or in case of zap_pte_range(), by pinning the mapcount with
> > > the page table lock held. The notable exception to this rule, though,
> > > is do_wp_page(), for which this race exists. However, do_wp_page()
> > > already waits for a locked page to unlock before setting the dirty
> > > bit, in order to prevent a race where clear_page_dirty() misses the
> > > page bit in the presence of dirty ptes. Upgrade that wait to a fully
> > > locked set_page_dirty() to also cover the situation explained above.
> > >
> > > Afterwards, the code in set_page_dirty() dealing with a truncation
> > > race is no longer needed. Remove it.
> > >
> > > Reported-by: Tejun Heo <tj@xxxxxxxxxx>
> > > Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> > > Cc: <stable@xxxxxxxxxxxxxxx>
> > > Acked-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> >
> > Hi Johannes,
> >
> > I'm seeing the following while fuzzing with trinity on linux-next (I've changed
> > the WARN to a VM_BUG_ON_PAGE for some extra page info).
>
> But this looks more like a bug in 9p which allows v9fs_write_end() to dirty
> a !Uptodate page?

I thought that 77469c3f5 ("9p: saner ->write_end() on failing copy into
non-uptodate page") prevented that from happening, but it's actually the
change that's causing it (I misread it last night).

Will fix it as follows:

diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index adaf6f6..be84c0c 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -310,9 +310,13 @@ static int v9fs_write_end(struct file *filp, struct address_space *mapping,
 
         p9_debug(P9_DEBUG_VFS, "filp %p, mapping %p\n", filp, mapping);
 
-        if (unlikely(copied < len && !PageUptodate(page))) {
-                copied = 0;
-                goto out;
+        if (!PageUptodate(page)) {
+                if (unlikely(copied < len)) {
+                        copied = 0;
+                        goto out;
+                } else {
+                        SetPageUptodate(page);
+                }
         }
         /*
          * No need to use i_size_read() here, the i_size
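
For anyone reading along, the decision the new block makes can be spelled
out as a stand-alone helper. This is purely illustrative: the helper name
is invented, it is not part of the patch, and the note about
v9fs_write_begin() is my reading of the surrounding code rather than
anything quoted in this thread.

#include <linux/mm.h>           /* struct page */
#include <linux/page-flags.h>   /* PageUptodate(), SetPageUptodate() */

/* Illustrative sketch only, not part of the patch above. */
static unsigned int write_end_uptodate_fixup(struct page *page,
                                             unsigned int len,
                                             unsigned int copied)
{
        /* Page was already fully valid: dirtying it is always safe. */
        if (PageUptodate(page))
                return copied;

        /* Short copy into a non-uptodate page: discard, let the caller retry. */
        if (copied < len)
                return 0;

        /*
         * Full copy into a non-uptodate page. v9fs_write_begin() only
         * skips the read-in for whole-page writes (again, my reading of
         * the surrounding code), so the page now holds valid data from
         * start to end and can be marked uptodate before set_page_dirty().
         */
        SetPageUptodate(page);
        return copied;
}

Returning 0 in the short-copy case keeps the existing behaviour from
77469c3f5: as far as I understand the generic write path, the caller just
shrinks the segment and retries the copy.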

--

Thanks,
Sasha