Re: [PATCH -v5 0/9] migrate_pages(): batch TLB flushing

From: Jan Kara
Date: Mon Feb 27 2023 - 06:06:21 EST


On Fri 17-02-23 13:47:48, Hugh Dickins wrote:
> On Mon, 13 Feb 2023, Huang Ying wrote:
>
> > From: "Huang, Ying" <ying.huang@xxxxxxxxx>
> >
> > Now, migrate_pages() migrates folios one by one, like the pseudo-code
> > as follows,
> >
> >   for each folio
> >     unmap
> >     flush TLB
> >     copy
> >     restore map
> >
> > If multiple folios are passed to migrate_pages(), there are
> > opportunities to batch the TLB flushing and copying. That is, we can
> > change the code to something as follows,
> >
> >   for each folio
> >     unmap
> >   for each folio
> >     flush TLB
> >   for each folio
> >     copy
> >   for each folio
> >     restore map
> >
> > The total number of TLB flushing IPIs can be reduced considerably. And
> > we may use a hardware accelerator such as DSA to accelerate the
> > folio copying.
> >
> > So in this patchset, we refactor the migrate_pages() implementation
> > and implement batched TLB flushing. Based on this, hardware-accelerated
> > folio copying can be implemented.
> >
> > If too many folios are passed to migrate_pages(), a naive batched
> > implementation may unmap too many folios at the same time. That
> > increases the chance that a task has to wait for the migrated folios
> > to be mapped again, so latency may suffer. To deal with this issue,
> > the maximum number of folios unmapped in one batch is restricted to
> > no more than HPAGE_PMD_NR pages. That is, the impact is at the same
> > level as THP migration.
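> >
> > In C-like form, the batched flow is roughly as below (a simplified
> > sketch only; the helper names marked "hypothetical" and the structure
> > are illustrative, not the actual implementation):
> >
> >   static int migrate_folios_batched_sketch(struct list_head *folios)
> >   {
> >       LIST_HEAD(unmapped);
> >       struct folio *folio, *next;
> >       int nr_pages = 0;
> >
> >       /* Stage 1: unmap a bounded batch, deferring the TLB flush. */
> >       list_for_each_entry_safe(folio, next, folios, lru) {
> >           if (nr_pages + folio_nr_pages(folio) > HPAGE_PMD_NR)
> >               break;
> >           unmap_one_folio(folio);              /* hypothetical helper */
> >           list_move_tail(&folio->lru, &unmapped);
> >           nr_pages += folio_nr_pages(folio);
> >       }
> >
> >       /* Stage 2: one deferred TLB flush (IPI) covers the whole batch. */
> >       try_to_unmap_flush();
> >
> >       /* Stages 3 and 4: copy contents and restore the mappings. */
> >       list_for_each_entry_safe(folio, next, &unmapped, lru)
> >           copy_and_remap_one_folio(folio);     /* hypothetical helper */
> >
> >       return nr_pages;
> >   }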
> >
> > We use the following test to measure the performance impact of the
> > patchset,
> >
> > On a 2-socket Intel server,
> >
> > - Run pmbench memory accessing benchmark
> >
> > - Run `migratepages` to migrate pages of pmbench between node 0 and
> > node 1 back and forth.
> >
> > With the patchset, the number of TLB flushing IPIs is reduced by 99.1%
> > during the test, and the number of pages migrated successfully per
> > second increases by 291.7%.
> >
> > Xin Hao helped to test the patchset on an ARM64 server with 128 cores
> > and 2 NUMA nodes. Test results show that the page migration performance
> > increases by up to 78%.
> >
> > This patchset is based on mm-unstable 2023-02-10.
>
> And back in linux-next this week: I tried next-20230217 overnight.
>
> There is a deadlock in this patchset (and in previous versions: sorry
> it's taken me so long to report), but I think one that's easily solved.
>
> I've not bisected to precisely which patch (load can take several hours
> to hit the deadlock), but it doesn't really matter, and I expect that
> you can guess.
>
> My root and home filesystems are ext4 (4kB blocks with 4kB PAGE_SIZE),
> and so is the filesystem I'm testing, ext4 on /dev/loop0 on tmpfs.
> So, plenty of ext4 page cache and buffer_heads.
>
> Again and again, the deadlock is seen with buffer_migrate_folio_norefs(),
> either in kcompactd0 or in khugepaged trying to compact, or in both:
> it ends up calling __lock_buffer(), and that schedules away, waiting
> forever to get BH_lock. I have not identified who is holding BH_lock,
> but I imagine a jbd2 journalling thread, and presume that it wants one
> of the folio locks which migrate_pages_batch() is already holding; or
> maybe it's all more convoluted than that. Other tasks then back up
> waiting on those folio locks held in the batch.
>
> Never a problem with buffer_migrate_folio(), always with the "more
> careful" buffer_migrate_folio_norefs(). And the patch below fixes
> it for me: I've had enough hours with it now, on enough occasions,
> to be confident of that.
>
> Cc'ing Jan Kara, who knows buffer_migrate_folio_norefs() and jbd2
> very well, and I hope can assure us that there is an understandable
> deadlock here, from holding several random folio locks, then trying
> to lock buffers. Cc'ing fsdevel, because there's a risk that mm
> folk think something is safe, when it's not sufficient to cope with
> the diversity of filesystems. I hope nothing more than the below is
> needed (and I've had no other problems with the patchset: good job),
> but cannot be sure.

I suspect it can indeed be caused by the presence of the loop device, as
Huang Ying has suggested. What filesystems using buffer_heads do is a
pattern like:

  bh = page_buffers(loop device page cache page);
  lock_buffer(bh);
  submit_bh(bh);
    - now on loop dev this ends up doing:
      lo_write_bvec()
        vfs_iter_write()
          ...
            folio_lock(backing file folio);

So if the migration code holds the "backing file folio" lock and at the same
time waits for the 'bh' lock (while trying to migrate a loop device page
cache page), we have a deadlock.
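
Schematically, the two tasks end up waiting for each other:

  batched migration                      task writing through the loop dev
  folio_lock(backing file folio)         lock_buffer(bh)
  ... holds it for the batch ...         submit_bh(bh)
  lock_buffer(bh)  <- blocks               lo_write_bvec()
                                             vfs_iter_write()
                                               folio_lock(backing file folio)
                                                 <- blocks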

The proposed solution of never waiting for locks in batched mode looks
sensible to me...
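
For reference, a minimal sketch of that idea (only an illustration of "use
trylock instead of blocking while folio locks from the batch are held", not
Hugh's actual patch):

  #include <linux/buffer_head.h>

  /*
   * Illustration only: when several folio locks are already held as part
   * of a migration batch, do not block waiting for a buffer lock, since
   * the buffer lock holder may itself be waiting on one of those folio
   * locks (via the loop device as above).  Trylock and give up instead.
   */
  static bool migrate_trylock_buffers(struct buffer_head *head)
  {
      struct buffer_head *bh = head;

      do {
          if (!trylock_buffer(bh)) {
              /* Undo the buffer locks we already took and fail. */
              struct buffer_head *undo = head;

              while (undo != bh) {
                  unlock_buffer(undo);
                  undo = undo->b_this_page;
              }
              return false;   /* caller skips / retries this folio later */
          }
          bh = bh->b_this_page;
      } while (bh != head);

      return true;
  }

A failed trylock would just make that one folio's migration fail and be
retried later, which should be fine since migration is best-effort anyway.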

Honza


--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR