Re: Memory hotplug softlock issue

From: Michal Hocko
Date: Wed Nov 14 2018 - 10:00:34 EST


On Wed 14-11-18 22:52:50, Baoquan He wrote:
> On 11/14/18 at 10:01am, Michal Hocko wrote:
> > I have seen an issue where the migration cannot make forward progress
> > because of a glibc page with a reference count bumping up and down. The
> > most probable explanation is the faultaround code. I am working on this
> > and will post a patch soon. In any case, the migration should converge,
> > and if it doesn't then there is a bug lurking somewhere.
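
To expand a bit on what "converge" means here: each migration attempt has
to freeze the page's reference count at the value accounted for by its
mappings and the page cache; any unexpected extra reference makes the
attempt fail with -EAGAIN and the hotplug path simply retries. A
simplified sketch of that check (the helper name is made up; the real
check lives in mm/migrate.c):

	/*
	 * Illustrative only - not the exact kernel code. Migration can
	 * only replace the page once nobody else holds a transient
	 * reference to it. If something (e.g. faultaround) keeps taking
	 * and dropping references, this check keeps failing and the
	 * offlining loop never converges.
	 */
	static int sketch_try_migrate(struct page *page, int expected_count)
	{
		if (page_count(page) != expected_count)
			return -EAGAIN;	/* extra reference is in flight */
		/* ... unmap, replace the mapping, copy the contents ... */
		return 0;
	}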
> >
> > Failing on ENOMEM is a questionable thing. I haven't seen that happening
> > widely, but if it turns out to be a real problem then I wouldn't be opposed.
>
> I applied your debugging patches; the messages they print help a lot.
>
> Below is the dmesg log of the migration failure. It can't get past
> migrate_pages() and loops forever.
>
> [ +0.083841] migrating pfn 10fff7d0 failed
> [ +0.000005] page:ffffea043ffdf400 count:208 mapcount:201 mapping:ffff888dff4bdda8 index:0x2
> [ +0.012689] xfs_address_space_operations [xfs]
> [ +0.000030] name:"stress"
> [ +0.004556] flags: 0x5fffffc0000004(uptodate)
> [ +0.007339] raw: 005fffffc0000004 ffffc900000e3d80 ffffc900000e3d80 ffff888dff4bdda8
> [ +0.009488] raw: 0000000000000002 0000000000000000 000000cb000000c8 ffff888e7353d000
> [ +0.007726] page->mem_cgroup:ffff888e7353d000
> [ +0.084538] migrating pfn 10fff7d0 failed
> [ +0.000006] page:ffffea043ffdf400 count:210 mapcount:201 mapping:ffff888dff4bdda8 index:0x2
> [ +0.012798] xfs_address_space_operations [xfs]
> [ +0.000034] name:"stress"
> [ +0.004524] flags: 0x5fffffc0000004(uptodate)
> [ +0.007068] raw: 005fffffc0000004 ffffc900000e3d80 ffffc900000e3d80 ffff888dff4bdda8
> [ +0.009359] raw: 0000000000000002 0000000000000000 000000cb000000c8 ffff888e7353d000
> [ +0.007728] page->mem_cgroup:ffff888e7353d000
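
FWIW, the dump above already shows the pattern: with mapcount:201, a fully
mapped page cache page should have a reference count of roughly 201 (one
per mapping) + 1 (page cache) + 1 (the isolation reference held by the
migration code), i.e. ~203, so count:208 means a handful of extra
transient references. Note also that the count moves between the two
retries (208, then 210) while the mapcount stays at 201 - exactly the
reference count bumping up and down mentioned above.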

I wouldn't be surprised if this were the same or a similar issue to the
one I have been chasing recently. Could you try disabling faultaround to
see whether that helps? It seems to have helped in my particular case,
but I am still waiting for the final go-ahead to post the patch, as I do
not own the workload which triggered the issue.
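
In case you want to try it: faultaround normally maps a window of
already-cached pages around the faulting address, taking short-lived
references in the process. Assuming debugfs is mounted at the usual
place and a 4k page size, it can be effectively disabled at runtime by
shrinking that window to a single page, i.e.
echo 4096 > /sys/kernel/debug/fault_around_bytes.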
--
Michal Hocko
SUSE Labs