Re: [PATCH] mm: avoid blocking lock_page() in kcompactd

From: Michal Hocko
Date: Tue Jan 21 2020 - 04:00:54 EST


On Mon 20-01-20 14:48:05, Cong Wang wrote:
> Hi, Michal
>
> On Thu, Jan 9, 2020 at 11:38 PM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
> >
> > [CC Mel and Vlastimil]
> >
> > On Thu 09-01-20 14:56:46, Cong Wang wrote:
> > > We observed kcompactd hung at __lock_page():
> > >
> > > INFO: task kcompactd0:57 blocked for more than 120 seconds.
> > > Not tainted 4.19.56.x86_64 #1
> > > "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> > > kcompactd0 D 0 57 2 0x80000000
> > > Call Trace:
> > > ? __schedule+0x236/0x860
> > > schedule+0x28/0x80
> > > io_schedule+0x12/0x40
> > > __lock_page+0xf9/0x120
> > > ? page_cache_tree_insert+0xb0/0xb0
> > > ? update_pageblock_skip+0xb0/0xb0
> > > migrate_pages+0x88c/0xb90
> > > ? isolate_freepages_block+0x3b0/0x3b0
> > > compact_zone+0x5f1/0x870
> > > kcompactd_do_work+0x130/0x2c0
> > > ? __switch_to_asm+0x35/0x70
> > > ? __switch_to_asm+0x41/0x70
> > > ? kcompactd_do_work+0x2c0/0x2c0
> > > ? kcompactd+0x73/0x180
> > > kcompactd+0x73/0x180
> > > ? finish_wait+0x80/0x80
> > > kthread+0x113/0x130
> > > ? kthread_create_worker_on_cpu+0x50/0x50
> > > ret_from_fork+0x35/0x40
> > >
> > > which faddr2line maps to:
> > >
> > > migrate_pages+0x88c/0xb90:
> > > lock_page at include/linux/pagemap.h:483
> > > (inlined by) __unmap_and_move at mm/migrate.c:1024
> > > (inlined by) unmap_and_move at mm/migrate.c:1189
> > > (inlined by) migrate_pages at mm/migrate.c:1419
> > >
> > > Sometimes kcompactd eventually got out of this situation, sometimes not.
> >
> > What does this mean exactly? Who is holding the page lock?
>
> As I explained in other email, I didn't locate the process holding the page
> lock before I sent out this patch, as I was fooled by /proc/X/stack.
>
> But now I got its stack trace with `perf`:
>
> ffffffffa722aa06 shrink_inactive_list
> ffffffffa722b3d7 shrink_node_memcg
> ffffffffa722b85f shrink_node
> ffffffffa722bc89 do_try_to_free_pages
> ffffffffa722c179 try_to_free_mem_cgroup_pages
> ffffffffa7298703 try_charge
> ffffffffa729a886 mem_cgroup_try_charge
> ffffffffa720ec03 __add_to_page_cache_locked
> ffffffffa720ee3a add_to_page_cache_lru
> ffffffffa7312ddb iomap_readpages_actor
> ffffffffa73133f7 iomap_apply
> ffffffffa73135da iomap_readpages
> ffffffffa722062e read_pages
> ffffffffa7220b3f __do_page_cache_readahead
> ffffffffa7210554 filemap_fault
> ffffffffc039e41f __xfs_filemap_fault
> ffffffffa724f5e7 __do_fault
> ffffffffa724c5f2 __handle_mm_fault
> ffffffffa724cbc6 handle_mm_fault
> ffffffffa70a313e __do_page_fault
> ffffffffa7a00dfe page_fault
>
> It got stuck somewhere along the call path of mem_cgroup_try_charge(),
> and the trace events of mm_vmscan_lru_shrink_inactive() indicate this
> too:
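
If I read that backtrace right, this is the regular page cache add
path, which holds the page lock across the memcg charge. A condensed
paraphrase of the 4.19-era add_to_page_cache_lru() (mm/filemap.c; not
the exact code, just to show where the lock sits relative to the
charge):

	int add_to_page_cache_lru(struct page *page,
				  struct address_space *mapping,
				  pgoff_t offset, gfp_t gfp_mask)
	{
		int ret;

		/* the new page is locked before it goes into the page cache */
		__SetPageLocked(page);

		/*
		 * ... and __add_to_page_cache_locked() charges it to the
		 * memcg (mem_cgroup_try_charge -> try_charge), which can
		 * enter direct reclaim when the cgroup is at its limit.
		 */
		ret = __add_to_page_cache_locked(page, mapping, offset,
						 gfp_mask, NULL);
		if (unlikely(ret))
			__ClearPageLocked(page);
		else
			lru_cache_add(page);

		/* on success the page stays locked until the read IO completes */
		return ret;
	}

So if the charge drops into direct reclaim, the page is held locked for
the whole reclaim attempt, and being a freshly added readahead page it
will not be unlocked before the IO completes either. That is what
kcompactd's lock_page() in __unmap_and_move() is waiting for.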

So it seems that you are contending for the page lock. It is really
unexpected that the reclaim would take that long, though. Please try to
enable more vmscan tracepoints to see where the time is spent.
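
Something like this should be enough to start with (assuming tracefs
is mounted at the usual /sys/kernel/debug/tracing):

	cd /sys/kernel/debug/tracing
	echo 1 > events/vmscan/enable		# all mm_vmscan_* tracepoints
	echo 1 > events/compaction/enable	# the compaction side as well
	cat trace_pipe > /tmp/vmscan.trace

The mm_vmscan_memcg_reclaim_begin/end and mm_vmscan_lru_shrink_inactive
events in particular should tell us how long the individual reclaim
rounds take and how much they actually manage to reclaim.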

Thanks!
--
Michal Hocko
SUSE Labs