Re: Zram writeback feature unstable with heavy swap utilization - BUG: Bad page state in process...

From: Minchan Kim
Date: Thu Jul 26 2018 - 06:30:10 EST


On Thu, Jul 26, 2018 at 12:00:44PM +0200, Tino Lehnig wrote:
> On 07/26/2018 08:10 AM, Tino Lehnig wrote:
> > > A thing I could imagine is
> > > [0bcac06f27d75, skip swapcache for swapin of synchronous device]
> > > It was merged into v4.15. Could you check it by bisecting?
> >
> > Thanks, I will check that.
>
> So I get the same behavior as in v4.15-rc1 after this commit. All prior
> builds are fine.
>
> I have also tested all other 4.15 rc builds now and the symptoms are the
> same through rc8. KVM processes become unresponsive and I see kernel

Yup, I think it's the polling routine of swap_readpage. With the patch
I mentioned, the swap layer waits for the IO synchronously, and I believe
the page was in the backing device, not in zram memory.
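For reference, the relevant part of that commit in do_swap_page() looks
roughly like this (a simplified sketch from memory, not verbatim v4.15
source): when the swap device is marked SWP_SYNCHRONOUS_IO and the entry
has a single user, the swapcache is skipped and swap_readpage() is called
synchronously:

    /*
     * Simplified sketch of the skip-swapcache path added by
     * 0bcac06f27d75 in do_swap_page() -- paraphrased, not verbatim.
     */
    if (si->flags & SWP_SYNCHRONOUS_IO && __swap_count(si, entry) == 1) {
            /* skip swapcache: allocate a private page and read into it */
            page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, vmf->address);
            if (page) {
                    __SetPageLocked(page);
                    __SetPageSwapBacked(page);
                    set_page_private(page, entry.val);
                    lru_cache_add_anon(page);
                    swap_readpage(page, true);      /* synchronous read */
            }
    } else {
            /* normal path: go through the swap cache (and readahead) */
            page = lookup_swap_cache(entry, vma, vmf->address);
            ...
    }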

> messages like the one below. This happens with and without the writeback

Huh, you see it without writeback? That's weird. Without the writeback
feature, zram operation is always synchronous in-memory
compression/decompression, so you shouldn't hit the io_schedule path
below, which happens only for asynchronous IO.
Could you check one more time whether it happens without writeback?
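
To make the distinction concrete, the tail of swap_readpage() has roughly
this shape (again a simplified sketch from memory, not verbatim source):
in the synchronous case the faulting task polls for completion itself,
while in the asynchronous case the bio completion unlocks the page later,
and anyone touching the page meanwhile sleeps in lock_page()/io_schedule()
as in the trace below:

    /* Rough shape of the end of swap_readpage() -- simplified sketch */
    bio->bi_private = current;
    bio_get(bio);
    qc = submit_bio(bio);
    while (synchronous) {
            /* poll for completion instead of sleeping */
            set_current_state(TASK_UNINTERRUPTIBLE);
            if (!READ_ONCE(bio->bi_private))
                    break;
            if (!blk_poll(disk->queue, qc))
                    break;
    }
    __set_current_state(TASK_RUNNING);
    bio_put(bio);
    /*
     * If !synchronous, end_swap_bio_read() unlocks the page when the IO
     * finishes; until then other faulters wait in
     * __lock_page_or_retry() -> io_schedule().
     */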

> feature being used. The bad page state bug appears very rarely in these
> versions and only when writeback is active.

Yup, I will review the code more. I guess there are some places that
assume anonymous pages are always backed by the swapcache, so the
refcount accounting would be broken.
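For example (a made-up helper, not real kernel code, just to illustrate
the kind of assumption I mean): a check written before the skip-swapcache
patch could count on the extra reference the swap cache holds on every
swapped-in anonymous page:

    /*
     * Hypothetical example, not real kernel code: it assumes every
     * swapped-in anonymous page still sits in the swap cache.
     */
    static bool anon_page_ref_ok(struct page *page)
    {
            /* caller's ref + mappings + the swap cache's ref */
            int expected = page_mapcount(page) + 2;

            /*
             * A page read through the new skip-swapcache path never
             * enters the swap cache, so it carries one reference fewer
             * than this expects; freeing based on such a count would
             * leave the page in a bad state.
             */
            return page_count(page) == expected;
    }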

>
> Starting with rc9, I only get the same bad page state bug as in all newer
> kernels.

So, you mean you couldn't see the bad page state bug until 4.15-rc8?
You just see the hung task message below until 4.15-rc8, not the bad page bug?

>
> --
>
> [ 363.494793] INFO: task kworker/4:2:498 blocked for more than 120 seconds.
> [ 363.494872] Not tainted 4.14.0-zram-pre-rc1 #17
> [ 363.494943] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables
> this message.
> [ 363.495021] kworker/4:2 D 0 498 2 0x80000000
> [ 363.495029] Workqueue: events async_pf_execute
> [ 363.495030] Call Trace:
> [ 363.495037] ? __schedule+0x3bc/0x830
> [ 363.495039] schedule+0x32/0x80
> [ 363.495042] io_schedule+0x12/0x40
> [ 363.495045] __lock_page_or_retry+0x302/0x320
> [ 363.495047] ? page_cache_tree_insert+0xa0/0xa0
> [ 363.495051] do_swap_page+0x4ab/0x860
> [ 363.495054] __handle_mm_fault+0x77b/0x10c0
> [ 363.495056] handle_mm_fault+0xc6/0x1b0
> [ 363.495059] __get_user_pages+0xf9/0x620
> [ 363.495061] ? update_load_avg+0x5d6/0x6d0
> [ 363.495064] get_user_pages_remote+0x137/0x1f0
> [ 363.495067] async_pf_execute+0x62/0x180
> [ 363.495071] process_one_work+0x184/0x380
> [ 363.495073] worker_thread+0x4d/0x3c0
> [ 363.495076] kthread+0xf5/0x130
> [ 363.495078] ? process_one_work+0x380/0x380
> [ 363.495080] ? kthread_create_worker_on_cpu+0x50/0x50
> [ 363.495083] ? do_group_exit+0x3a/0xa0
> [ 363.495086] ret_from_fork+0x1f/0x30
>
> --
> Kind regards,
>
> Tino Lehnig