Re: [PATCH v2] vmscan: fix increasing nr_isolated incurred by putback unevictable pages

From: Vlastimil Babka
Date: Thu Aug 06 2015 - 08:21:47 EST


On 08/05/2015 02:52 AM, Jaewon Kim wrote:


On 2015-08-05 08:31, Minchan Kim wrote:
Hello,

On Tue, Aug 04, 2015 at 03:09:37PM -0700, Andrew Morton wrote:
On Tue, 04 Aug 2015 19:40:08 +0900 Jaewon Kim <jaewon31.kim@xxxxxxxxxxx> wrote:

reclaim_clean_pages_from_list() assumes that shrink_page_list() returns
the number of pages removed from the candidate list. But shrink_page_list()
puts back mlocked pages without handing them to the caller and without
counting them as nr_reclaimed. This causes nr_isolated to keep increasing.
To fix this, this patch changes shrink_page_list() to pass unevictable
pages back to the caller. The caller will take care of those pages.

..

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1157,7 +1157,7 @@ cull_mlocked:
if (PageSwapCache(page))
try_to_free_swap(page);
unlock_page(page);
- putback_lru_page(page);
+ list_add(&page->lru, &ret_pages);
continue;

activate_locked:

Is this going to cause a whole bunch of mlocked pages to be migrated
whereas in current kernels they stay where they are?

The only user that will see the change wrt migration is __alloc_contig_migrate_range(), which is explicit about isolating mlocked pages for migration (isolate_migratepages_range() calls isolate_migratepages_block() with ISOLATE_UNEVICTABLE). So this will make the migration work for clean page cache too.
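
For reference, here is a stubbed userspace sketch of that isolation policy (not kernel code; ISOLATE_UNEVICTABLE and the check it models in isolate_migratepages_block() are real, but the struct and helper below are placeholders for illustration):

/*
 * Not kernel code: a stubbed userspace sketch of the isolation policy
 * mentioned above.  ISOLATE_UNEVICTABLE is the real flag; the struct
 * and logic here are simplified placeholders.
 */
#include <stdbool.h>
#include <stdio.h>

enum isolate_mode { ISOLATE_DEFAULT = 0, ISOLATE_UNEVICTABLE = 1 };

struct page { bool unevictable; };

/* models the PageUnevictable() check inside isolate_migratepages_block() */
static bool can_isolate(const struct page *p, enum isolate_mode mode)
{
        if (p->unevictable && !(mode & ISOLATE_UNEVICTABLE))
                return false;   /* ordinary reclaim/compaction skips mlocked pages */
        return true;            /* the alloc_contig_range()/CMA path takes them too */
}

int main(void)
{
        struct page mlocked = { .unevictable = true };

        printf("reclaim isolates mlocked page: %d\n",
               can_isolate(&mlocked, ISOLATE_DEFAULT));
        printf("cma isolates mlocked page:     %d\n",
               can_isolate(&mlocked, ISOLATE_UNEVICTABLE));
        return 0;
}

Ordinary reclaim/compaction passes a mode without ISOLATE_UNEVICTABLE and therefore never isolates mlocked pages; the CMA path deliberately sets it, which is why mlocked clean page cache can reach shrink_page_list() via reclaim_clean_pages_from_list() at all.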



It fixes two issues.

1. With unevictable pages in the range, cma_alloc() will still succeed.

Strictly speaking, cma_alloc() in the current kernel can fail because of unevictable pages.

2. It fixes the leak of the vmstat NR_ISOLATED counter.

With it, too_many_isolated() works correctly. Otherwise, it could cause a hang
until the process gets SIGKILL.

This should be more explicit in the changelog. The first issue is not mentioned at all. The second is not clear from the description.
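
For the record, a minimal userspace model of the counter leak, assuming the simplified accounting described above (the bucket sizes are invented; only the role of NR_ISOLATED is taken from the kernel):

/*
 * Not kernel code: a minimal userspace model of the NR_ISOLATED leak.
 * The numbers are made up for illustration.
 */
#include <stdio.h>

int main(void)
{
        long nr_isolated = 0;

        /* isolation (e.g. isolate_migratepages_range()) accounts every page */
        nr_isolated += 8;

        long nr_reclaimed = 4;          /* clean pages shrink_page_list() freed */
        long nr_still_on_list = 1;      /* pages handed back on the caller's list */
        long nr_mlocked_putback = 3;    /* cull_mlocked: putback_lru_page(), not reported */

        nr_isolated -= nr_reclaimed;        /* reclaim_clean_pages_from_list() subtracts ret */
        nr_isolated -= nr_still_on_list;    /* later putback of the remaining list */
        /* nothing ever subtracts nr_mlocked_putback */
        (void)nr_mlocked_putback;

        printf("leaked NR_ISOLATED: %ld\n", nr_isolated);  /* prints 3 */
        return 0;
}

With the patch, the mlocked pages stay on ret_pages and therefore fall into the "handed back on the caller's list" bucket, so the counter balances out and too_many_isolated() no longer sees an ever-growing count.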


So, I think it's stable material.

Acked-by: Minchan Kim <minchan@xxxxxxxxxx>

Acked-by: Vlastimil Babka <vbabka@xxxxxxx>



Hello

The traditional shrink_inactive_list() path will still put the unevictable pages back, as it does anyway through putback_inactive_pages().
However, as Minchan Kim said, cma_alloc() will succeed more often by migrating unevictable pages.
In the current kernel, I think, cma_alloc() already tries to migrate unevictable pages, except for clean page cache.
This patch will allow clean page cache to be migrated by cma_alloc() as well.

Thank you


