Re: [PATCH v2] mm/vmscan: count lazyfree pages and fix nr_isolated_* mismatch

From: Johannes Weiner
Date: Wed Apr 22 2020 - 09:07:57 EST


On Wed, Apr 22, 2020 at 05:48:15PM +0900, Jaewon Kim wrote:
> @@ -1295,11 +1295,15 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> 		 */
> 		if (page_mapped(page)) {
> 			enum ttu_flags flags = ttu_flags | TTU_BATCH_FLUSH;
> +			bool lazyfree = PageAnon(page) && !PageSwapBacked(page);
>
> 			if (unlikely(PageTransHuge(page)))
> 				flags |= TTU_SPLIT_HUGE_PMD;
> +
> 			if (!try_to_unmap(page, flags)) {
> 				stat->nr_unmap_fail += nr_pages;
> +				if (lazyfree && PageSwapBacked(page))

This looks pretty strange, until you remember that try_to_unmap()
could SetPageSwapBacked() the page again.
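
For context, that's the MADV_FREE handling in try_to_unmap_one();
roughly, paraphrased from mm/rmap.c rather than the exact code:

	if (PageAnon(page) && !PageSwapBacked(page)) {
		/* MADV_FREE page */
		if (!PageDirty(page))
			goto discard;	/* still clean, drop it */
		/*
		 * Redirtied since MADV_FREE: it can't be discarded,
		 * so put it back on the swapbacked path and fail
		 * the unmap.
		 */
		set_pte_at(mm, address, pvmw.pte, pteval);
		SetPageSwapBacked(page);
		ret = false;
		break;
	}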

This might be more obvious?

	was_swapbacked = PageSwapBacked(page);
	if (!try_to_unmap(page, flags)) {
		stat->nr_unmap_fail += nr_pages;
		if (!was_swapbacked && PageSwapBacked(page))
> +					stat->nr_lazyfree_fail += nr_pages;
> 				goto activate_locked;

Or at least was_lazyfree.
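
I.e. keep the single check from v2, just with a name that spells out
the before/after relationship (untested):

	bool was_lazyfree = PageAnon(page) && !PageSwapBacked(page);
	...
	if (!try_to_unmap(page, flags)) {
		stat->nr_unmap_fail += nr_pages;
		if (was_lazyfree && PageSwapBacked(page))
			stat->nr_lazyfree_fail += nr_pages;
		goto activate_locked;
	}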

> @@ -1491,8 +1495,8 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
> 		.priority = DEF_PRIORITY,
> 		.may_unmap = 1,
> 	};
> -	struct reclaim_stat dummy_stat;
> -	unsigned long ret;
> +	struct reclaim_stat stat;
> +	unsigned long reclaimed;

nr_reclaimed would be better.

I also prefer keeping dummy_stat, since that's still what it is.
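
So for the naming, something along these lines (untested, just to
illustrate the names):

	struct reclaim_stat dummy_stat;
	unsigned long nr_reclaimed;
	...
	nr_reclaimed = shrink_page_list(&clean_pages, zone->zone_pgdat, &sc,
					TTU_IGNORE_ACCESS, &dummy_stat, true);
	...
	return nr_reclaimed;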