Re: [PATCH V4 4/6] mm: reclaim MADV_FREE pages

From: Johannes Weiner
Date: Thu Feb 23 2017 - 11:19:55 EST


On Wed, Feb 22, 2017 at 10:50:42AM -0800, Shaohua Li wrote:
> @@ -1424,6 +1424,12 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> dec_mm_counter(mm, MM_ANONPAGES);
> rp->lazyfreed++;
> goto discard;
> + } else if (!PageSwapBacked(page)) {
> + /* dirty MADV_FREE page */
> + set_pte_at(mm, address, pvmw.pte, pteval);
> + ret = SWAP_DIRTY;
> + page_vma_mapped_walk_done(&pvmw);
> + break;
> }
>
> if (swap_duplicate(entry) < 0) {
> @@ -1525,8 +1531,8 @@ int try_to_unmap(struct page *page, enum ttu_flags flags)
>
> if (ret != SWAP_MLOCK && !page_mapcount(page)) {
> ret = SWAP_SUCCESS;
> - if (rp.lazyfreed && !PageDirty(page))
> - ret = SWAP_LZFREE;
> + if (rp.lazyfreed && PageDirty(page))
> + ret = SWAP_DIRTY;

Can this actually happen? If the page is dirty, ret should already be
SWAP_DIRTY, right? How would a dirty page get fully unmapped?

It seems to me rp.lazyfreed can be removed entirely now that we don't
have to identify the lazyfree case anymore. The failure case is much
easier to identify: all it takes is a single dirty pte.
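
Something like this, maybe (untested sketch; drops rp->lazyfreed and
whatever plumbing exists just to carry it):

	/* in try_to_unmap_one(), the clean lazyfree case: */
	dec_mm_counter(mm, MM_ANONPAGES);
	goto discard;

	/* in try_to_unmap(): a fully unmapped page can't be dirty,
	 * try_to_unmap_one() bails with SWAP_DIRTY on the first
	 * dirty lazyfree pte before the walk can finish */
	if (ret != SWAP_MLOCK && !page_mapcount(page))
		ret = SWAP_SUCCESS;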

> @@ -1118,8 +1120,10 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> /*
> * Anonymous process memory has backing store?
> * Try to allocate it some swap space here.
> + * Lazyfree page could be freed directly
> */
> - if (PageAnon(page) && !PageSwapCache(page)) {
> + if (PageAnon(page) && !PageSwapCache(page) &&
> + PageSwapBacked(page)) {

Nit: I'd do PageAnon(page) && PageSwapBacked(page) &&
!PageSwapCache(page), since anon && swapbacked together describe the
page type and swapcache the state. Plus, anon && swapbacked go
together everywhere else.
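
I.e.:

	if (PageAnon(page) && PageSwapBacked(page) &&
	    !PageSwapCache(page)) {

Purely cosmetic, behavior is unchanged.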

Otherwise, looks very straightforward!

Thanks