[PATCH] mm: drop mark_page_accessed from the unmap path

From: Michal Hocko
Date: Fri Aug 09 2019 - 08:29:59 EST


Minchan has noticed that mark_page_accessed can take quite some time
during unmap:
: I had a time to benchmark it via adding some trace_printk hooks between
: pte_offset_map_lock and pte_unmap_unlock in zap_pte_range. The testing
: device is 2018 premium mobile device.
:
: I can get 2ms delay rather easily to release 2M(ie, 512 pages) when the
: task runs on little core even though it doesn't have any IPI and LRU
: lock contention. It's already too heavy.
:
: If I remove activate_page, 35-40% overhead of zap_pte_range is gone
: so most of the overhead (about 0.7ms) comes from activate_page via
: mark_page_accessed. Thus, if there is LRU contention, that 0.7ms could
: accumulate up to several ms.

bf3f3bc5e734 ("mm: don't mark_page_accessed in fault path") replaced
SetPageReferenced by mark_page_accessed, arguing that the former is not
sufficient when mark_page_accessed is removed from the fault path,
because it doesn't promote the page to the active list. It is true that
a page mapped by a single process might not get promoted even when
referenced if the reclaim checks it after the unmap, but does that
matter that much? Can we consider the page hot if there are no other
users? Moreover, we have had workingset detection in place since then,
so a subsequent refault would activate the page if it was really hot.

Drop the expensive mark_page_accessed and restore SetPageReferenced,
which transfers the reference information into the struct page, to
reduce the unmap overhead for now. Should we find workloads that
noticeably depend on the old behavior, we should find a way to make
mark_page_accessed less expensive.

Signed-off-by: Michal Hocko <mhocko@xxxxxxxx>
---
mm/memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory.c b/mm/memory.c
index e2bb51b6242e..ced521df8ee7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1053,7 +1053,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
}
if (pte_young(ptent) &&
likely(!(vma->vm_flags & VM_SEQ_READ)))
- mark_page_accessed(page);
+ SetPageReferenced(page);
}
rss[mm_counter(page)]--;
page_remove_rmap(page, false);
--
2.20.1

--
Michal Hocko
SUSE Labs