[PATCH -mm] mm: more likely reclaim MADV_SEQUENTIAL mappings

From: Johannes Weiner
Date: Sat Jul 19 2008 - 13:32:31 EST

File pages accessed only once through sequential-read mappings between
fault and scan time are perfect candidates for reclaim.

This patch makes page_referenced() ignore these singular references, so
the pages stay on the inactive list, where they will likely fall victim
to the next reclaim pass.

Already-activated pages are still treated normally. If they were
accessed multiple times and therefore promoted to the active list, we
probably want to keep them.

Benchmarks show that sequentially reading big (relative to the system's
memory) MADV_SEQUENTIAL mappings causes much less kernel activity.  In
particular there is much less LRU moving-around, because we never
activate read-once pages in the first place just to demote them again.

And leaving these perfect reclaim candidates on the inactive list makes
it more likely for the real working set to survive the next reclaim
cycle.

Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxxx>
Signed-off-by: Rik van Riel <riel@xxxxxxxxxx>
---
 mm/rmap.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

Benchmark graphs and the test application can be found here:


Patch is against -mm, although only tested on good ol' linus-tree as
-mmotm wouldn't compile at the moment.

--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -333,8 +333,18 @@ static int page_referenced_one(struct pa
 		goto out_unmap;
 
-	if (ptep_clear_flush_young_notify(vma, address, pte))
-		referenced++;
+	if (ptep_clear_flush_young_notify(vma, address, pte)) {
+		/*
+		 * If there was just one sequential access to the
+		 * page, ignore it.  Otherwise, mark_page_accessed()
+		 * will have promoted the page to the active list and
+		 * it should be kept.
+		 */
+		if (VM_SequentialReadHint(vma) && !PageActive(page))
+			ClearPageReferenced(page);
+		else
+			referenced++;
+	}
 
 	/* Pretend the page is referenced if the task has the
 	   swap token and is in the middle of a page fault. */
@@ -455,9 +465,6 @@ int page_referenced(struct page *page, i
 	int referenced = 0;
 
-	if (TestClearPageReferenced(page))
-		referenced++;
-
 	if (page_mapped(page) && page->mapping) {
 		if (PageAnon(page))
 			referenced += page_referenced_anon(page, mem_cont);
@@ -473,6 +480,9 @@ int page_referenced(struct page *page, i
 
+	if (TestClearPageReferenced(page))
+		referenced++;
+
 	if (page_test_and_clear_young(page))

To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/