[RFC PATCH 06/14] mm/rmap: avoid flushing on page_vma_mkclean_one() when possible

From: Nadav Amit
Date: Mon Jul 18 2022 - 15:37:27 EST


From: Nadav Amit <namit@xxxxxxxxxx>

x86 can avoid a TLB flush when write-protecting clean, writable PTEs:
since such an entry is not dirty, the CPU must re-walk the page tables
to set the dirty bit on the first subsequent write, at which point it
sees the write-protected PTE and faults instead of using a stale
writable TLB entry. page_vma_mkclean_one() does not take advantage of
this behavior. Adapt it to flush only when pte_needs_flush() indicates
that a flush is needed, or when a flush is already pending for the mm.
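
To illustrate, here is a minimal sketch of the kind of check that
pte_needs_flush() performs. It is illustrative only: the real x86
helper introduced earlier in this series compares the old and new PTE
flag words, and example_pte_needs_flush() is a hypothetical name used
purely for exposition.

/*
 * Hypothetical sketch, not the in-tree implementation: demoting a
 * *dirty* PTE needs a flush, since a stale writable TLB entry could
 * still permit silent writes.  Write-protecting a *clean* PTE does
 * not: the first write forces the CPU to re-walk the page tables to
 * set the dirty bit, where it sees the write-protected PTE and
 * faults instead.
 */
static inline bool example_pte_needs_flush(pte_t oldpte, pte_t newpte)
{
	/* A non-present PTE cannot be cached in the TLB. */
	if (!pte_present(oldpte))
		return false;

	/* Demoting a dirty entry requires a flush. */
	if (pte_dirty(oldpte) &&
	    (!pte_write(newpte) || !pte_dirty(newpte)))
		return true;

	/* Clean entry being write-protected and cleaned: skip it. */
	return false;
}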

Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: David Hildenbrand <david@xxxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Yu Zhao <yuzhao@xxxxxxxxxx>
Cc: Nick Piggin <npiggin@xxxxxxxxx>
Signed-off-by: Nadav Amit <namit@xxxxxxxxxx>
---
mm/rmap.c | 16 ++++++++++++----
1 file changed, 12 insertions(+), 4 deletions(-)
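
Note: the patch below replaces the unconditional ptep_clear_flush()
with the ptep_modify_prot_start()/ptep_modify_prot_commit() pair, so
the TLB flush can be made conditional without losing hardware
accessed/dirty-bit updates that race with the modification. For
reference, the generic fallbacks look roughly like the following
(paraphrased from include/linux/pgtable.h; architectures may override
them, so exact details vary by kernel version):

static inline pte_t ptep_modify_prot_start(struct vm_area_struct *vma,
					   unsigned long addr, pte_t *ptep)
{
	/* Atomically read and clear the PTE, so concurrent hardware
	 * A/D-bit updates cannot be lost while it is being changed. */
	return ptep_get_and_clear(vma->vm_mm, addr, ptep);
}

static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
					   unsigned long addr, pte_t *ptep,
					   pte_t old_pte, pte_t pte)
{
	/* Install the new PTE; old_pte lets architectures optimize. */
	__set_pte_at(vma->vm_mm, addr, ptep, pte);
}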

diff --git a/mm/rmap.c b/mm/rmap.c
index 83172ee0ea35..23997c387858 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -961,17 +961,25 @@ static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 
 		address = pvmw->address;
 		if (pvmw->pte) {
-			pte_t entry;
+			pte_t entry, oldpte;
 			pte_t *pte = pvmw->pte;
 
 			if (!pte_dirty(*pte) && !pte_write(*pte))
 				continue;
 
 			flush_cache_page(vma, address, pte_pfn(*pte));
-			entry = ptep_clear_flush(vma, address, pte);
-			entry = pte_wrprotect(entry);
+			oldpte = ptep_modify_prot_start(pvmw->vma, address,
+							pte);
+
+			entry = pte_wrprotect(oldpte);
 			entry = pte_mkclean(entry);
-			set_pte_at(vma->vm_mm, address, pte, entry);
+
+			if (pte_needs_flush(oldpte, entry) ||
+			    mm_tlb_flush_pending(vma->vm_mm))
+				flush_tlb_page(vma, address);
+
+			ptep_modify_prot_commit(vma, address, pte, oldpte,
+						entry);
 			ret = 1;
 		} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
--
2.25.1