Re: [PATCH v3 2/3] khugepaged: Optimize __collapse_huge_page_copy_succeeded() by PTE batching

From: Dev Jain
Date: Wed Jul 23 2025 - 00:05:31 EST



On 22/07/25 9:33 pm, David Hildenbrand wrote:
> On 22.07.25 17:05, Dev Jain wrote:
>> Use PTE batching to optimize __collapse_huge_page_copy_succeeded().
>>
>> On arm64, suppose khugepaged is scanning a pte-mapped 2MB THP for collapse.
>> Then, calling ptep_clear() for every pte will cause a TLB flush for every
>> contpte block. Instead, clear_ptes() does a contpte_try_unfold_partial()
>> which will flush the TLB only for the (if any) starting and ending contpte
>> block, if they partially overlap with the range khugepaged is looking at.

> I suggest not talking so much about arm specifics.
>
> Simply say that batching reduces the number of TLB flushes, especially
> on architectures that support cont-pte optimizations.

Makes sense.
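
To make the flush arithmetic concrete, here is a toy userspace model of
the accounting described above; CONT_PTES and the flush-counting rules
are illustrative assumptions, not the actual arm64 contpte code:

/*
 * Toy model: per-PTE clearing flushes once per contpte block touched,
 * while a batched clear only has to unfold-and-flush the blocks that
 * partially overlap the range. Illustrative assumptions throughout.
 */
#include <stdio.h>

#define CONT_PTES	16	/* PTEs per contpte block (e.g. 64K/4K) */
#define HPAGE_PMD_NR	512	/* PTEs covering a 2MB THP with 4K pages */

/* Per-PTE clearing: the first ptep_clear() inside each contpte block
 * unfolds it and flushes that block -> one flush per block touched. */
static int flushes_per_pte_clear(int start, int n)
{
	int first = start / CONT_PTES;
	int last = (start + n - 1) / CONT_PTES;

	return last - first + 1;
}

/* Batched clearing: only contpte blocks partially overlapping the
 * range need an unfold-and-flush; fully covered blocks do not. */
static int flushes_batched_clear(int start, int n)
{
	int flushes = 0;

	if (start % CONT_PTES)
		flushes++;	/* partial block at the start */
	if ((start + n) % CONT_PTES)
		flushes++;	/* partial block at the end */
	return flushes;
}

int main(void)
{
	/* aligned 2MB range: 32 flushes per-PTE vs 0 edge flushes batched */
	printf("per-pte: %d flushes\n", flushes_per_pte_clear(0, HPAGE_PMD_NR));
	printf("batched: %d flushes\n", flushes_batched_clear(0, HPAGE_PMD_NR));
	return 0;
}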


>> For all arches, there should be a benefit due to batching atomic operations
>> on mapcounts due to folio_remove_rmap_ptes() and saving some calls.
>
> I would rephrase that to "Independent of that, batching PTE unmapping has
> known performance benefits (i.e., less refcount and mapcount updates)".

Thanks.
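
The refcount/mapcount point can be sketched the same way; the helper
names below are stand-ins for the kernel rmap API, purely for
illustration:

#include <stdatomic.h>
#include <stdio.h>

static atomic_int mapcount;

static void remove_rmap_pte_model(void)
{
	atomic_fetch_sub(&mapcount, 1);		/* one atomic RMW per PTE */
}

static void remove_rmap_ptes_model(int nr)
{
	atomic_fetch_sub(&mapcount, nr);	/* one atomic RMW per batch */
}

int main(void)
{
	atomic_store(&mapcount, 512);
	for (int i = 0; i < 512; i++)
		remove_rmap_pte_model();	/* per-PTE: 512 atomic ops */

	atomic_store(&mapcount, 512);
	remove_rmap_ptes_model(512);		/* batched: 1 atomic op */

	printf("mapcount=%d\n", atomic_load(&mapcount));
	return 0;
}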


>> Signed-off-by: Dev Jain <dev.jain@xxxxxxx>
>> ---
>>   mm/khugepaged.c | 25 ++++++++++++++++++-------
>>   1 file changed, 18 insertions(+), 7 deletions(-)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index a55fb1dcd224..63517ef7eafb 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -700,12 +700,15 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>>                           spinlock_t *ptl,
>>                           struct list_head *compound_pagelist)
>>   {
>> +    unsigned long end = address + HPAGE_PMD_SIZE;
>>       struct folio *src, *tmp;
>> -    pte_t *_pte;
>>       pte_t pteval;
>> +    pte_t *_pte;
>> +    int nr_ptes;
>
> Nit: I guess we should switch to "unsigned int" here now for consistency
> with folio_pte_batch().

Okay.

>>
>> -    for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
>> -         _pte++, address += PAGE_SIZE) {
>> +    for (_pte = pte; _pte < pte + HPAGE_PMD_NR; _pte += nr_ptes,
>> +         address += nr_ptes * PAGE_SIZE) {
>> +        nr_ptes = 1;
>>           pteval = ptep_get(_pte);
>>           if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
>>               add_mm_counter(vma->vm_mm, MM_ANONPAGES, 1);
>> @@ -722,18 +725,26 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>>               struct page *src_page = pte_page(pteval);
>>
>>               src = page_folio(src_page);
>> -            if (!folio_test_large(src))
>> +
>> +            if (folio_test_large(src)) {
>> +                int max_nr_ptes = (end - address) >> PAGE_SHIFT;
>
> Ditto.
>
>> +
>> +                nr_ptes = folio_pte_batch(src, _pte, pteval, max_nr_ptes);
>> +            } else {
>>                   release_pte_folio(src);
>> +            }
>> +

> Acked-by: David Hildenbrand <david@xxxxxxxxxx>

Thanks.
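
For completeness, the shape of the new loop can be modeled in userspace
as below; folio_of[] and folio_pte_batch_model() are illustrative
stand-ins for the kernel's folio_pte_batch(), not the real API:

/*
 * Model of the new loop shape: advance by nr_ptes per iteration, where
 * nr_ptes counts consecutive PTEs mapping the same large folio.
 */
#include <stdio.h>

#define HPAGE_PMD_NR	512

static int folio_of[HPAGE_PMD_NR];	/* folio id mapped by each PTE slot */

static int folio_pte_batch_model(int idx, int max_nr)
{
	int nr = 1;

	while (nr < max_nr && folio_of[idx + nr] == folio_of[idx])
		nr++;
	return nr;
}

int main(void)
{
	int batches = 0;

	/* pretend the range is backed by 16-page large folios */
	for (int i = 0; i < HPAGE_PMD_NR; i++)
		folio_of[i] = i / 16;

	for (int i = 0; i < HPAGE_PMD_NR; /* advanced below */) {
		int max_nr_ptes = HPAGE_PMD_NR - i;	/* like (end - address) >> PAGE_SHIFT */
		int nr_ptes = folio_pte_batch_model(i, max_nr_ptes);

		batches++;	/* one clear/rmap operation per batch */
		i += nr_ptes;	/* like _pte += nr_ptes; address += nr_ptes * PAGE_SIZE */
	}

	printf("%d batched ops instead of %d per-PTE ops\n",
	       batches, HPAGE_PMD_NR);
	return 0;
}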