Re: [PATCH v2 2/3] khugepaged: Optimize collapse_pte_mapped_thp() for large folios by PTE batching

From: Dev Jain
Date: Wed Jun 25 2025 - 23:49:41 EST



On 25/06/25 6:41 pm, Lorenzo Stoakes wrote:
> On Wed, Jun 25, 2025 at 11:28:05AM +0530, Dev Jain wrote:
> > Use PTE batching to optimize collapse_pte_mapped_thp().
> >
> > On arm64, suppose khugepaged is scanning a pte-mapped 2MB THP for collapse.
> > Then, calling ptep_clear() for every pte will cause a TLB flush for every
> > contpte block. Instead, clear_full_ptes() does a
> > contpte_try_unfold_partial(), which will flush the TLB only for the
> > starting and ending contpte blocks, if any, that partially overlap with
> > the range khugepaged is looking at.
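
(A note on the non-arm64 side of this: on architectures that don't override
clear_full_ptes(), the generic fallback is just a loop of per-PTE clears, so
the conversion should be behaviour-neutral for them. Roughly, modulo kernel
version, the fallback in include/linux/pgtable.h looks like:

static inline void clear_full_ptes(struct mm_struct *mm, unsigned long addr,
		pte_t *ptep, unsigned int nr, int full)
{
	/* Clear nr consecutive PTEs; arm64 overrides this for contpte. */
	for (;;) {
		ptep_get_and_clear_full(mm, addr, ptep, full);
		if (--nr == 0)
			break;
		ptep++;
		addr += PAGE_SIZE;
	}
}

Also, since haddr is PMD-aligned, a fully populated 2MB range forms a single
batch whose edges are contpte-aligned, so in the common case arm64 needs no
TLB flush in this loop at all; the flush is deferred to
pmdp_collapse_flush().)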

> > For all arches, there should be a benefit from batching the atomic
> > operations on the mapcounts via folio_remove_rmap_ptes().
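
(To make the equivalence explicit: the per-PTE variant is literally the
nr == 1 case of the batched call; from include/linux/rmap.h:

#define folio_remove_rmap_pte(folio, page, vma) \
	folio_remove_rmap_ptes(folio, page, 1, vma)

so this batches the atomic mapcount updates, one call per batch instead of
one per page, with no semantic change.)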

> > Note that we do not need to make a change to the check
> > "if (folio_page(folio, i) != page)"; if the i'th page of the folio is
> > equal to the first page of our batch, then pages i + 1, ...,
> > i + nr_batch_ptes - 1 of the folio will be equal to the corresponding
> > pages of our batch, since the batch maps consecutive pages.

> > No issues were observed with mm-selftests.
> >
> > Signed-off-by: Dev Jain <dev.jain@xxxxxxx>
> > ---
> >  mm/khugepaged.c | 38 ++++++++++++++++++++++++++------------
> >  1 file changed, 26 insertions(+), 12 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 3944b112d452..4c8d33abfbd8 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1499,15 +1499,16 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
> >  int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
> >  			    bool install_pmd)
> >  {
> > +	int nr_mapped_ptes = 0, nr_batch_ptes, result = SCAN_FAIL;
> >  	struct mmu_notifier_range range;
> >  	bool notified = false;
> >  	unsigned long haddr = addr & HPAGE_PMD_MASK;
> > +	unsigned long end = haddr + HPAGE_PMD_SIZE;
> >  	struct vm_area_struct *vma = vma_lookup(mm, haddr);
> >  	struct folio *folio;
> >  	pte_t *start_pte, *pte;
> >  	pmd_t *pmd, pgt_pmd;
> >  	spinlock_t *pml = NULL, *ptl;
> > -	int nr_ptes = 0, result = SCAN_FAIL;
> >  	int i;
> >
> >  	mmap_assert_locked(mm);
> > @@ -1621,11 +1622,17 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
> >  		goto abort;
> >
> >  	/* step 2: clear page table and adjust rmap */
> > -	for (i = 0, addr = haddr, pte = start_pte;
> > -	     i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE, pte++) {
> > +	for (i = 0, addr = haddr, pte = start_pte; i < HPAGE_PMD_NR;
> > +	     i += nr_batch_ptes, addr += nr_batch_ptes * PAGE_SIZE,
> > +	     pte += nr_batch_ptes) {
> > +		const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
> > +		int max_nr_batch_ptes = (end - addr) >> PAGE_SHIFT;
> > +		struct folio *mapped_folio;
> >  		struct page *page;
> >  		pte_t ptent = ptep_get(pte);
> >
> > +		nr_batch_ptes = 1;
> > +
> >  		if (pte_none(ptent))
> >  			continue;
> >  		/*
> > @@ -1639,26 +1646,33 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
> >  			goto abort;
> >  		}
> >  		page = vm_normal_page(vma, addr, ptent);
> > +		mapped_folio = page_folio(page);
> > +
> >  		if (folio_page(folio, i) != page)
> >  			goto abort;
> Isn't this asserting that folio == mapped_folio here? We're saying page is
> the ith page of folio, so why do we need to look up mapped_folio?

We need to check for all PTEs whether they map the right page or not; this
may get disturbed by mremap() and the like.
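
To spell out why checking only the head of the batch still suffices:
folio_pte_batch() only returns a batch of consecutive PTEs that map
consecutive pages of a single folio, so once the head has passed the
existing check, the rest of the batch is implied. Illustratively (this is
not in the patch, just the invariant the batching gives us; page belongs to
folio, hence mapped_folio == folio):

	int k;

	/* Holds for any batch whose head passed the folio_page() check. */
	for (k = 1; k < nr_batch_ptes; k++)
		VM_WARN_ON_ONCE(folio_page(folio, i + k) != page + k);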


> > +		mapped_folio = page_folio(page);
> You're assigning this twice.

Forgot to remove, thanks.
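
A note on the flags while we're here: FPB_IGNORE_DIRTY |
FPB_IGNORE_SOFT_DIRTY lets the batch span PTEs that differ only in their
(soft-)dirty bits, which is fine in this loop since, as the comment below
says, the shmem folio is already dirty and the file is read-only. Roughly
(a sketch of the helper in mm/internal.h, modulo kernel version), the
batching code normalizes the ignored bits away before comparing PTEs:

static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
{
	if (flags & FPB_IGNORE_DIRTY)
		pte = pte_mkclean(pte);
	if (flags & FPB_IGNORE_SOFT_DIRTY)
		pte = pte_clear_soft_dirty(pte);
	/* the accessed and write bits never break a batch */
	return pte_wrprotect(pte_mkold(pte));
}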


> > +		nr_batch_ptes = folio_pte_batch(mapped_folio, addr, pte, ptent,
> > +						max_nr_batch_ptes, flags,
> > +						NULL, NULL, NULL);
> > +
> >  		/*
> >  		 * Must clear entry, or a racing truncate may re-remove it.
> >  		 * TLB flush can be left until pmdp_collapse_flush() does it.
> >  		 * PTE dirty? Shmem page is already dirty; file is read-only.
> >  		 */
> > -		ptep_clear(mm, addr, pte);
> > -		folio_remove_rmap_pte(folio, page, vma);
> > -		nr_ptes++;
> > +		clear_full_ptes(mm, addr, pte, nr_batch_ptes, /* full = */ false);
> > +		folio_remove_rmap_ptes(folio, page, nr_batch_ptes, vma);
> > +		nr_mapped_ptes += nr_batch_ptes;
> >  	}
> >
> >  	if (!pml)
> >  		spin_unlock(ptl);
> >
> >  	/* step 3: set proper refcount and mm_counters. */
> > -	if (nr_ptes) {
> > -		folio_ref_sub(folio, nr_ptes);
> > -		add_mm_counter(mm, mm_counter_file(folio), -nr_ptes);
> > +	if (nr_mapped_ptes) {
> > +		folio_ref_sub(folio, nr_mapped_ptes);
> > +		add_mm_counter(mm, mm_counter_file(folio), -nr_mapped_ptes);
> >  	}
> >
> >  	/* step 4: remove empty page table */
> > @@ -1691,10 +1705,10 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
> >  			       : SCAN_SUCCEED;
> >  	goto drop_folio;
> >  abort:
> > -	if (nr_ptes) {
> > +	if (nr_mapped_ptes) {
> I know it's ironic coming from me :P but I'm not sure why we need to churn
> this up by renaming?

Because nr_ptes is an existing variable, and I need a new variable to make
the jump at the end of each PTE batch; renaming the old one to
nr_mapped_ptes keeps the per-iteration stride distinct from the running
total.
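
To illustrate the two roles with a stripped-down sketch of the loop (the
checks between the batch lookup and the accounting are elided):

	for (i = 0, addr = haddr, pte = start_pte; i < HPAGE_PMD_NR;
	     i += nr_batch_ptes, addr += nr_batch_ptes * PAGE_SIZE,
	     pte += nr_batch_ptes) {
		pte_t ptent = ptep_get(pte);

		/* default stride: a pte_none() slot advances by one */
		nr_batch_ptes = 1;
		if (pte_none(ptent))
			continue;

		/* a mapped batch advances the loop by its full width ... */
		nr_batch_ptes = folio_pte_batch(mapped_folio, addr, pte, ptent,
						max_nr_batch_ptes, flags,
						NULL, NULL, NULL);

		/* ... and only mapped PTEs feed the ref/mm counters */
		nr_mapped_ptes += nr_batch_ptes;
	}

nr_batch_ptes is the per-iteration stride, which is new; nr_mapped_ptes is
the running total fed to folio_ref_sub() and add_mm_counter(), which is the
old nr_ptes. Renaming just keeps the two from being confused.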


> >  		flush_tlb_mm(mm);
> > -		folio_ref_sub(folio, nr_ptes);
> > -		add_mm_counter(mm, mm_counter_file(folio), -nr_ptes);
> > +		folio_ref_sub(folio, nr_mapped_ptes);
> > +		add_mm_counter(mm, mm_counter_file(folio), -nr_mapped_ptes);
> >  	}
> >  unlock:
> >  	if (start_pte)
> > --
> > 2.30.2
