Re: [PATCH v1 3/4] mm: split folio_pte_batch() into folio_pte_batch() and folio_pte_batch_ext()

From: David Hildenbrand
Date: Mon Jun 30 2025 - 05:19:30 EST


On 27.06.25 20:48, Lorenzo Stoakes wrote:
> On Fri, Jun 27, 2025 at 01:55:09PM +0200, David Hildenbrand wrote:
> > Many users (including upcoming ones) don't really need the flags etc,
> > and can live with a function call.
> >
> > So let's provide a basic, non-inlined folio_pte_batch().
>
> Hm, but why non-inlined, when it invokes an inlined function? Seems odd, no?

We want to always generate a function that performs as few runtime checks as possible. Essentially, optimize out the "flags" handling as much as possible.

In the case of folio_pte_batch(), where we won't use any flags, all such checks will be optimized out by the compiler.

So we get a single, specialized, non-inlined function.
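To illustrate the pattern (this is a simplified sketch, not the actual mm/ code -- the names pte_batch(), pte_batch_ext(), FPB_RESPECT_DIRTY and the dirty[] array are made up for illustration): an always-inlined "ext" variant takes a flags argument, and a non-inlined wrapper calls it with constant flags, so the compiler folds the flag checks away inside that one specialized copy.

```c
#include <stdbool.h>

typedef unsigned int fpb_t;
#define FPB_RESPECT_DIRTY ((fpb_t)1)	/* made-up flag for illustration */

/*
 * Always-inlined "ext" variant: when called with compile-time-constant
 * flags, the compiler can drop the untaken branches entirely.
 */
static inline __attribute__((always_inline))
unsigned int pte_batch_ext(const bool *dirty, unsigned int max_nr, fpb_t flags)
{
	unsigned int nr;

	for (nr = 0; nr < max_nr; nr++) {
		/* This check compiles away when flags is the constant 0. */
		if ((flags & FPB_RESPECT_DIRTY) && dirty[nr])
			break;
	}
	return nr;
}

/*
 * Non-inlined wrapper: a single specialized copy built with flags == 0,
 * so all flag checks are optimized out of its body.
 */
unsigned int pte_batch(const bool *dirty, unsigned int max_nr)
{
	return pte_batch_ext(dirty, max_nr, 0);
}
```

Callers that don't care about flags get one shared out-of-line function with no flag checks at runtime; performance-critical callers keep using the inlined variant directly.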



> > In zap_present_ptes(), where we care about performance, the compiler
> > already seems to generate a call to a common inlined folio_pte_batch()
> > variant, shared with fork() code. So calling the new non-inlined variant
> > should not make a difference.
> >
> > While at it, drop the "addr" parameter that is unused.
> >
> > Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
>
> Other than the query above + nit on name below, this is really nice!
>
> > ---
> >  mm/internal.h  | 11 ++++++++---
> >  mm/madvise.c   |  4 ++--
> >  mm/memory.c    |  6 ++----
> >  mm/mempolicy.c |  3 +--
> >  mm/mlock.c     |  3 +--
> >  mm/mremap.c    |  3 +--
> >  mm/rmap.c      |  3 +--
> >  mm/util.c      | 29 +++++++++++++++++++++++++++++
> >  8 files changed, 45 insertions(+), 17 deletions(-)

> > diff --git a/mm/internal.h b/mm/internal.h
> > index ca6590c6d9eab..6000b683f68ee 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -218,9 +218,8 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> >  }
> >
> >  /**
> > - * folio_pte_batch - detect a PTE batch for a large folio
> > + * folio_pte_batch_ext - detect a PTE batch for a large folio
> >   * @folio: The large folio to detect a PTE batch for.
> > - * @addr: The user virtual address the first page is mapped at.
> >   * @ptep: Page table pointer for the first entry.
> >   * @pte: Page table entry for the first page.
> >   * @max_nr: The maximum number of table entries to consider.
> > @@ -243,9 +242,12 @@ static inline pte_t __pte_batch_clear_ignored(pte_t pte, fpb_t flags)
> >   * must be limited by the caller so scanning cannot exceed a single VMA and
> >   * a single page table.
> >   *
> > + * This function will be inlined to optimize based on the input parameters;
> > + * consider using folio_pte_batch() instead if applicable.
> > + *
> >   * Return: the number of table entries in the batch.
> >   */
> > -static inline unsigned int folio_pte_batch(struct folio *folio, unsigned long addr,
> > +static inline unsigned int folio_pte_batch_ext(struct folio *folio,
> >  		pte_t *ptep, pte_t pte, unsigned int max_nr, fpb_t flags,
> >  		bool *any_writable, bool *any_young, bool *any_dirty)

> Sorry, this is really really annoying feedback :P but _ext() makes me think of
> page_ext and ugh :))
>
> Wonder if __folio_pte_batch() is better?
>
> This is, obviously, not a big deal (TM)

Obviously, I had that as part of the development, and decided against it at some point. :)

Yeah, _ext() is not common in MM yet, in contrast to other subsystems. The only user is indeed page_ext. On arm we seem to have set_pte_ext(). But it's really "page_ext", that's the problematic part, not "_ext" :P

No strong opinion, but I tend to dislike "__" here, because it often means "internal helper you're not supposed to use", which isn't really the case here.

E.g.,

alloc_frozen_pages() -> alloc_frozen_pages_noprof() -> __alloc_frozen_pages_noprof()

--
Cheers,

David / dhildenb