Re: [PATCH v2 2/2] mm: Optimize mremap() by PTE batching

From: Anshuman Khandual
Date: Thu May 08 2025 - 02:31:28 EST

On 5/8/25 07:30, Zi Yan wrote:
> On 7 May 2025, at 2:02, Dev Jain wrote:
>
>> To use PTE batching, we want to determine whether the folio mapped by
>> the PTE is large, thus requiring the use of vm_normal_folio(). We want
>> to avoid the cost of vm_normal_folio() if the code path doesn't already
>> require the folio. For arm64, pte_batch_hint() does the job. To generalize
>> this hint, add a helper which will determine whether two consecutive PTEs
>> point to consecutive PFNs, in which case there is a high probability that
>> the underlying folio is large.
>> Next, use folio_pte_batch() to optimize move_ptes(). On arm64, if the ptes
>> are painted with the contig bit, then ptep_get() will iterate through all 16
>> entries to collect a/d bits. Hence this optimization will result in a 16x
>> reduction in the number of ptep_get() calls. Next, ptep_get_and_clear()
>> will eventually call contpte_try_unfold() on every contig block, thus
>> flushing the TLB for the complete large folio range. Instead, use
>> get_and_clear_full_ptes() so as to elide TLBIs on each contig block, and only
>> do them on the starting and ending contig block.
>>
>> Signed-off-by: Dev Jain <dev.jain@xxxxxxx>
>> ---
>> include/linux/pgtable.h | 29 +++++++++++++++++++++++++++++
>> mm/mremap.c | 37 ++++++++++++++++++++++++++++++-------
>> 2 files changed, 59 insertions(+), 7 deletions(-)
>>
>> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
>> index b50447ef1c92..38dab1f562ed 100644
>> --- a/include/linux/pgtable.h
>> +++ b/include/linux/pgtable.h
>> @@ -369,6 +369,35 @@ static inline pgd_t pgdp_get(pgd_t *pgdp)
>> }
>> #endif
>>
>> +/**
>> + * maybe_contiguous_pte_pfns - Hint whether the page mapped by the pte belongs
>> + * to a large folio.
>> + * @ptep: Pointer to the page table entry.
>> + * @pte: The page table entry.
>> + *
>> + * This helper is invoked when the caller wants to batch over a set of ptes
>> + * mapping a large folio, but the concerned code path does not already have
>> + * the folio. We want to avoid the cost of vm_normal_folio() only to find that
>> + * the underlying folio was small; i.e. keep the small folio case as fast as
>> + * possible.
>> + *
>> + * The caller must ensure that ptep + 1 exists.
>
> ptep points to an entry in a PTE page. As long as it is not pointing
> to the last entry, ptep+1 should always exist. With PTRS_PER_PTE and
> sizeof(pte_t), you can check ptep address to figure out whether it
> is the last entry of a PTE page, right? Let me know if I misunderstand
> anything.

Agreed, this not-the-last-pte-entry check is definitely required here, if
only to prevent a potential access to an unmapped page. But I also agree
with Andrew that unless there are other callers, this helper should be
contained in the call site itself (mm/mremap.c) with a good explanation.