[PATCH] mm: support large mapping building for tmpfs
From: Baolin Wang
Date: Tue Jul 01 2025 - 04:42:04 EST
After commit acd7ccb284b8 ("mm: shmem: add large folio support for tmpfs"),
tmpfs supports large folio allocation in general, not just PMD-sized large
folios.
However, when accessing tmpfs via mmap(), we still establish mappings at
base page granularity even though tmpfs supports large folios, which is
suboptimal. We can instead establish mappings according to the size of the
large folio. On one hand, this reduces the overhead of page faults; on the
other hand, it can leverage hardware optimizations such as contiguous PTEs
on the ARM architecture to reduce TLB misses.
Moreover, since the user has already passed the 'huge=' option when
mounting tmpfs to allow large folio allocation, mapping whole large folios
is expected and will not surprise users by inflating the RSS of the
process.
To support large mappings for tmpfs, besides checking the VMA limits and
PMD pagetable limits, we must also check that the linear page offset of
the VMA is order-aligned within the file.
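
To make the alignment requirement concrete, here is a minimal sketch
(illustration only, not part of the patch; can_map_large_folio() is a
hypothetical helper whose logic mirrors the IS_ALIGNED() check in the
diff below):

/*
 * Illustration only. The "linear page offset" of a VMA is the page
 * index that vm_start corresponds to within the file:
 *
 *	linear_off = (vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff
 *
 * A large folio can only be mapped by one block of contiguous PTEs if
 * this offset is a multiple of the folio size in pages; otherwise the
 * folio's pages and the pagetable entries do not line up. E.g. with
 * 64K folios (16 pages), mapping the file at offset 4K to a
 * 64K-aligned address gives a linear_off that is not a multiple of
 * 16, so we must fall back to per-page mapping.
 */
static bool can_map_large_folio(struct vm_area_struct *vma,
				struct folio *folio)
{
	unsigned long linear_off = (vma->vm_start >> PAGE_SHIFT) -
				   vma->vm_pgoff;

	return IS_ALIGNED(linear_off, folio_nr_pages(folio));
}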
Performance test:
I created a 1G tmpfs file, populated with 64K large folios, and accessed it
sequentially via mmap(). I observed a significant performance improvement:
Before the patch:
real 0m0.214s
user 0m0.012s
sys 0m0.203s
After the patch:
real 0m0.025s
user 0m0.000s
sys 0m0.024s
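
For reference, the test can be reproduced along these lines (a sketch,
not the original test program; the mount options, file path, and the
64K write size used to populate the file with 64K folios are
assumptions):

/*
 * fault_test.c - sketch of the measurement above, not the original
 * program. Assumed setup, populating the file with 64K writes so
 * that tmpfs allocates 64K large folios:
 *
 *   # mount -t tmpfs -o huge=always,size=2G tmpfs /mnt/tmpfs
 *   # dd if=/dev/zero of=/mnt/tmpfs/file bs=64k count=16384
 *   # time ./fault_test /mnt/tmpfs/file
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define FILE_SIZE	(1UL << 30)	/* 1G */

int main(int argc, char **argv)
{
	long page = sysconf(_SC_PAGESIZE);
	volatile char *p;
	size_t off;
	int fd;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <tmpfs file>\n", argv[0]);
		return 1;
	}

	fd = open(argv[1], O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	p = mmap(NULL, FILE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Sequential access: touch one byte per base page. */
	for (off = 0; off < FILE_SIZE; off += page)
		(void)p[off];

	munmap((void *)p, FILE_SIZE);
	close(fd);
	return 0;
}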
Signed-off-by: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
---
mm/memory.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 0f9b32a20e5b..6385a9385a9b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5383,10 +5383,10 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 
 	/*
 	 * Using per-page fault to maintain the uffd semantics, and same
-	 * approach also applies to non-anonymous-shmem faults to avoid
+	 * approach also applies to non shmem/tmpfs faults to avoid
 	 * inflating the RSS of the process.
 	 */
-	if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
+	if (!vma_is_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
 	    unlikely(needs_fallback)) {
 		nr_pages = 1;
 	} else if (nr_pages > 1) {
@@ -5395,15 +5395,20 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
 		pgoff_t vma_off = vmf->pgoff - vmf->vma->vm_pgoff;
 		/* The index of the entry in the pagetable for fault page. */
 		pgoff_t pte_off = pte_index(vmf->address);
+		unsigned long hpage_size = PAGE_SIZE << folio_order(folio);
 
 		/*
 		 * Fallback to per-page fault in case the folio size in page
-		 * cache beyond the VMA limits and PMD pagetable limits.
+		 * cache beyond the VMA limits or PMD pagetable limits. And
+		 * also check if the linear page offset of vma is order-aligned
+		 * within the file for tmpfs.
 		 */
 		if (unlikely(vma_off < idx ||
			     vma_off + (nr_pages - idx) > vma_pages(vma) ||
			     pte_off < idx ||
-			     pte_off + (nr_pages - idx) > PTRS_PER_PTE)) {
+			     pte_off + (nr_pages - idx) > PTRS_PER_PTE) ||
+			     !IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
+					 hpage_size >> PAGE_SHIFT)) {
 			nr_pages = 1;
 		} else {
 			/* Now we can set mappings for the whole large folio. */
--
2.43.5