[PATCH v2 0/9] mm: vm_normal_page*() improvements

From: David Hildenbrand
Date: Thu Jul 17 2025 - 07:52:33 EST


Based on mm/mm-new from today, which contains [2].

Clean up and unify vm_normal_page_*() handling, also marking the
huge zero folio as special in the PMD. Add and use vm_normal_page_pud()
and clean up that XEN vm_ops->find_special_page thingy.

There are plans to use vm_normal_page_*() more widely soon.

Briefly tested on UML (making sure vm_normal_page() still works as expected
without pte_special() support) and on x86-64 with a bunch of tests.
Cross-compiled for a variety of weird archs.

[1] https://lkml.kernel.org/r/20250617154345.2494405-1-david@xxxxxxxxxx
[2] https://lkml.kernel.org/r/cover.1752499009.git.luizcap@xxxxxxxxxx

v1 -> v2:
* "mm/memory: convert print_bad_pte() to print_bad_page_map()"
  -> Don't use pgdp_get(), because it's broken on some arm configs
  -> Extend patch description
  -> Don't use pmd_val(pmdp_get()), because that doesn't work on some
     m68k configs
* Added RBs

RFC -> v1:
* Dropped the highest_memmap_pfn removal stuff and instead added
  "mm/memory: convert print_bad_pte() to print_bad_page_map()"
* Dropped "mm: compare pfns only if the entry is present when inserting
  pfns/pages" for now; will probably clean that up separately.
* Dropped "mm: remove "horrible special case to handle copy-on-write
  behaviour""; it and "mm: drop addr parameter from vm_normal_*_pmd()"
  will require more thought
* "mm/huge_memory: support huge zero folio in vmf_insert_folio_pmd()"
  -> Extend patch description.
* "fs/dax: use vmf_insert_folio_pmd() to insert the huge zero folio"
  -> Extend patch description.
* "mm/huge_memory: mark PMD mappings of the huge zero folio special"
  -> Remove comment from vm_normal_page_pmd().
* "mm/memory: factor out common code from vm_normal_page_*()"
  -> Adjust to print_bad_page_map()/highest_memmap_pfn changes.
  -> Add proper kernel doc to all involved functions.
* "mm: introduce and use vm_normal_page_pud()"
  -> Adjust to print_bad_page_map() changes.

Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Juergen Gross <jgross@xxxxxxxx>
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: Alexander Viro <viro@xxxxxxxxxxxxxxxxxx>
Cc: Christian Brauner <brauner@xxxxxxxxxx>
Cc: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx>
Cc: "Liam R. Howlett" <Liam.Howlett@xxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Mike Rapoport <rppt@xxxxxxxxxx>
Cc: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Zi Yan <ziy@xxxxxxxxxx>
Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
Cc: Nico Pache <npache@xxxxxxxxxx>
Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
Cc: Dev Jain <dev.jain@xxxxxxx>
Cc: Barry Song <baohua@xxxxxxxxxx>
Cc: Jann Horn <jannh@xxxxxxxxxx>
Cc: Pedro Falcato <pfalcato@xxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Oscar Salvador <osalvador@xxxxxxx>
Cc: Lance Yang <lance.yang@xxxxxxxxx>

David Hildenbrand (9):
  mm/huge_memory: move more common code into insert_pmd()
  mm/huge_memory: move more common code into insert_pud()
  mm/huge_memory: support huge zero folio in vmf_insert_folio_pmd()
  fs/dax: use vmf_insert_folio_pmd() to insert the huge zero folio
  mm/huge_memory: mark PMD mappings of the huge zero folio special
  mm/memory: convert print_bad_pte() to print_bad_page_map()
  mm/memory: factor out common code from vm_normal_page_*()
  mm: introduce and use vm_normal_page_pud()
  mm: rename vm_ops->find_special_page() to vm_ops->find_normal_page()

 drivers/xen/Kconfig              |   1 +
 drivers/xen/gntdev.c             |   5 +-
 fs/dax.c                         |  47 +----
 include/linux/mm.h               |  20 +-
 mm/Kconfig                       |   2 +
 mm/huge_memory.c                 | 119 ++++-------
 mm/memory.c                      | 346 ++++++++++++++++++++++---------
 mm/pagewalk.c                    |  20 +-
 tools/testing/vma/vma_internal.h |  18 +-
 9 files changed, 343 insertions(+), 235 deletions(-)


base-commit: 760b462b3921c5dc8bfa151d2d27a944e4e96081
--
2.50.1