Re: [v2 02/11] mm/thp: zone_device awareness in THP handling code
From: Zi Yan
Date: Wed Jul 30 2025 - 07:29:25 EST
On 30 Jul 2025, at 7:16, Mika Penttilä wrote:
> Hi,
>
> On 7/30/25 12:21, Balbir Singh wrote:
>> Make the THP handling code in the mm subsystem aware of zone device
>> pages. Although the code is designed to be generic when it comes to
>> handling the splitting of pages, it currently only works for THP page
>> sizes corresponding to HPAGE_PMD_NR.
>>
>> Modify page_vma_mapped_walk() to return true when a zone device huge
>> entry is present, enabling try_to_migrate() and other migration paths
>> to process the entry appropriately. page_vma_mapped_walk() will return
>> true for zone device private large folios only when
>> PVMW_THP_DEVICE_PRIVATE is passed. This prevents callers that do not
>> deal with zone device private pages from having to add awareness of
>> them. The key callback that needs this flag is try_to_migrate_one().
>> The other callbacks (page idle, DAMON) use it for setting young/dirty
>> bits, which is not significant when it comes to pmd level bit
>> harvesting.
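
As a rough sketch, the flag gating in page_vma_mapped_walk() could look
something like the following (the flag name comes from this patch; the
exact placement and locals are assumptions, not the actual diff):

	if (unlikely(is_swap_pmd(pmde))) {
		swp_entry_t entry = pmd_to_swp_entry(pmde);

		/* Only report device private PMDs when explicitly asked. */
		if (is_device_private_entry(entry) &&
		    !(pvmw->flags & PVMW_THP_DEVICE_PRIVATE))
			return false;
	}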
>>
>> pmd_pfn() does not work well with zone device entries; use
>> pfn_pmd_entry_to_swap() instead for checking and comparing zone device
>> entries.
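
A sketch of the idea (assuming the helper wraps the existing swap-entry
accessors; pfn_pmd_entry_to_swap() itself is from the patch):

	unsigned long pfn;

	if (is_swap_pmd(pmd))
		/* non-present entry: the pfn lives in the swap entry */
		pfn = swp_offset_pfn(pmd_to_swp_entry(pmd));
	else
		pfn = pmd_pfn(pmd);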
>>
>> Zone device private entries that are split via munmap go through a pmd
>> split, but also need to go through a folio split. Deferred split does
>> not work if a fault is encountered, because fault handling involves
>> migration entries (via folio_migrate_mapping()) and the folio sizes
>> are expected to be the same there. This introduces the need to split
>> the folio while handling the pmd split. Because the folio is still
>> mapped, calling folio_split() would cause lock recursion, so the
>> __split_unmapped_folio() code is used via a new wrapper,
>> split_device_private_folio(), which skips the checks around
>> folio->mapping and the swapcache, as well as the need to unmap and
>> remap the folio.
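
So the pmd-split path could, roughly, gain a call like this (a sketch
under the assumption that the folio is locked and referenced at this
point; the exact condition and call site in the patch may differ):

	/* split the device private folio along with its pmd */
	if (folio_is_device_private(folio))
		ret = split_device_private_folio(folio);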
>>
>> Cc: Karol Herbst <kherbst@xxxxxxxxxx>
>> Cc: Lyude Paul <lyude@xxxxxxxxxx>
>> Cc: Danilo Krummrich <dakr@xxxxxxxxxx>
>> Cc: David Airlie <airlied@xxxxxxxxx>
>> Cc: Simona Vetter <simona@xxxxxxxx>
>> Cc: "Jérôme Glisse" <jglisse@xxxxxxxxxx>
>> Cc: Shuah Khan <shuah@xxxxxxxxxx>
>> Cc: David Hildenbrand <david@xxxxxxxxxx>
>> Cc: Barry Song <baohua@xxxxxxxxxx>
>> Cc: Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx>
>> Cc: Ryan Roberts <ryan.roberts@xxxxxxx>
>> Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
>> Cc: Peter Xu <peterx@xxxxxxxxxx>
>> Cc: Zi Yan <ziy@xxxxxxxxxx>
>> Cc: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
>> Cc: Jane Chu <jane.chu@xxxxxxxxxx>
>> Cc: Alistair Popple <apopple@xxxxxxxxxx>
>> Cc: Donet Tom <donettom@xxxxxxxxxxxxx>
>> Cc: Mika Penttilä <mpenttil@xxxxxxxxxx>
>> Cc: Matthew Brost <matthew.brost@xxxxxxxxx>
>> Cc: Francois Dugast <francois.dugast@xxxxxxxxx>
>> Cc: Ralph Campbell <rcampbell@xxxxxxxxxx>
>>
>> Signed-off-by: Matthew Brost <matthew.brost@xxxxxxxxx>
>> Signed-off-by: Balbir Singh <balbirs@xxxxxxxxxx>
>> ---
>> include/linux/huge_mm.h | 1 +
>> include/linux/rmap.h | 2 +
>> include/linux/swapops.h | 17 +++
>> mm/huge_memory.c | 268 +++++++++++++++++++++++++++++++++-------
>> mm/page_vma_mapped.c | 13 +-
>> mm/pgtable-generic.c | 6 +
>> mm/rmap.c | 22 +++-
>> 7 files changed, 278 insertions(+), 51 deletions(-)
>>
<snip>
>> +/**
>> + * split_device_private_folio - split a huge device private folio into
>> + * smaller pages (of order 0), currently used by migrate_device logic to
>> + * split folios for pages that are partially mapped
>> + *
>> + * @folio: the folio to split
>> + *
>> + * The caller has to hold the folio_lock and a reference via folio_get
>> + */
>> +int split_device_private_folio(struct folio *folio)
>> +{
>> + struct folio *end_folio = folio_next(folio);
>> + struct folio *new_folio;
>> + int ret = 0;
>> +
>> + /*
>> + * Split the folio now. In the case of device
>> + * private pages, this path is executed when
>> + * the pmd is split and, since freeze is not
>> + * true, the folio would otherwise end up on
>> + * the deferred split queue.
>> + *
>> + * With device private pages, deferred splits of
>> + * folios should be handled here to prevent partial
>> + * unmaps from causing issues later on in migration
>> + * and fault handling flows.
>> + */
>> + folio_ref_freeze(folio, 1 + folio_expected_ref_count(folio));
>
> Why can't this freeze fail? The folio is still mapped afaics, why can't there be other references in addition to the caller?
Based on my off-list conversation with Balbir, the folio is unmapped on
the CPU side but still mapped on the device. folio_ref_freeze() is not
aware of the device side mapping.
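
A sketch of the accounting being assumed here (not code from the patch;
the error handling is hypothetical):

	/*
	 * folio_expected_ref_count() only counts CPU-visible references
	 * (CPU page table mappings, swapcache, etc.); device side
	 * mappings take no extra CPU-visible reference.  With the folio
	 * unmapped on the CPU side, the only remaining extra reference
	 * should be the one the caller holds:
	 */
	if (!folio_ref_freeze(folio, 1 + folio_expected_ref_count(folio)))
		return -EAGAIN; /* hypothetical: a racing reference appeared */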
>
>> + ret = __split_unmapped_folio(folio, 0, &folio->page, NULL, NULL, true);
>
> It is confusing to call __split_unmapped_folio() if the folio is mapped...
From the driver's point of view, __split_unmapped_folio() should
probably be renamed to __split_cpu_unmapped_folio(), since it only
deals with the CPU side folio metadata for the split.
Best Regards,
Yan, Zi