Re: [PATCH v3 12/16] mm: remember exclusively mapped anonymous pages with PG_anon_exclusive

From: David Hildenbrand
Date: Wed Apr 13 2022 - 12:40:06 EST


On 13.04.22 18:28, Vlastimil Babka wrote:
> On 3/29/22 18:04, David Hildenbrand wrote:
>> Let's mark exclusively mapped anonymous pages with PG_anon_exclusive as
>> exclusive, and use that information to make GUP pins reliable and stay
>> consistent with the page mapped into the page table even if the
>> page table entry gets write-protected.
>>
>> With that information at hand, we can extend our COW logic to always
>> reuse anonymous pages that are exclusive. For anonymous pages that
>> might be shared, the existing logic applies.
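
To spell out the reuse rule with a rough sketch (the helper name is made
up for illustration; this is not the actual patch code):

    static bool can_reuse_cow_page(struct page *page)
    {
            /* Exclusive anon pages can always be reused without copying. */
            if (PageAnon(page) && PageAnonExclusive(page))
                    return true;
            /* Might still be shared: fall back to the existing COW logic. */
            return false;
    }
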
>>
>> As already documented, PG_anon_exclusive is usually only expressive in
>> combination with a page table entry. PTE-mapped vs. PMD-mapped
>> anonymous pages especially require more thought. Some examples: due to
>> mremap() we can easily have a single compound page PTE-mapped into
>> multiple page tables exclusively in a single process -- multiple page
>> table locks apply. Further, due to MADV_WIPEONFORK we might not
>> necessarily write-protect all PTEs, and only some subpages might be
>> pinned. Long story short: once PTE-mapped, we have to track exclusivity
>> per sub-page; until then, we can just track it for the compound page in
>> the head page and avoid having to update a whole bunch of subpages all
>> the time for a simple PMD mapping of a THP.
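
To make the head-page handover concrete: on a PMD-to-PTE split, the bit
tracked in the head page has to be handed down to each subpage. Very
roughly (illustrative only, not the exact split code):

    /* PMD->PTE split: propagate exclusivity to each subpage. */
    anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
    for (i = 0; i < HPAGE_PMD_NR; i++) {
            if (anon_exclusive)
                    SetPageAnonExclusive(page + i);
    }
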
>>
>> For simplicity, this commit mostly talks about "anonymous pages", while
>> it's for THP actually "the part of an anonymous folio referenced via
>> a page table entry".
>>
>> To avoid spilling PG_anon_exclusive code all over the mm code-base, we
>> let the anon rmap code handle all the PG_anon_exclusive logic it can
>> easily handle.
>>
>> If a writable, present page table entry points at an anonymous (sub)page,
>> that (sub)page must be PG_anon_exclusive. If GUP wants to take a reliable
>> pin (FOLL_PIN) on an anonymous page referenced via a present
>> page table entry, it must only pin if PG_anon_exclusive is set for the
>> mapped (sub)page.
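
As a rough sketch of that rule (the helper name is made up for
illustration and is not the actual patch code):

    static bool may_pin_anon_page(struct page *page, unsigned int flags)
    {
            if (!(flags & FOLL_PIN) || !PageAnon(page))
                    return true;
            /* Only take a reliable pin if the mapped (sub)page is exclusive. */
            return PageAnonExclusive(page);
    }
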
>>
>> This commit doesn't adjust GUP, so this is only implicitly handled for
>> FOLL_WRITE, follow-up commits will teach GUP to also respect it for
>> FOLL_PIN without !FOLL_WRITE, to make all GUP pins of anonymous pages
>
> without FOLL_WRITE ?

Indeed, thanks.

>
>> fully reliable.
>
> <snip>
>
>> @@ -202,11 +203,26 @@ static inline int is_writable_migration_entry(swp_entry_t entry)
>>  	return unlikely(swp_type(entry) == SWP_MIGRATION_WRITE);
>>  }
>>
>> +static inline int is_readable_migration_entry(swp_entry_t entry)
>> +{
>> +	return unlikely(swp_type(entry) == SWP_MIGRATION_READ);
>> +}
>> +
>> +static inline int is_readable_exclusive_migration_entry(swp_entry_t entry)
>> +{
>> +	return unlikely(swp_type(entry) == SWP_MIGRATION_READ_EXCLUSIVE);
>> +}
>
> This one seems to be missing a !CONFIG_MIGRATION counterpart. That said,
> the only caller, __split_huge_pmd_locked(), probably only exists with
> CONFIG_MIGRATION indirectly anyway, so it's not an immediate issue.
> (THP selects COMPACTION, which selects MIGRATION.)

So far no builds have bailed out. And yes, I think it's for the reason
you stated: THP without compaction would be a lost bet.
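
If we wanted to play it safe, a stub mirroring the existing
!CONFIG_MIGRATION helpers would do, something like:

    static inline int is_readable_exclusive_migration_entry(swp_entry_t entry)
    {
            return 0;
    }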

>
> <snip>
>
>> @@ -3035,10 +3083,19 @@ void set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
>>
>>  	flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);
>>  	pmdval = pmdp_invalidate(vma, address, pvmw->pmd);
>> +
>> +	anon_exclusive = PageAnon(page) && PageAnonExclusive(page);
>> +	if (anon_exclusive && page_try_share_anon_rmap(page)) {
>> +		set_pmd_at(mm, address, pvmw->pmd, pmdval);
>> +		return;
>
> I am admittedly not too familiar with this code, but it looks like this
> means we fail to migrate the THP, right? But we don't seem to be telling
> the caller, which is try_to_migrate_one(), so it will continue rather than
> terminate the walk and return false?

Right, we're not returning "false". Returning "false" would be an
optimization to make rmap_walk_anon() fail faster.

But, after all, the THP is exclusive (-> single mapping), so
anon_vma_interval_tree_foreach() would most probably not have a lot of
work to do either way, I'd assume?

In any case, once we return from try_to_migrate(), the page will still
be mapped.
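
For context, a condensed sketch of why the caller copes (illustrative,
not the exact mm/migrate.c code): migration only moves the contents once
the page is fully unmapped, so a page left mapped simply fails to
migrate:

    if (page_mapped(page))
            try_to_migrate(page_folio(page), 0);

    /* Only move the contents once every mapping is gone. */
    if (!page_mapped(page))
            rc = move_to_new_page(newpage, page, mode);
    /* Otherwise migration of this page fails and it stays mapped. */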

--
Thanks,

David / dhildenb