Re: [RFC PATCH 07/26] hugetlb: add hugetlb_pte to track HugeTLB page table entries
From: Mike Kravetz
Date: Tue Jul 12 2022 - 13:51:31 EST
On 07/12/22 10:42, Dr. David Alan Gilbert wrote:
> * Mike Kravetz (mike.kravetz@xxxxxxxxxx) wrote:
> > On 06/24/22 17:36, James Houghton wrote:
> > > After high-granularity mapping, page table entries for HugeTLB pages can
> > > be of any size/type. (For example, we can have a 1G page mapped with a
> > > mix of PMDs and PTEs.) This struct is to help keep track of a HugeTLB
> > > PTE after we have done a page table walk.
> >
> > This has been rolling around in my head.
> >
> > Will this first use case (live migration) actually make use of this
> > 'mixed mapping' model where hugetlb pages could be mapped at the PUD,
> > PMD and PTE level all within the same vma? I only understand the use
> > case from a high level. But, it seems that we would want to only want
> > to migrate PTE (or PMD) sized pages and not necessarily a mix.
>
> I suspect we would pick one size and use that size for all transfers
> when in postcopy; not sure if there are any side cases though.
>
> > The only reason I ask is because the code might be much simpler if all
> > mappings within a vma were of the same size. Of course, the
> > performance/latency of converting a large mapping may be prohibitively
> > expensive.
>
> Imagine we're migrating a few TB VM, backed by 1GB hugepages, I'm guessing it
> would be nice to clean up the PTE/PMDs for split 1GB pages as they're
> completed rather than having thousands of them for the whole VM.
> (I'm not sure if that is already doable)
Seems that would be doable by calling MADV_COLLAPSE for 1GB pages as
they are completed.
Thanks for the information on postcopy.
--
Mike Kravetz