Re: [External] Re: [PATCH v13 05/12] mm: hugetlb: allocate the vmemmap pages associated with each HugeTLB page

From: Muchun Song
Date: Fri Jan 29 2021 - 01:58:11 EST


On Fri, Jan 29, 2021 at 9:04 AM Mike Kravetz <mike.kravetz@xxxxxxxxxx> wrote:
>
> On 1/28/21 4:37 AM, Muchun Song wrote:
> > On Wed, Jan 27, 2021 at 6:36 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
> >>
> >> On 26.01.21 16:56, David Hildenbrand wrote:
> >>> On 26.01.21 16:34, Oscar Salvador wrote:
> >>>> On Tue, Jan 26, 2021 at 04:10:53PM +0100, David Hildenbrand wrote:
> >>>>> The real issue seems to be discarding the vmemmap on any memory that has
> >>>>> movability constraints - CMA and ZONE_MOVABLE; otherwise, as discussed, we
> >>>>> can reuse parts of the thingy we're freeing for the vmemmap. Not that it
> >>>>> would be ideal: that once-a-huge-page thing will never ever be a huge page
> >>>>> again - but if it helps with OOM in corner cases, sure.
> >>>>
> >>>> Yes, that is one way, but I am not sure how hard it would be to implement.
> >>>> Plus there is the fact that, as you pointed out, once that memory is used
> >>>> for the vmemmap array, we cannot use it again.
> >>>> So we would end up fragmenting the memory eventually?
> >>>>
> >>>>> Possible simplification: don't perform the optimization for now with free
> >>>>> huge pages residing on ZONE_MOVABLE or CMA. Certainly not perfect: what
> >>>>> happens when migrating a huge page from ZONE_NORMAL to (ZONE_MOVABLE|CMA)?
> >>>>
> >>>> But if we do not allow those pages to be in ZONE_MOVABLE or CMA, there is
> >>>> no point in migrating them, right?
> >>>
> >>> Well, memory unplug "could" still work and migrate them and
> >>> alloc_contig_range() "could in the future" still want to migrate them
> >>> (virtio-mem, gigantic pages, powernv memtrace). Especially, the latter
> >>> two don't work with ZONE_MOVABLE/CMA. But, I mean, it would be fair
> >>> enough to say "there are no guarantees for
> >>> alloc_contig_range()/offline_pages() with ZONE_NORMAL, so we can break
> >>> these use cases when a magic switch is flipped, making these pages
> >>> non-migratable".
> >>>
> >>> I assume compaction doesn't care about huge pages either way, not sure
> >>> about numa balancing etc.
> >>>
> >>>
> >>> However, note that there is a fundamental issue with any approach that
> >>> allocates a significant amount of unmovable memory for user-space
> >>> purposes (excluding CMA allocations for unmovable stuff, CMA is
> >>> special): pairing it with ZONE_MOVABLE becomes very tricky as your user
> >>> space might just end up eating all kernel memory, although the system
> >>> still looks like there is plenty of free memory residing in
> >>> ZONE_MOVABLE. I mentioned that in the context of secretmem in a reduced
> >>> form as well.
> >>>
> >>> We theoretically have that issue with dynamic allocation of gigantic
> >>> pages, but it's something a user explicitly/rarely triggers and it can
> >>> be documented to cause problems well enough. We'll have the same issue
> >>> with GUP+ZONE_MOVABLE that Pavel is fixing right now - but GUP is
> >>> already known to be broken in various ways and has to be treated in
> >>> a special way. I'd like to limit the nasty corner cases.
> >>>
> >>> Of course, we could have smart rules like "don't online memory to
> >>> ZONE_MOVABLE automatically when the magic switch is active". That's just
> >>> ugly, but could work.
> >>>
> >>
> >> Extending on that, I just discovered that only x86-64, ppc64, and arm64
> >> really support hugepage migration.
> >>
> >> Maybe one approach with the "magic switch" really would be to disable
> >> hugepage migration completely in hugepage_migration_supported(), and
> >> consequently make hugepage_movable_supported() always return false.
> >>
> >> Huge pages would never get placed onto ZONE_MOVABLE/CMA and cannot be
> >> migrated. The problem I describe would apply (careful with using
> >> ZONE_MOVABLE), but well, it can at least be documented.
> >
> > Thanks for your explanation.
> >
> > All of this thinking seems to be driven by the possibility of hitting OOM. :-(
>
> Yes. Or, I think about it as the problem of not being able to dissolve (free
> to buddy) a hugetlb page. We cannot dissolve because we cannot allocate
> vmemmap for all subpages.
>
> > In order to move forward and free the hugepage, we should add the
> > restrictions below.
> >
> > 1. Only free hugepages which are allocated from ZONE_NORMAL.
> Correction: only vmemmap-optimize hugepages in ZONE_NORMAL.
>
> > 2. Disable hugepage migration when this feature is enabled.
>
> I am not sure if we want to fully disable migration. I may be misunderstanding,
> but the thought was to prevent migration between some movability types. It
> seems we should be able to migrate from ZONE_NORMAL to ZONE_NORMAL.
>
> Also, if we do allow huge pages without vmemmap optimization in MOVABLE or CMA,
> then we should allow those to be migrated to NORMAL? Or is there a reason why
> we should prevent that?
>
> > 3. Use GFP_ATOMIC to allocate vmemmap pages first (it can reduce memory
> > fragmentation); if that fails, use part of the hugepage itself for the
> > remapping.
>
> I honestly am not sure about this. This would only happen for pages in
> NORMAL. The only time using part of the huge page for vmemmap would help is
> if we are trying to dissolve huge pages to free up memory for other uses.
>
> > What's your opinion about this? Should we take this approach?
>
> I think trying to solve all the issues that could happen as the result of
> not being able to dissolve a hugetlb page has made this extremely complex.
> I know this is something we need to address/solve. We do not want to add
> more unexpected behavior in corner cases. However, I cannot help but think
> about similar issues today. For example, if a huge page is in use in
> ZONE_MOVABLE or CMA, there is no guarantee that it can be migrated today.
> Correct? We may need to allocate another huge page as the target of the
> migration, and there is no guarantee we can do that.

Yeah. Adding more restrictions makes things more complex. As you
and Oscar said, refusing to free the hugepage when the allocation of
vmemmap pages fails may be the easy way for now.
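
Something like the following completely untested sketch, where
alloc_huge_page_vmemmap() stands in for the helper added earlier in
this series (the exact name, return convention, and GFP flags used
are assumptions on my part):

/*
 * Sketch only: dissolve path that refuses to free a HugeTLB page
 * when its vmemmap pages cannot be reallocated.
 */
static int dissolve_free_huge_page_sketch(struct hstate *h, struct page *page)
{
	/*
	 * Reallocate the vmemmap pages discarded by the optimization
	 * before the huge page can be returned to the buddy allocator.
	 */
	if (alloc_huge_page_vmemmap(h, page))
		return -ENOMEM;	/* keep the huge page intact */

	/* All struct pages are backed again; safe to free to buddy. */
	update_and_free_page(h, page);
	return 0;
}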

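For comparison, David's "magic switch" above could be as small as
gating hugepage_migration_supported() on the feature flag;
hugepage_movable_supported() already returns false when migration is
unsupported, so huge pages would then stay out of ZONE_MOVABLE/CMA.
A sketch, assuming the hugetlb_free_vmemmap_enabled flag from this
series:

static inline bool hugepage_migration_supported(struct hstate *h)
{
	/* With the vmemmap optimization active, never migrate. */
	if (hugetlb_free_vmemmap_enabled)
		return false;
	return arch_hugetlb_migration_supported(h);
}
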
> --
> Mike Kravetz