Re: [PATCH v2] mm: hugetlb: optionally allocate gigantic hugepages using cma

From: Rik van Riel
Date: Sun Mar 15 2020 - 21:08:59 EST


On Tue, 2020-03-10 at 18:37 +0100, Michal Hocko wrote:
> On Tue 10-03-20 10:25:59, Roman Gushchin wrote:
> > Well, so far I was focused on a particular case when the target cma
> > size
> > is significantly smaller than the total RAM size (~5-10%). What is
> > the right
> > thing to do here? Fallback to the current behavior if the requested
> > size is
> > more than x% of total memory? 1/2? How do you think?
>
> I would start by excluding restricted kernel zones (<ZONE_NORMAL).
> Cutting off 1G of ZONE_DMA32 might be a real problem.

It looks like memblock_find_in_range_node(), which
is called from memblock_alloc_range_nid(), will already
do top-down allocation inside each node.

However, looking at that code some more, it has some
limitations that we might not want. Specifically, if
we want to allocate a 16GB CMA area, but the node in
question only has a 15GB available area in one spot
and a 1GB available area in another spot, due to
memory holes, the allocation will fail.

I wonder if it makes sense to have separate cma_declare_contiguous
calls for each 1GB page we set up. That way it will be easier
to round-robin between the ZONE_NORMAL zones in each node, and
also to avoid ZONE_DMA32 and other special zones on systems
where those are a relatively small part of memory.

I'll whip up a patch to do that.

--
All Rights Reversed.
