Re: [PATCH v2] arm64: mm: reserve hugetlb CMA after numa_init

From: Roman Gushchin
Date: Wed Jun 17 2020 - 14:22:18 EST


On Wed, Jun 17, 2020 at 11:38:03AM +0000, Song Bao Hua (Barry Song) wrote:
>
>
> > -----Original Message-----
> > From: Will Deacon [mailto:will@xxxxxxxxxx]
> > Sent: Wednesday, June 17, 2020 10:18 PM
> > To: Song Bao Hua (Barry Song) <song.bao.hua@xxxxxxxxxxxxx>
> > Cc: catalin.marinas@xxxxxxx; nsaenzjulienne@xxxxxxx;
> > steve.capper@xxxxxxx; rppt@xxxxxxxxxxxxx; akpm@xxxxxxxxxxxxxxxxxxxx;
> > linux-arm-kernel@xxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; Linuxarm
> > <linuxarm@xxxxxxxxxx>; Matthias Brugger <matthias.bgg@xxxxxxxxx>;
> > Roman Gushchin <guro@xxxxxx>
> > Subject: Re: [PATCH v2] arm64: mm: reserve hugetlb CMA after numa_init
> >
> > On Wed, Jun 17, 2020 at 10:19:24AM +1200, Barry Song wrote:
> > > hugetlb_cma_reserve() is called in the wrong place: numa_init has not been
> > > done yet, so all the reserved memory ends up on node 0.
> > >
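For readers who aren't familiar with the call path (I'm paraphrasing
mm/hugetlb.c from memory here, not quoting it, and the variable names are
approximate): hugetlb_cma_reserve() splits the requested hugetlb_cma= size
across the nodes that are online at the time of the call, roughly like this:

	for_each_node_state(nid, N_ONLINE) {
		size = min(per_node, hugetlb_cma_size - reserved);
		size = round_up(size, PAGE_SIZE << order);

		/* declare a per-node CMA area on node 'nid' */
		res = cma_declare_contiguous_nid(0, size, 0, PAGE_SIZE << order,
						 0, false, "hugetlb",
						 &hugetlb_cma[nid], nid);
		if (!res)
			reserved += size;
	}

Before arm64_numa_init() only node 0 is online, so the loop runs once and the
whole reservation lands on node 0, which is exactly what the patch fixes.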
> > > Fixes: cf11e85fc08c ("mm: hugetlb: optionally allocate gigantic hugepages
> > using cma")
> >
> > Damn, wasn't CC'd on that :/
> >
> > > Cc: Matthias Brugger <matthias.bgg@xxxxxxxxx>
> > > Acked-by: Roman Gushchin <guro@xxxxxx>
> > > Signed-off-by: Barry Song <song.bao.hua@xxxxxxxxxxxxx>
> > > ---
> > > -v2: add Fixes tag according to Matthias Brugger's comment
> > >
> > > arch/arm64/mm/init.c | 10 +++++-----
> > > 1 file changed, 5 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > > index e631e6425165..41914b483d54 100644
> > > --- a/arch/arm64/mm/init.c
> > > +++ b/arch/arm64/mm/init.c
> > > @@ -404,11 +404,6 @@ void __init arm64_memblock_init(void)
> > > high_memory = __va(memblock_end_of_DRAM() - 1) + 1;
> > >
> > > dma_contiguous_reserve(arm64_dma32_phys_limit);
> > > -
> > > -#ifdef CONFIG_ARM64_4K_PAGES
> > > - hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> > > -#endif
> >
> > Why is this dependent on CONFIG_ARM64_4K_PAGES? We unconditionally
> > select ARCH_HAS_GIGANTIC_PAGE, so this seems unnecessary.
>
> Roman, would you like to answer this question? Have you found any problem when
> the system doesn't use 4K pages?

No, I was just following the code in arch/arm64/mm/hugetlbpage.c, where everything
related to PUD-sized pages is guarded by CONFIG_ARM64_4K_PAGES.
Actually, I did all my testing on x86-64; I don't even have any arm hardware.

I'm totally fine with removing this #ifdef if it's not needed.
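For anyone wondering why that guard exists in hugetlbpage.c at all: as far as I
understand (worth double-checking against the arm64 page-table headers),
PUD-sized block mappings are only available with the 4K granule, and
PUD_SHIFT - PAGE_SHIFT only corresponds to the 1GB gigantic page size in that
configuration, e.g.:

	/*
	 * With 4K pages and 4-level tables: PAGE_SHIFT = 12, PUD_SHIFT = 30,
	 * so the order passed below is 18 and the CMA area is sized and
	 * aligned for 2^18 * 4KB = 1GB gigantic pages.
	 *
	 * With 16K or 64K pages the PUD level covers a much larger range
	 * (or is folded into the PGD), so the same expression would ask for
	 * a far bigger alignment -- hence the CONFIG_ARM64_4K_PAGES guard
	 * around the PUD-sized hstate in hugetlbpage.c.
	 */
	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);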

Thanks!

>
> >
> > > -
> > > }
> > >
> > > void __init bootmem_init(void)
> > > @@ -424,6 +419,11 @@ void __init bootmem_init(void)
> > > min_low_pfn = min;
> > >
> > > arm64_numa_init();
> > > +
> > > +#ifdef CONFIG_ARM64_4K_PAGES
> > > + hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> > > +#endif
> >
> > A comment here wouldn't hurt, as it does look a lot more natural next
> > to dma_contiguous_reserve().
>
> I will add a comment here.
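Something along these lines would work (the comment wording below is only a
suggestion, not taken from any existing patch):

	arm64_numa_init();

	/*
	 * must be done after arm64_numa_init(): hugetlb_cma_reserve() walks
	 * the online nodes and places one CMA area per node, so calling it
	 * before the NUMA setup puts everything on node 0
	 */
#ifdef CONFIG_ARM64_4K_PAGES
	hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
#endif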
>
> >
> > Will
>
> barry