Re: [PATCH v3] mm/sparse.c: Use kvmalloc_node/kvfree to alloc/free memmap for the classic sparse

From: Baoquan He
Date: Fri Mar 13 2020 - 20:53:49 EST


On 03/13/20 at 03:56pm, Michal Hocko wrote:
> On Thu 12-03-20 22:17:49, Baoquan He wrote:
> > This change makes populate_section_memmap()/depopulate_section_memmap()
> > much simpler.
>
> Not only that; you should make it more explicit. It also tries to allocate
> memmaps from the target numa node, so this is a functional change. I
> would prefer to have that in a separate patch in case we hit some weird
> NUMA setups which would choke on memory less nodes and similar horrors.

Yes, splitting sounds more reasonable; I would love to do that. One
question: I noticed Andrew has already picked this up into the -mm tree.
If I post a new patchset containing these two small patches, would it be
convenient to drop the old one and merge these two instead?

Sorry, I don't know very well how this works in -mm maintenance.
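For context, the conversion in the patch below leans on the documented
semantics of kvmalloc_node()/kvfree(): kvmalloc first attempts a physically
contiguous kmalloc-style allocation and falls back to vmalloc when that
fails, and kvfree uses is_vmalloc_addr() to pick the matching free path --
exactly the logic the old open-coded version implemented by hand. A minimal
userspace sketch of that fallback idea (try_contig_alloc, kvmalloc_sketch,
and CONTIG_LIMIT are hypothetical stand-ins for illustration, not kernel
APIs):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical threshold standing in for the page allocator refusing
 * a high-order contiguous allocation under memory fragmentation. */
#define CONTIG_LIMIT (64 * 1024)

static bool last_was_fallback;

static void *try_contig_alloc(size_t size)
{
	if (size > CONTIG_LIMIT)
		return NULL;	/* simulate high-order allocation failure */
	return malloc(size);
}

/* Sketch of the kvmalloc idea: try the fast contiguous path first,
 * then fall back to a "vmalloc-like" path. kvfree's job is then to
 * remember which path was taken and free accordingly; here both paths
 * are plain malloc, so a flag stands in for is_vmalloc_addr(). */
static void *kvmalloc_sketch(size_t size)
{
	void *p = try_contig_alloc(size);

	last_was_fallback = (p == NULL);
	if (!p)
		p = malloc(size);	/* stands in for the vmalloc fallback */
	return p;
}

int main(void)
{
	void *small = kvmalloc_sketch(4096);
	assert(small && !last_was_fallback);	/* contiguous path taken */

	void *big = kvmalloc_sketch(1024 * 1024);
	assert(big && last_was_fallback);	/* fell back */

	free(small);
	free(big);
	puts("fallback semantics ok");
	return 0;
}

(The real kvmalloc_node() also takes a gfp mask and a NUMA node id, which
the sketch omits for brevity.)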

>
> > Suggested-by: Michal Hocko <mhocko@xxxxxxxxxx>
> > Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
>
> I do not see any reason this shouldn't work. Btw. did you get to test
> it?
>
> Feel free to add
> Acked-by: Michal Hocko <mhocko@xxxxxxxx>
> to both patches if you go and split.
>
> > ---
> > v2->v3:
> > Remove __GFP_NOWARN and use array_size when calling kvmalloc_node()
> > per Matthew's comments.
> >
> > mm/sparse.c | 27 +++------------------------
> > 1 file changed, 3 insertions(+), 24 deletions(-)
> >
> > diff --git a/mm/sparse.c b/mm/sparse.c
> > index bf6c00a28045..bb99633575b5 100644
> > --- a/mm/sparse.c
> > +++ b/mm/sparse.c
> > @@ -734,35 +734,14 @@ static void free_map_bootmem(struct page *memmap)
> > struct page * __meminit populate_section_memmap(unsigned long pfn,
> > unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
> > {
> > - struct page *page, *ret;
> > - unsigned long memmap_size = sizeof(struct page) * PAGES_PER_SECTION;
> > -
> > - page = alloc_pages(GFP_KERNEL|__GFP_NOWARN, get_order(memmap_size));
> > - if (page)
> > - goto got_map_page;
> > -
> > - ret = vmalloc(memmap_size);
> > - if (ret)
> > - goto got_map_ptr;
> > -
> > - return NULL;
> > -got_map_page:
> > - ret = (struct page *)pfn_to_kaddr(page_to_pfn(page));
> > -got_map_ptr:
> > -
> > - return ret;
> > + return kvmalloc_node(array_size(sizeof(struct page),
> > + PAGES_PER_SECTION), GFP_KERNEL, nid);
> > }
> >
> > static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
> > struct vmem_altmap *altmap)
> > {
> > - struct page *memmap = pfn_to_page(pfn);
> > -
> > - if (is_vmalloc_addr(memmap))
> > - vfree(memmap);
> > - else
> > - free_pages((unsigned long)memmap,
> > - get_order(sizeof(struct page) * PAGES_PER_SECTION));
> > + kvfree(pfn_to_page(pfn));
> > }
> >
> > static void free_map_bootmem(struct page *memmap)
> > --
> > 2.17.2
> >
>
> --
> Michal Hocko
> SUSE Labs
>