Re: [PATCH 2/3] mm/sparse: Optimize sparse_add_one_section()

From: Mike Rapoport
Date: Wed Mar 20 2019 - 07:27:42 EST


On Wed, Mar 20, 2019 at 06:13:18PM +0800, Baoquan He wrote:
> Hi Mike,
>
> On 03/20/19 at 09:56am, Mike Rapoport wrote:
> > > @@ -697,16 +697,17 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
> > > ret = sparse_index_init(section_nr, nid);
> > > if (ret < 0 && ret != -EEXIST)
> > > return ret;
> > > - ret = 0;
> > > - memmap = kmalloc_section_memmap(section_nr, nid, altmap);
> > > - if (!memmap)
> > > - return -ENOMEM;
> > > +
> > > usemap = __kmalloc_section_usemap();
> > > - if (!usemap) {
> > > - __kfree_section_memmap(memmap, altmap);
> > > + if (!usemap)
> > > + return -ENOMEM;
> > > + memmap = kmalloc_section_memmap(section_nr, nid, altmap);
> > > + if (!memmap) {
> > > + kfree(usemap);
> >
> > If you are changing this anyway, why not switch to gotos for error
> > handling?
>
> I update code change as below, could you check if it's OK to you?
>
> Thanks
> Baoquan
>
> From 39b679b6f34f6acbc05351be8569d23bae3c0458 Mon Sep 17 00:00:00 2001
> From: Baoquan He <bhe@xxxxxxxxxx>
> Date: Fri, 15 Mar 2019 16:03:52 +0800
> Subject: [PATCH] mm/sparse: Optimize sparse_add_one_section()
>
> Reorder the allocations of usemap and memmap, since the usemap
> allocation is much smaller and simpler. Otherwise, all the hard work
> done to make the memmap ready has to be rolled back just because the
> usemap allocation failed.
>
> Also update the error handling to cover usemap allocation failure.
>
> Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
> ---
> mm/sparse.c | 23 ++++++++++++-----------
> 1 file changed, 12 insertions(+), 11 deletions(-)
>
> diff --git a/mm/sparse.c b/mm/sparse.c
> index a99e0b253927..0e842b924be6 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -699,20 +699,21 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
> ret = sparse_index_init(section_nr, nid);
> if (ret < 0 && ret != -EEXIST)
> return ret;
> - ret = 0;
> - memmap = kmalloc_section_memmap(section_nr, nid, altmap);
> - if (!memmap)
> - return -ENOMEM;
> +
> usemap = __kmalloc_section_usemap();
> - if (!usemap) {
> - __kfree_section_memmap(memmap, altmap);
> + if (!usemap)
> return -ENOMEM;
> + memmap = kmalloc_section_memmap(section_nr, nid, altmap);
> + if (!memmap) {
> + ret = -ENOMEM;
> + goto out2;

I'd name the label out_free_usemap.

> }
>
> + ret = 0;
> ms = __pfn_to_section(start_pfn);
> if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
> ret = -EEXIST;
> - goto out;
> + goto out2;
> }

I missed this previously, but it seems this check can be moved before
the allocations, which would simplify the code a bit more.

>
> /*
> @@ -724,11 +725,11 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
> section_mark_present(ms);
> sparse_init_one_section(ms, section_nr, memmap, usemap);
>
> + return ret;
> out:
> - if (ret < 0) {
> - kfree(usemap);
> - __kfree_section_memmap(memmap, altmap);
> - }
> + __kfree_section_memmap(memmap, altmap);
> +out2:
> + kfree(usemap);
> return ret;
> }
>
> --
> 2.17.2
>

--
Sincerely yours,
Mike.