Re: [Part1 PATCH v5 00/22] x86, ACPI, numa: Parse numa info earlier

From: Tejun Heo
Date: Mon Jun 17 2013 - 22:04:11 EST


Hello,

On Thu, Jun 13, 2013 at 09:02:47PM +0800, Tang Chen wrote:
> One commit that tried to parse SRAT early got reverted before v3.9-rc1.
>
> | commit e8d1955258091e4c92d5a975ebd7fd8a98f5d30f
> | Author: Tang Chen <tangchen@xxxxxxxxxxxxxx>
> | Date: Fri Feb 22 16:33:44 2013 -0800
> |
> | acpi, memory-hotplug: parse SRAT before memblock is ready
>
> It broke several things, like the ACPI initrd override and the fallback path.
>
> This patchset is a clean implementation that parses NUMA info early.
> 1. Keep the ACPI table initrd override working by splitting finding from copying.
> Finding is done at the head_32.S and head64.c stage:
> in head_32.S, the initrd is accessed in 32-bit flat mode via its physical address;
> in head64.c, the initrd is accessed via the kernel low mapping address
> with the help of the #PF-set page table.
> Copying is done with early_ioremap just after memblock is set up.
> 2. Keep the fallback paths working: numaq, ACPI, amd_numa and dummy.
> Separate initmem_init into two stages:
> early_initmem_init only extracts NUMA info early into numa_meminfo;
> initmem_init keeps the SLIT and emulation handling.
> 3. Keep the rest of the old code flow untouched, like relocate_initrd and initmem_init.
> early_initmem_init takes the old init_mem_mapping position;
> it calls early_x86_numa_init and init_mem_mapping for every node.
> For 64-bit, we avoid a size limit on the initrd, as relocate_initrd
> still runs after init_mem_mapping for all memory.
> 4. The last patch tries to put page tables on the local node, so that memory
> hotplug will be happy.
>
> In short, early_initmem_init parses NUMA info early and calls
> init_mem_mapping to set up page tables for every node's memory.

So, can you please explain why you're doing the above? What are you
trying to achieve in the end and why is this the best approach? This
is all for memory hotplug, right?

I can understand the part where you're moving NUMA discovery ahead of
initializations that would otherwise get permanent allocations on the
wrong nodes, but trying to do the same with memblock itself makes the
code extremely fragile. It's nasty because nothing apparent seems to
necessitate such ordering. The ordering looks rather arbitrary, yet
changing it will subtly break memory hotplug support, which is a really
bad way to structure the code.

Can't you just move the memblock arrays after NUMA init is complete?
That'd be a lot simpler and way more robust than the proposed changes,
no?
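
That would be roughly in the spirit of what memblock_double_array()
already does when an array fills up: allocate a new home for the array,
copy, and repoint. A minimal userspace sketch of the idea (all names
hypothetical; malloc() stands in for a node-aware memblock allocation):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct region {
	unsigned long base;
	unsigned long size;
};

/* Bootstrap array in static storage, as memblock uses before any real
 * allocator is up.  Its placement is necessarily node-agnostic. */
struct region bootstrap_regions[] = {
	{ 0x00000000UL, 0x1000UL },
	{ 0x00100000UL, 0x2000UL },
};

struct region *regions = bootstrap_regions;
size_t nr_regions = 2;

/* Hypothetical: once NUMA info is known, reallocate the array onto a
 * well-placed node and copy the contents over. */
void relocate_memblock_arrays(void)
{
	/* stand-in for a node-local memblock allocation */
	struct region *new_arr = malloc(nr_regions * sizeof(*new_arr));

	if (!new_arr)
		return;
	memcpy(new_arr, regions, nr_regions * sizeof(*new_arr));
	regions = new_arr;
}
```

The point being that nothing later needs to care where the arrays
originally lived, so the early boot ordering stays unconstrained.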

Thanks.

--
tejun