[RFC PATCH 0/5] mm, memory_hotplug: allocate memmap from hotadded memory

From: Michal Hocko
Date: Wed Jul 26 2017 - 04:33:56 EST


Hi,
this is another step to make memory hotplug more usable. The primary
goal of this patchset is to reduce the memory overhead of hotadded
memory (at least for the SPARSE_VMEMMAP memory model). Currently we use
kmalloc to populate the memmap (struct page array), which has two main
drawbacks: a) it consumes additional memory until the hotadded memory
itself is onlined and b) the memmap might end up on a different NUMA
node, which is especially true for the movable_node configuration.

a) is a problem especially for memory hotplug based memory "ballooning"
solutions, where the delay between the physical memory hotplug and the
onlining can lead to OOM; that led to the introduction of hacks like auto
onlining (see 31bc3858ea3e ("memory-hotplug: add automatic onlining
policy for the newly added memory")).
b) can have performance drawbacks.

One way to mitigate both issues is to simply allocate the memmap array
(which is the largest memory footprint of the physical memory hotplug)
from the hotadded memory itself. The VMEMMAP memory model allows us to
map any pfn range, so the memory doesn't need to be online to be usable
for the array. See patch 3 for more details. In short, I am reusing the
existing vmem_altmap which achieves the same thing for nvdimm
device memory.
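
To illustrate the idea, here is a minimal sketch of how a hotplug path
could describe the new range so that its memmap is carved out of the
range itself. This is not code from the series: hotadd_populate_memmap()
is a made up helper, and the altmap parameter of vmemmap_populate()
stands for whatever form patches 2 and 3 end up wiring through the arch
code; only struct vmem_altmap and its fields exist as of today.

#include <linux/memremap.h>	/* struct vmem_altmap */
#include <linux/mm.h>		/* vmemmap_populate(), pfn_to_page() */

static int hotadd_populate_memmap(unsigned long start_pfn,
				  unsigned long nr_pages, int nid)
{
	/* describe the hotadded range itself as the backing storage */
	struct vmem_altmap altmap = {
		.base_pfn = start_pfn,
		.free = nr_pages,
	};
	unsigned long start = (unsigned long)pfn_to_page(start_pfn);
	unsigned long end = start + nr_pages * sizeof(struct page);

	/*
	 * vmemmap_populate() would then take its backing pages from the
	 * altmap rather than from the page allocator, so the memmap ends
	 * up inside the hotadded range and on the right NUMA node.
	 */
	return vmemmap_populate(start, end, nid, &altmap);
}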

I am sending this as an RFC because it has seen only very limited
testing and I am mostly interested in opinions on the chosen
approach. I had to touch some arch code and I have no idea whether my
changes make sense there (especially ppc). Therefore I would highly
appreciate it if arch maintainers could check patch 2.

Patches 4 and 5 should be straightforward cleanups.

There is also one potential drawback, though. If somebody uses memory
hotplug for 1G (gigantic) hugetlb pages then this scheme will obviously
not work for them because each memory section will contain a 2MB
reserved area. I am not really sure anybody does that, or how reliably
it can work in the first place. Nevertheless, I _believe_ that onlining
more memory into virtual machines is a much more common usecase. Anyway,
if there ever is a strong demand for such a usecase we have basically 3
options: a) enlarge memory sections, b) enhance the altmap allocation
strategy and reuse low memory sections to host memmaps of other sections
on the same NUMA node, or c) make the memmap allocation strategy
configurable so it can fall back to the current allocation.
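
For scale, the 2MB figure assumes the usual x86_64 configuration (128MB
sections, 4kB base pages, 64B struct page); other configurations will
differ:

	memmap per section = (128MB / 4kB) * 64B = 32768 * 64B = 2MB

i.e. roughly 1.5% of each hotadded section is consumed by its own
memmap, and because those pages are reserved such a section can no
longer be part of a fully free, physically contiguous 1GB range.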

Are there any other concerns, ideas, comments?

The patches are based on the current mmotm tree (mmotm-2017-07-12-15-11).

Diffstat says
 arch/arm64/mm/mmu.c            |  9 ++++--
 arch/ia64/mm/discontig.c       |  4 ++-
 arch/powerpc/mm/init_64.c      | 34 ++++++++++++++++------
 arch/s390/mm/vmem.c            |  7 +++--
 arch/sparc/mm/init_64.c        |  6 ++--
 arch/x86/mm/init_64.c          | 13 +++++++--
 include/linux/memory_hotplug.h |  7 +++--
 include/linux/memremap.h       | 34 +++++++++++++++-------
 include/linux/mm.h             | 25 ++++++++++++++--
 include/linux/page-flags.h     | 18 ++++++++++++
 kernel/memremap.c              |  6 ----
 mm/compaction.c                |  3 ++
 mm/memory_hotplug.c            | 66 +++++++++++++++++++-----------------------
 mm/page_alloc.c                | 25 ++++++++++++++--
 mm/page_isolation.c            | 11 ++++++-
 mm/sparse-vmemmap.c            | 13 +++++++--
 mm/sparse.c                    | 36 ++++++++++++++++-------
 17 files changed, 223 insertions(+), 94 deletions(-)

Shortlog
Michal Hocko (5):
mm, memory_hotplug: cleanup memory offline path
mm, arch: unify vmemmap_populate altmap handling
mm, memory_hotplug: allocate memmap from the added memory range for sparse-vmemmap
mm, sparse: complain about implicit altmap usage in vmemmap_populate
mm, sparse: rename kmalloc_section_memmap, __kfree_section_memmap