[RFC v3 PATCH 0/5] mm: zap pages with read mmap_sem in munmap for large mapping

From: Yang Shi
Date: Fri Jun 29 2018 - 18:40:47 EST



Background:
Recently, when we ran some vm scalability tests on machines with large memory,
we ran into a couple of mmap_sem scalability issues when unmapping large memory
spaces; please refer to https://lkml.org/lkml/2017/12/14/733 and
https://lkml.org/lkml/2018/2/20/576.


History:
Then akpm suggested unmapping large mappings section by section, dropping
mmap_sem after each section, to mitigate the issue (see
https://lkml.org/lkml/2018/3/6/784).

The v1 patch series was submitted to the mailing list per Andrew's suggestion
(see https://lkml.org/lkml/2018/3/20/786). I then received a lot of great
feedback and suggestions.

Then this topic was discussed at the LSFMM summit 2018. At the summit, Michal
Hocko suggested (also in the v1 patch review) trying a "two phases" approach:
zapping pages with read mmap_sem, then doing the cleanup with write mmap_sem
(for discussion details, see https://lwn.net/Articles/753269/).
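
To make the flow concrete, here is a minimal sketch of the idea. The helper
name zap_large_mapping() and the teardown details are illustrative assumptions
only; the actual patches refactor do_munmap() in mm/mmap.c instead:

	/*
	 * Sketch of the "two phases" flow, for illustration only.
	 * zap_large_mapping() is a hypothetical helper, not part of
	 * the patch series.
	 */
	static int zap_large_mapping(struct mm_struct *mm,
				     unsigned long start, size_t len,
				     struct list_head *uf)
	{
		/*
		 * Phase 1: with write mmap_sem, look up (and, if
		 * needed, split) the vmas and mark them VM_DEAD so
		 * concurrent page faults on them can be rejected.
		 */
		down_write(&mm->mmap_sem);
		/* ... find_vma()/split_vma(), set VM_DEAD on each vma ... */
		up_write(&mm->mmap_sem);

		/*
		 * Phase 2: zap the pages with only read mmap_sem held,
		 * so other threads of the process are not blocked for
		 * the whole (potentially long) unmap.
		 */
		down_read(&mm->mmap_sem);
		/* ... zap the page range [start, start + len) ... */
		up_read(&mm->mmap_sem);

		/*
		 * Phase 3: retake write mmap_sem to free the page
		 * tables and detach the vmas from the mm.
		 */
		down_write(&mm->mmap_sem);
		/* ... free_pgtables() and remove the vmas ... */
		up_write(&mm->mmap_sem);
		return 0;
	}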


Changelog:
v2 -> v3:
* Refactored the do_munmap() code to extract the common part, per Peter's
suggestion
* Introduced the VM_DEAD flag per Michal's suggestion. For now, VM_DEAD is only
handled in x86's page fault handler (see the sketch after this changelog);
other architectures will be covered once the patch series is reviewed
* Now look up the vma (find and split) and set the VM_DEAD flag with write
mmap_sem, then zap the mapping with read mmap_sem, then clean up pgtables and
vmas with write mmap_sem, per Peter's suggestion

v1 -> v2:
* Re-implemented the code per the discussion on LSFMM summit
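
As a rough illustration of the VM_DEAD handling described in the changelog
above, the fault-time check might look like the sketch below. It is written
against the __do_page_fault() slow path in arch/x86/mm/fault.c of this era;
the exact placement and error handling in the real patch may differ:

	/*
	 * Illustrative sketch only: how a VM_DEAD check could slot
	 * into the x86 page fault slow path (__do_page_fault() in
	 * arch/x86/mm/fault.c).
	 */
	vma = find_vma(mm, address);
	if (unlikely(!vma)) {
		bad_area(regs, error_code, address);
		return;
	}
	if (unlikely(vma->vm_flags & VM_DEAD)) {
		/*
		 * The vma is being torn down by a concurrent munmap()
		 * that holds only read mmap_sem while zapping pages;
		 * treat a fault on it like a fault on an unmapped
		 * area (SIGSEGV).
		 */
		bad_area(regs, error_code, address);
		return;
	}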


Regression and performance data:
Tests were run on a machine with 32 cores of E5-2680 @ 2.70GHz and 384GB of
memory.

For regression testing, I ran the full LTP suite and trinity (munmap) with the
threshold set to 4K in the code (for the regression test only), so that the new
code path gets better coverage and the trinity (munmap) test exercises 4K
mappings.

No regression was observed, and the system survived the trinity (munmap) test
for 4 hours until I aborted it.

Throughput of page faults (#/s) with the below stress-ng test:
stress-ng --mmap 0 --mmap-bytes 80G --mmap-file --metrics --perf
--timeout 600s
    pristine      patched       delta
    89.41K/sec    97.29K/sec    +8.8%

The result is not very stable and depends on timing, so it is for reference
only.


Yang Shi (5):
uprobes: make vma_has_uprobes non-static
mm: introduce VM_DEAD flag
mm: refactor do_munmap() to extract the common part
mm: mmap: zap pages with read mmap_sem for large mapping
x86: check VM_DEAD flag in page fault

arch/x86/mm/fault.c | 4 ++
include/linux/mm.h | 6 +++
include/linux/uprobes.h | 7 +++
kernel/events/uprobes.c | 2 +-
mm/mmap.c | 243 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++----------------
5 files changed, 224 insertions(+), 38 deletions(-)