[PATCH 0/4] mm: arm64: bring up BATCHED_UNMAP_TLB_FLUSH

From: Barry Song
Date: Thu Jul 07 2022 - 08:53:31 EST


Though ARM64 has the hardware to do tlb shootdown, it is not free.
Even the simplest micro benchmark shows that on a Snapdragon 888 with
only 8 cores, the overhead of ptep_clear_flush is significant even
when paging out one page mapped by only one process:
5.36% a.out [kernel.kallsyms] [k] ptep_clear_flush

When pages are mapped by multiple processes, or the hardware has more
CPUs, the cost is expected to grow even higher due to the poor
scalability of tlb shootdown.

This patchset leverages the existing BATCHED_UNMAP_TLB_FLUSH by:
1. only sending tlbi instructions in the first stage -
arch_tlbbatch_add_mm();
2. waiting for the completion of those tlbi instructions with a dsb
while doing the tlbbatch sync in arch_tlbbatch_flush().
My testing on Snapdragon shows the patchset removes the overhead of
ptep_clear_flush. The micro benchmark becomes 5% faster even for one
page mapped by a single process on Snapdragon 888.

While I believe the micro benchmark in 4/4 will perform even better
on arm64 servers, I don't have the hardware to test on. Thus,
Hi Yicong,
Would you like to run the same test in 4/4 on Kunpeng920?
Hi Darren,
Would you like to run the same test in 4/4 on Ampere's ARM64 server?
Remember to enable a zRAM swap device so that pageout can actually
happen for the micro benchmark.
Thanks!

Barry Song (4):
Revert "Documentation/features: mark BATCHED_UNMAP_TLB_FLUSH doesn't
apply to ARM64"
mm: rmap: Allow platforms without mm_cpumask to defer TLB flush
mm: rmap: Extend tlbbatch APIs to fit new platforms
arm64: support batched/deferred tlb shootdown during page reclamation

Documentation/features/arch-support.txt | 1 -
.../features/vm/TLB/arch-support.txt | 2 +-
arch/arm64/Kconfig | 1 +
arch/arm64/include/asm/tlbbatch.h | 12 +++++++++++
arch/arm64/include/asm/tlbflush.h | 13 ++++++++++++
arch/x86/include/asm/tlbflush.h | 4 +++-
mm/rmap.c | 21 +++++++++++++------
7 files changed, 45 insertions(+), 9 deletions(-)
create mode 100644 arch/arm64/include/asm/tlbbatch.h

--
2.25.1