Re: [PATCH] arm64: invalidate TLB before turning MMU on

From: James Morse
Date: Thu Dec 13 2018 - 05:44:08 EST


Hi Qian,

On 13/12/2018 05:22, Qian Cai wrote:
> On this HPE Apollo 70 arm64 server with 256 CPUs, triggering a crash
> dump just hung. It has 4 threads on each core, and each pair of cores
> shares the same L1 and L2 caches, so 8 CPUs share those. All CPUs share
> the same L3 cache.
>
> It turned out that this was because the TLB contained stale entries (or
> uninitialized junk that just happened to look valid) from the first kernel
> before the MMU was turned on in the second kernel, which caused this
> instruction to hang.

This is a great find, thanks for debugging this!

The kernel should already handle this, as we don't trust the bootloader to clean
up either.

In arch/arm64/mm/proc.S::__cpu_setup()
|/*
| * __cpu_setup
| *
| * Initialise the processor for turning the MMU on. Return in x0 the
| * value of the SCTLR_EL1 register.
| */
|	.pushsection ".idmap.text", "awx"
|ENTRY(__cpu_setup)
|	tlbi	vmalle1				// Invalidate local TLB
|	dsb	nsh

This is called from stext, which then branches to __primary_switch(), which
calls __enable_mmu(), where you see this problem. It shouldn't be possible
to allocate new TLB entries between these points...
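
For reference, the call path looks roughly like this (abridged; untested
paraphrase of head.S from around this time, so check your tree):

|ENTRY(stext)
|	bl	preserve_boot_args
|	bl	el2_setup			// Drop to EL1
|	...
|	bl	__create_page_tables
|	bl	__cpu_setup			// initialise processor (tlbi here)
|	b	__primary_switch		// ...which calls __enable_mmu()
|ENDPROC(stext)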

Do you have CONFIG_RANDOMIZE_BASE disabled? When it is enabled, __enable_mmu()
is called twice; the extra TLB maintenance is in __primary_switch().
(If it works with CONFIG_RANDOMIZE_BASE turned off, that points to the extra
off/tlbi/on sequence.)
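
The off/tlbi/on sequence in question looks roughly like this (abridged from
the CONFIG_RANDOMIZE_BASE path in __primary_switch(); my paraphrase, details
may differ on your tree):

|	pre_disable_mmu_workaround
|	msr	sctlr_el1, x20			// disable the MMU
|	isb
|	bl	__create_page_tables		// recreate kernel mapping
|
|	tlbi	vmalle1				// Remove any stale TLB entries
|	dsb	nsh
|
|	msr	sctlr_el1, x19			// re-enable the MMU
|	isb
|	ic	iallu				// flush instructions fetched
|	dsb	nsh				// via old mapping
|	isb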


> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 4471f570a295..5196f3d729de 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -771,6 +771,10 @@ ENTRY(__enable_mmu)
>  	msr	ttbr0_el1, x2			// load TTBR0
>  	msr	ttbr1_el1, x1			// load TTBR1
>  	isb
> +	dsb	nshst
> +	tlbi	vmalle1				// invalidate TLB
> +	dsb	nsh
> +	isb
>  	msr	sctlr_el1, x0
>  	isb
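
For anyone following along, my reading of what each added instruction is for:

|	dsb	nshst		// complete earlier page-table stores before the TLBI
|	tlbi	vmalle1		// invalidate all EL1 TLB entries on this CPU
|	dsb	nsh		// wait for the invalidation to complete
|	isb			// synchronise context before the SCTLR_EL1 write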

The overall change here is that we do extra maintenance later.

Can you move this around to bisect where the TLB entries are either coming
from, or failing to be invalidated?
Do your first and kdump kernels have the same VA_BITS/PAGE_SIZE?
As a stab in the dark (totally untested):
------------------------------%<------------------------------
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 2c75b0b903ae..a5f3b7314bda 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -406,9 +406,6 @@ ENDPROC(idmap_kpti_install_ng_mappings)
  */
 	.pushsection ".idmap.text", "awx"
 ENTRY(__cpu_setup)
-	tlbi	vmalle1				// Invalidate local TLB
-	dsb	nsh
-
 	mov	x0, #3 << 20
 	msr	cpacr_el1, x0			// Enable FP/ASIMD
 	mov	x0, #1 << 12			// Reset mdscr_el1 and disable
@@ -465,5 +462,10 @@ ENTRY(__cpu_setup)
 1:
 #endif	/* CONFIG_ARM64_HW_AFDBM */
 	msr	tcr_el1, x10
+	isb
+
+	tlbi	vmalle1				// Invalidate local TLB
+	dsb	nsh
+
 	ret					// return to head.S
 ENDPROC(__cpu_setup)
------------------------------%<------------------------------
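
The idea with this is to keep the invalidation, but do it as late as possible
in __cpu_setup(), after TCR_EL1 has been written (the extra isb is there so
the TCR_EL1 write takes effect before the tlbi). If this also makes the
problem go away, it would suggest entries are being allocated, or surviving,
somewhere between the old and new homes of the tlbi.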


Thanks,

James