[PATCH 5.1 096/121] riscv: mm: synchronize MMU after pte change

From: Greg Kroah-Hartman
Date: Mon Jun 24 2019 - 06:17:03 EST


From: ShihPo Hung <shihpo.hung@xxxxxxxxxx>

commit bf587caae305ae3b4393077fb22c98478ee55755 upstream.

Because RISC-V-compliant implementations can cache invalid entries
in the TLB, an SFENCE.VMA is necessary after changes to the page table.
This patch adds an SFENCE.VMA for the vmalloc_fault path.
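For reference, the local_flush_tlb_page() helper used in the fix below
is a thin wrapper around a single SFENCE.VMA targeting one virtual
address. A minimal sketch, based on the arch/riscv/include/asm/tlbflush.h
definition around this kernel version (comment is editorial):

static inline void local_flush_tlb_page(unsigned long addr)
{
	/*
	 * Order earlier page-table stores before later implicit
	 * address translations, and drop any cached translation,
	 * valid or invalid, for this address on the local hart.
	 */
	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
}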

Signed-off-by: ShihPo Hung <shihpo.hung@xxxxxxxxxx>
[paul.walmsley@xxxxxxxxxx: reversed tab->whitespace conversion,
wrapped comment lines]
Signed-off-by: Paul Walmsley <paul.walmsley@xxxxxxxxxx>
Cc: Palmer Dabbelt <palmer@xxxxxxxxxx>
Cc: Albert Ou <aou@xxxxxxxxxxxxxxxxx>
Cc: Paul Walmsley <paul.walmsley@xxxxxxxxxx>
Cc: linux-riscv@xxxxxxxxxxxxxxxxxxx
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
arch/riscv/mm/fault.c | 13 +++++++++++++
1 file changed, 13 insertions(+)

--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -29,6 +29,7 @@
 
 #include <asm/pgalloc.h>
 #include <asm/ptrace.h>
+#include <asm/tlbflush.h>
 
 /*
  * This routine handles page faults. It determines the address and the
@@ -281,6 +282,18 @@ vmalloc_fault:
 		pte_k = pte_offset_kernel(pmd_k, addr);
 		if (!pte_present(*pte_k))
 			goto no_context;
+
+		/*
+		 * The kernel assumes that TLBs don't cache invalid
+		 * entries, but in RISC-V, SFENCE.VMA specifies an
+		 * ordering constraint, not a cache flush; it is
+		 * necessary even after writing invalid entries.
+		 * Relying on flush_tlb_fix_spurious_fault would
+		 * suffice, but the extra traps reduce
+		 * performance. So, eagerly SFENCE.VMA.
+		 */
+		local_flush_tlb_page(addr);
+
 		return;
 	}
 }
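For context on the trade-off described in the new comment: an
architecture that does not define flush_tlb_fix_spurious_fault() gets
the generic fallback, which runs only after the hart takes another
(spurious) fault on the stale translation. A sketch, assuming the
include/asm-generic/pgtable.h fallback of this kernel era:

#ifndef flush_tlb_fix_spurious_fault
#define flush_tlb_fix_spurious_fault(vma, address) flush_tlb_page(vma, address)
#endif

Issuing the SFENCE.VMA eagerly in vmalloc_fault avoids paying that
extra trap on each hart that has cached the stale invalid entry.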