Re: [PATCH] implement flush_cache_vmap and flush_cache_vunmap for RISC-V

From: Palmer Dabbelt
Date: Sun Apr 11 2021 - 17:41:10 EST


On Sun, 28 Mar 2021 18:55:09 PDT (-0700), liu@xxxxxxxxxx wrote:
> This patch implements flush_cache_vmap and flush_cache_vunmap for
> RISC-V, since these functions might modify PTE. Without this patch,
> SFENCE.VMA won't be added to related codes, which might introduce a bug
> in some out-of-order micro-architecture implementations.
>
> Signed-off-by: Jiuyang Liu <liu@xxxxxxxxxx>
> ---
> arch/riscv/include/asm/cacheflush.h | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/arch/riscv/include/asm/cacheflush.h b/arch/riscv/include/asm/cacheflush.h
> index 23ff70350992..4adf25248c43 100644
> --- a/arch/riscv/include/asm/cacheflush.h
> +++ b/arch/riscv/include/asm/cacheflush.h
> @@ -8,6 +8,14 @@
>
> #include <linux/mm.h>
>
> +/*
> + * flush_cache_vmap and flush_cache_vunmap might modify PTE, needs SFENCE.VMA.
> + * - flush_cache_vmap is invoked after map_kernel_range() has installed the page table entries.
> + * - flush_cache_vunmap is invoked before unmap_kernel_range() deletes the page table entries

These should have line breaks.

> + */
> +#define flush_cache_vmap(start, end) flush_tlb_all()

We shouldn't need cache flushes for permission upgrades: the ISA allows the old mappings to remain visible until a fence, but the theory is that this window will be short on reasonable microarchitectures, so the overhead of flushing the entire TLB would overwhelm the cost of the extra faults. There are a handful of places where we flush preemptively, but those are generally cases where we can't handle the faults correctly.

If you have a benchmark that demonstrates a performance issue on real hardware here, then I'm happy to talk about this further, but this assumption is all over arch/riscv, so I'd prefer to keep things consistent for now.

> +#define flush_cache_vunmap(start, end) flush_tlb_all()

This one does seem necessary.

> +
> static inline void local_flush_icache_all(void)
> {
> asm volatile ("fence.i" ::: "memory");