Since apply_to_page_range() does not support operations on block
mappings, use the generic pagewalk API to enable changing permissions
for kernel block mappings. This paves the way for enabling huge
mappings by default on kernel-space mappings, leading to more efficient
TLB usage.
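
For context, the pte-level callback keeps the same shape as the old
apply_to_page_range() callback; only the plumbing moves to a
struct mm_walk_ops. A rough sketch with illustrative names, which may
differ from the patch below (the pmd/pud callbacks are sketched after
the next paragraph):

static int pageattr_pte_entry(pte_t *ptep, unsigned long addr,
			      unsigned long next, struct mm_walk *walk)
{
	struct page_change_data *cdata = walk->private;
	pte_t pte = __ptep_get(ptep);

	pte = clear_pte_bit(pte, cdata->clear_mask);
	pte = set_pte_bit(pte, cdata->set_mask);
	__set_pte(ptep, pte);
	return 0;
}

static const struct mm_walk_ops pageattr_ops = {
	.pud_entry	= pageattr_pud_entry,	/* PUD block mappings */
	.pmd_entry	= pageattr_pmd_entry,	/* PMD block mappings */
	.pte_entry	= pageattr_pte_entry,
};
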
We only require that the start and end of a given range lie on leaf
mapping boundaries. Return -EINVAL if a partial block mapping is
detected, and add a corresponding comment in ___change_memory_common()
to warn that avoiding such a condition is the responsibility of the
caller.
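
For example, a PMD-level callback can detect a partially covered block
because the walker clamps [addr, next) to the requested range
(illustrative sketch; the PUD-level callback is analogous with
PUD_SIZE):

static int pageattr_pmd_entry(pmd_t *pmdp, unsigned long addr,
			      unsigned long next, struct mm_walk *walk)
{
	struct page_change_data *cdata = walk->private;
	pmd_t pmd = pmdp_get(pmdp);

	/* Table entry: let the walker descend to the PTE level. */
	if (!pmd_leaf(pmd))
		return 0;

	/* A partially covered block means the caller failed to split. */
	if (next - addr != PMD_SIZE)
		return -EINVAL;

	pmd = __pmd((pmd_val(pmd) & ~pgprot_val(cdata->clear_mask)) |
		    pgprot_val(cdata->set_mask));
	set_pmd(pmdp, pmd);
	return 0;
}
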
apply_to_page_range() enters lazy MMU mode in its pte-level helper
(apply_to_pte_range()); we want to retain that behaviour after this
patch too. Ryan says:

"The only reason we traditionally confine the lazy mmu mode to a single
page table is because we want to enclose it within the PTL. But that
requirement doesn't stand for kernel mappings. As long as the walker can
guarantee that it doesn't allocate any memory (because with certain debug
settings that can cause lazy mmu nesting) or try to sleep then I think we
can just bracket the entire call."

Therefore, wrap the call to walk_kernel_page_table_range() with the
lazy MMU helpers, as sketched below.
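
Concretely, something like the following in ___change_memory_common()
(a sketch; data is the struct page_change_data carrying the set/clear
masks):

	arch_enter_lazy_mmu_mode();
	ret = walk_kernel_page_table_range(start, start + size,
					   &pageattr_ops, NULL, &data);
	arch_leave_lazy_mmu_mode();
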
Signed-off-by: Dev Jain <dev.jain@xxxxxxx>
---
arch/arm64/mm/pageattr.c | 158 +++++++++++++++++++++++++++++++--------
1 file changed, 126 insertions(+), 32 deletions(-)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 04d4a8f676db..2c118c0922ef 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -8,6 +8,7 @@
 #include <linux/mem_encrypt.h>
 #include <linux/sched.h>
 #include <linux/vmalloc.h>
+#include <linux/pagewalk.h>
 
 #include <asm/cacheflush.h>
 #include <asm/pgtable-prot.h>
@@ -20,6 +21,100 @@ struct page_change_data {
 	pgprot_t clear_mask;
 };