On 15/06/2025 08:32, Mike Rapoport wrote:
> On Fri, Jun 13, 2025 at 07:13:51PM +0530, Dev Jain wrote:
>> -/*
>> - * This function assumes that the range is mapped with PAGE_SIZE pages.
>> - */
>> -static int __change_memory_common(unsigned long start, unsigned long size,
>> +static int ___change_memory_common(unsigned long start, unsigned long size,
>>  				   pgprot_t set_mask, pgprot_t clear_mask)
>>  {
>>  	struct page_change_data data;
>> @@ -61,9 +140,28 @@ static int __change_memory_common(unsigned long start, unsigned long size,
>>  	data.set_mask = set_mask;
>>  	data.clear_mask = clear_mask;
>> -	ret = apply_to_page_range(&init_mm, start, size, change_page_range,
>> -				  &data);
>> +	arch_enter_lazy_mmu_mode();
>> +
>> +	/*
>> +	 * The caller must ensure that the range we are operating on does not
>> +	 * partially overlap a block mapping. Any such case should either not
>> +	 * exist, or must be eliminated by splitting the mapping - which for
>> +	 * kernel mappings can be done only on BBML2 systems.
>> +	 */
>> +	ret = walk_kernel_page_table_range_lockless(start, start + size,
>> +						    &pageattr_ops, NULL, &data);
>> +	arch_leave_lazy_mmu_mode();
>> +
>> +	return ret;
>> +}
> 
> x86 has a cpa_lock for set_memory/set_direct_map to ensure that there's no
> concurrency in kernel page table updates. I think arm64 has to have such
> lock as well.

We don't have a lock today, using apply_to_page_range(); we are expecting that
the caller has exclusive ownership of the portion of virtual memory - i.e. the
vmalloc region or linear map. So I don't think this patch changes that
requirement?
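
Just to spell out the contract I'm assuming, here is a sketch (mine,
not from this patch; a module-context example using the existing
vmalloc() and set_memory_*() APIs):

#include <linux/mm.h>
#include <linux/set_memory.h>
#include <linux/vmalloc.h>

/* Sketch: this caller owns [buf, buf + 4 pages) until vfree(), so no
 * other thread can legitimately be updating the same kernel PTEs, and
 * no extra lock is needed around the permission changes. */
static int demo_exclusive_owner(void)
{
	void *buf = vmalloc(4 * PAGE_SIZE);

	if (!buf)
		return -ENOMEM;

	set_memory_ro((unsigned long)buf, 4);	/* our pages, our PTEs */
	set_memory_rw((unsigned long)buf, 4);	/* restore before freeing */

	vfree(buf);
	return 0;
}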

Where it does get a bit more hairy is when we introduce support for
splitting. In that case, two non-overlapping areas of virtual memory may
share a large leaf mapping that needs to be split. But I've been discussing
that with Yang Shi at [1] and I think we can handle that locklessly too.
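
Roughly the shape that discussion points at is sketched below. This is
my illustration only: split_alloc_table(), make_table_pmd(),
pmd_try_cmpxchg() and split_free_table() are hypothetical stand-ins for
whatever we end up with, and the BBML2 break-before-make sequencing is
elided.

#include <linux/pgtable.h>

/* Two racing splitters may target the same PMD from non-overlapping
 * VA ranges; a compare-and-swap makes the race benign. */
static int split_pmd_lockless(pmd_t *pmdp)
{
	pmd_t old = READ_ONCE(*pmdp);
	pte_t *table;

	if (!pmd_leaf(old))
		return 0;	/* someone else already split it */

	/* Build a PTE table mirroring the existing block mapping. */
	table = split_alloc_table(old);		/* hypothetical */
	if (!table)
		return -ENOMEM;

	/*
	 * Only one of two racing splitters succeeds; the loser frees
	 * its table, and both callers then see a mapping that has
	 * already been split.
	 */
	if (!pmd_try_cmpxchg(pmdp, &old,
			     make_table_pmd(table)))	/* hypothetical */
		split_free_table(table);		/* hypothetical */

	return 0;
}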

Perhaps I'm misunderstanding something?

[1] https://lore.kernel.org/all/f036acea-1bd1-48a7-8600-75ddd504b8db@xxxxxxx/

Thanks,
Ryan