Re: [PATCH v3] arm64: mm: fix linear mapping mem access performance degradation

From: guanghui.fgh
Date: Sat Jul 02 2022 - 07:08:03 EST


Thanks.

On 2022/7/2 1:24, Catalin Marinas wrote:
> On Thu, Jun 30, 2022 at 06:50:22PM +0800, Guanghui Feng wrote:
> > +static void init_pmd_remap(pud_t *pudp, unsigned long addr, unsigned long end,
> > + phys_addr_t phys, pgprot_t prot,
> > + phys_addr_t (*pgtable_alloc)(int), int flags)
> > +{
> > + unsigned long next;
> > + pmd_t *pmdp;
> > + phys_addr_t map_offset;
> > + pmdval_t pmdval;
> > +
> > + pmdp = pmd_offset(pudp, addr);
> > + do {
> > + next = pmd_addr_end(addr, end);
> > +
> > + if (!pmd_none(*pmdp) && pmd_sect(*pmdp)) {
> > + phys_addr_t pte_phys = pgtable_alloc(PAGE_SHIFT);
> > + pmd_clear(pmdp);
> > + pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN;
> > + if (flags & NO_EXEC_MAPPINGS)
> > + pmdval |= PMD_TABLE_PXN;
> > + __pmd_populate(pmdp, pte_phys, pmdval);
> > + flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
>
> This doesn't follow the architecture requirements for "break before
> make" when changing live page tables. While it may work, it risks
> triggering a TLB conflict abort. The correct sequence normally is:
>
>	pmd_clear();
>	flush_tlb_kernel_range();
>	__pmd_populate();
>
> However, do we have any guarantees that the kernel doesn't access the
> pmd range being unmapped temporarily? The page table itself might live
> in one of these sections, so set_pmd() etc. can get a translation fault.
Thanks.
1. When reserving and remapping memory, only the boot CPU is running; there is no other CPU/thread/process running.
At that point only the boot CPU remaps and modifies the linear memory mapping, so no CPU can be accessing the affected linear mapping (the boot CPU is rebuilding it, and the other CPUs haven't been booted yet).

2. Because the kernel image mapping and the linear memory mapping are built by two separate paths, map_kernel() and map_mem(), rebuilding the linear mapping (created by map_mem()) has no effect on the kernel image mapping, as sketched below.
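
For context, the two paths are invoked separately from paging_init() (a condensed sketch of arch/arm64/mm/mmu.c from memory; exact contents vary across kernel versions):

	void __init paging_init(void)
	{
		pgd_t *pgdp = pgd_set_fixmap(__pa_symbol(swapper_pg_dir));

		map_kernel(pgdp);	/* kernel image: text, rodata, init, data */
		map_mem(pgdp);		/* linear map of all memblock memory */

		pgd_clear_fixmap();
		/* ... */
	}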

As a result, I think this has no harmful effect on either the linear memory mapping or the kernel image mapping.
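
For reference, reordering the quoted hunk to follow the break-before-make sequence Catalin describes would look roughly like this (a minimal sketch reusing only the helpers and flags from the quoted diff; flushing the whole section addr..next rather than a single page is my reading, not part of the original patch):

	if (!pmd_none(*pmdp) && pmd_sect(*pmdp)) {
		phys_addr_t pte_phys = pgtable_alloc(PAGE_SHIFT);

		/* Break: remove the live block mapping first... */
		pmd_clear(pmdp);
		/* ...and invalidate any cached translations for it. */
		flush_tlb_kernel_range(addr, next);

		/* Make: only then install the new table entry. */
		pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN;
		if (flags & NO_EXEC_MAPPINGS)
			pmdval |= PMD_TABLE_PXN;
		__pmd_populate(pmdp, pte_phys, pmdval);
	}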