Re: [PATCH 0/5] Rewrite Motorola MMU page-table layout

From: Greg Ungerer
Date: Fri Jan 31 2020 - 08:14:14 EST


Hi Will,

On 31/1/20 7:38 pm, Will Deacon wrote:
> On Fri, Jan 31, 2020 at 04:31:48PM +1000, Greg Ungerer wrote:
>> On 29/1/20 8:39 pm, Peter Zijlstra wrote:
>>> In order to facilitate Will's READ_ONCE() patches:
>>>
>>>   https://lkml.kernel.org/r/20200123153341.19947-1-will@xxxxxxxxxx
>>>
>>> we need to fix m68k/motorola to not have a giant pmd_t. These patches do so and
>>> are tested using ARAnyM/68040.
>>>
>>> It would be very good if someone can either test or tell us what emulator to
>>> use for 020/030.
>>
>> This series breaks compilation for the ColdFire (with MMU) variant of
>> the m68k family:
>>
>> [...]
>>
>> Easy to reproduce. Build for the m5475evb_defconfig.
>
> I've hacked up a fix below, but I don't know how to test whether it actually
> works (it does fix the build).

Yep, I can confirm that too.
There are no emulators for the MMU-based ColdFires (qemu only supports
a non-MMU variant).

I can test on real hardware - but not until Monday when I am back
in my lab. I'll report back then.


> However, I also notice that building for
> m5475evb_defconfig with vanilla v5.5 triggers this scary looking warning
> due to a mismatch between the pgd size and the (8k!) page size:
>
> | In function 'pgd_alloc.isra.111',
> |     inlined from 'mm_alloc_pgd' at kernel/fork.c:634:12,
> |     inlined from 'mm_init.isra.112' at kernel/fork.c:1043:6:
> | ./arch/m68k/include/asm/string.h:72:25: warning: '__builtin_memcpy' forming offset [4097, 8192] is out of the bounds [0, 4096] of object 'kernel_pg_dir' with type 'pgd_t[1024]' {aka 'struct <anonymous>[1024]'} [-Warray-bounds]
> |  #define memcpy(d, s, n) __builtin_memcpy(d, s, n)
> |                          ^~~~~~~~~~~~~~~~~~~~~~~~~
> | ./arch/m68k/include/asm/mcf_pgalloc.h:93:2: note: in expansion of macro 'memcpy'
> |   memcpy(new_pgd, swapper_pg_dir, PAGE_SIZE);
> |   ^~~~~~
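
The numbers behind that warning, as a quick standalone sketch; the constants
below are read off the diagnostic itself (a 1024-entry, 4096-byte pgd table
against ColdFire's 8k pages) rather than pulled from the kernel headers:

/* Illustration only: mirrors what gcc reports above, not the kernel's
 * actual PAGE_SIZE/PTRS_PER_PGD definitions. */
#include <stdio.h>

#define COLDFIRE_PAGE_SIZE      8192u   /* 8k pages on the MMU ColdFires */
#define PTRS_PER_PGD_HERE       1024u   /* pgd_t[1024], per the warning */
#define SIZEOF_PGD_T_HERE       4u      /* 4096 bytes / 1024 entries */

int main(void)
{
        unsigned int pgd_table_bytes = PTRS_PER_PGD_HERE * SIZEOF_PGD_T_HERE;

        /* memcpy(new_pgd, swapper_pg_dir, PAGE_SIZE) copies 8192 bytes out
         * of a 4096-byte object, hence the "[4097, 8192] is out of the
         * bounds [0, 4096]" above. */
        printf("bytes copied by memcpy(..., PAGE_SIZE): %u\n", COLDFIRE_PAGE_SIZE);
        printf("bytes actually backing swapper_pg_dir:  %u\n", pgd_table_bytes);
        printf("bytes read past the end of the object:  %u\n",
               COLDFIRE_PAGE_SIZE - pgd_table_bytes);
        return 0;
}

Copying PTRS_PER_PGD * sizeof(pgd_t) bytes instead stays within the object,
which is what the fix below does.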


> I think the correct fix is to add this:
>
> diff --git a/arch/m68k/include/asm/mcf_pgalloc.h b/arch/m68k/include/asm/mcf_pgalloc.h
> index 82ec54c2eaa4..c335e6a381a1 100644
> --- a/arch/m68k/include/asm/mcf_pgalloc.h
> +++ b/arch/m68k/include/asm/mcf_pgalloc.h
> @@ -90,7 +90,7 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm)
>          new_pgd = (pgd_t *)__get_free_page(GFP_DMA | __GFP_NOWARN);
>          if (!new_pgd)
>                  return NULL;
> -        memcpy(new_pgd, swapper_pg_dir, PAGE_SIZE);
> +        memcpy(new_pgd, swapper_pg_dir, PTRS_PER_PGD * sizeof(pgd_t));
>          memset(new_pgd, 0, PAGE_OFFSET >> PGDIR_SHIFT);
>          return new_pgd;
>  }
>
> but maybe it should be done as a separate patch given that it's not caused
> by the rework we've been doing.

Indeed I hadn't noticed that before. But good idea, a separate patch
would make sense.

Regards
Greg


> Will
>
> --->8
>
> diff --git a/arch/m68k/include/asm/mcf_pgalloc.h b/arch/m68k/include/asm/mcf_pgalloc.h
> index 82ec54c2eaa4..955d54a6e973 100644
> --- a/arch/m68k/include/asm/mcf_pgalloc.h
> +++ b/arch/m68k/include/asm/mcf_pgalloc.h
> @@ -28,21 +28,22 @@ extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address)
>          return (pmd_t *) pgd;
>  }
>
> -#define pmd_populate(mm, pmd, page) (pmd_val(*pmd) = \
> -        (unsigned long)(page_address(page)))
> +#define pmd_populate(mm, pmd, pte) (pmd_val(*pmd) = (unsigned long)(pte))
>
> -#define pmd_populate_kernel(mm, pmd, pte) (pmd_val(*pmd) = (unsigned long)(pte))
> +#define pmd_populate_kernel pmd_populate
>
> -#define pmd_pgtable(pmd) pmd_page(pmd)
> +#define pmd_pgtable(pmd) pfn_to_virt(pmd_val(pmd) >> PAGE_SHIFT)
>
> -static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t page,
> +static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pgtable,
>                                    unsigned long address)
>  {
> +        struct page *page = virt_to_page(pgtable);
> +
>          pgtable_pte_page_dtor(page);
>          __free_page(page);
>  }
>
> -static inline struct page *pte_alloc_one(struct mm_struct *mm)
> +static inline pgtable_t pte_alloc_one(struct mm_struct *mm)
>  {
>          struct page *page = alloc_pages(GFP_DMA, 0);
>          pte_t *pte;
> @@ -54,20 +55,19 @@ static inline struct page *pte_alloc_one(struct mm_struct *mm)
>                  return NULL;
>          }
>
> -        pte = kmap(page);
> -        if (pte) {
> -                clear_page(pte);
> -                __flush_page_to_ram(pte);
> -                flush_tlb_kernel_page(pte);
> -                nocache_page(pte);
> -        }
> -        kunmap(page);
> +        pte = page_address(page);
> +        clear_page(pte);
> +        __flush_page_to_ram(pte);
> +        flush_tlb_kernel_page(pte);
> +        nocache_page(pte);
>
> -        return page;
> +        return pte;
>  }
>
> -static inline void pte_free(struct mm_struct *mm, struct page *page)
> +static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable)
>  {
> +        struct page *page = virt_to_page(pgtable);
> +
>          pgtable_pte_page_dtor(page);
>          __free_page(page);
>  }
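
For anyone trying to follow what the hack changes: pgtable_t on this path
stops being a struct page * and becomes the pte table's kernel virtual
address, so pmd_populate() stores that address directly and the free paths
convert back with virt_to_page(). A rough userspace mock of that calling
convention (every type and helper below is a stand-in for illustration,
not the kernel's API):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { uint32_t pte; } pte_t;
typedef struct { uint32_t pmd; } pmd_t;
typedef pte_t *pgtable_t;               /* a virtual address, not a page */

#define MOCK_PTRS_PER_PTE 512           /* illustrative size only */

/* pte_alloc_one() now hands back the table's address directly ... */
static pgtable_t mock_pte_alloc_one(void)
{
        return calloc(MOCK_PTRS_PER_PTE, sizeof(pte_t));
}

/* ... pmd_populate() stores that address in the pmd entry ... */
static void mock_pmd_populate(pmd_t *pmd, pgtable_t pte)
{
        /* truncated on 64-bit hosts; the real m68k value is 32-bit */
        pmd->pmd = (uint32_t)(uintptr_t)pte;
}

/* ... and the free side takes the same address back (the real code
 * recovers the struct page with virt_to_page() before freeing). */
static void mock_pte_free(pgtable_t pgtable)
{
        free(pgtable);
}

int main(void)
{
        pmd_t pmd;
        pgtable_t pte = mock_pte_alloc_one();

        mock_pmd_populate(&pmd, pte);
        printf("pmd entry now holds %#x\n", (unsigned int)pmd.pmd);
        mock_pte_free(pte);
        return 0;
}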