Re: [PATCH V2 2/2] mm/pgtable/debug: Add test validating architecture page table helpers

From: Christophe Leroy
Date: Thu Sep 12 2019 - 11:36:50 EST




On 12/09/2019 at 17:00, Christophe Leroy wrote:


On 09/12/2019 06:02 AM, Anshuman Khandual wrote:
This adds a test module which validates architecture page table helpers
and accessors for compliance with generic MM semantics. This will help
various architectures validate changes to existing page table helpers
or the addition of new ones.

The test page table, and the memory pages backing its entries at various
levels, are all allocated from system memory with the required alignments.
If memory pages with the required size and alignment cannot be allocated,
all the individual tests that depend on them are skipped.

Build failure on powerpc book3s/32. This is because asm/highmem.h is missing. It can't be included from asm/book3s/32/pgtable.h because that would create a circular dependency, so it has to be included from mm/arch_pgtable_test.c.

In fact it is <linux/highmem.h> that needs to be added; including <asm/highmem.h> directly causes a build failure at link time.
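For reference, the fix described here would amount to something like the following hunk against mm/arch_pgtable_test.c (illustrative only, not the actual posted fix; hunk line numbers elided):

--- a/mm/arch_pgtable_test.c
+++ b/mm/arch_pgtable_test.c
@@ ... @@
 #include <linux/gfp.h>
+#include <linux/highmem.h>
 #include <linux/hugetlb.h>

<linux/highmem.h> provides kmap_atomic() (or a generic fallback) on all configurations, whereas <asm/highmem.h> must not be included directly.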

Christophe




  CC      mm/arch_pgtable_test.o
In file included from ./arch/powerpc/include/asm/book3s/pgtable.h:8:0,
                 from ./arch/powerpc/include/asm/pgtable.h:18,
                 from ./include/linux/mm.h:99,
                 from ./arch/powerpc/include/asm/io.h:29,
                 from ./include/linux/io.h:13,
                 from ./include/linux/irq.h:20,
                 from ./arch/powerpc/include/asm/hardirq.h:6,
                 from ./include/linux/hardirq.h:9,
                 from ./include/linux/interrupt.h:11,
                 from ./include/linux/kernel_stat.h:9,
                 from ./include/linux/cgroup.h:26,
                 from ./include/linux/hugetlb.h:9,
                 from mm/arch_pgtable_test.c:14:
mm/arch_pgtable_test.c: In function 'arch_pgtable_tests_init':
./arch/powerpc/include/asm/book3s/32/pgtable.h:365:13: error: implicit declaration of function 'kmap_atomic' [-Werror=implicit-function-declaration]
  ((pte_t *)(kmap_atomic(pmd_page(*(dir))) + \
             ^
./include/linux/mm.h:2008:31: note: in expansion of macro 'pte_offset_map'
  (pte_alloc(mm, pmd) ? NULL : pte_offset_map(pmd, address))
                               ^
mm/arch_pgtable_test.c:377:9: note: in expansion of macro 'pte_alloc_map'
  ptep = pte_alloc_map(mm, pmdp, vaddr);
         ^
cc1: some warnings being treated as errors
make[2]: *** [mm/arch_pgtable_test.o] Error 1


Christophe



Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Vlastimil Babka <vbabka@xxxxxxx>
Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Mike Rapoport <rppt@xxxxxxxxxxxxxxxxxx>
Cc: Jason Gunthorpe <jgg@xxxxxxxx>
Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Mark Rutland <mark.rutland@xxxxxxx>
Cc: Mark Brown <broonie@xxxxxxxxxx>
Cc: Steven Price <Steven.Price@xxxxxxx>
Cc: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
Cc: Masahiro Yamada <yamada.masahiro@xxxxxxxxxxxxx>
Cc: Kees Cook <keescook@xxxxxxxxxxxx>
Cc: Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Sri Krishna chowdary <schowdary@xxxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
Cc: Russell King - ARM Linux <linux@xxxxxxxxxxxxxxx>
Cc: Michael Ellerman <mpe@xxxxxxxxxxxxxx>
Cc: Paul Mackerras <paulus@xxxxxxxxx>
Cc: Martin Schwidefsky <schwidefsky@xxxxxxxxxx>
Cc: Heiko Carstens <heiko.carstens@xxxxxxxxxx>
Cc: "David S. Miller" <davem@xxxxxxxxxxxxx>
Cc: Vineet Gupta <vgupta@xxxxxxxxxxxx>
Cc: James Hogan <jhogan@xxxxxxxxxx>
Cc: Paul Burton <paul.burton@xxxxxxxx>
Cc: Ralf Baechle <ralf@xxxxxxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill@xxxxxxxxxxxxx>
Cc: Gerald Schaefer <gerald.schaefer@xxxxxxxxxx>
Cc: Christophe Leroy <christophe.leroy@xxxxxx>
Cc: linux-snps-arc@xxxxxxxxxxxxxxxxxxx
Cc: linux-mips@xxxxxxxxxxxxxxx
Cc: linux-arm-kernel@xxxxxxxxxxxxxxxxxxx
Cc: linux-ia64@xxxxxxxxxxxxxxx
Cc: linuxppc-dev@xxxxxxxxxxxxxxxx
Cc: linux-s390@xxxxxxxxxxxxxxx
Cc: linux-sh@xxxxxxxxxxxxxxx
Cc: sparclinux@xxxxxxxxxxxxxxx
Cc: x86@xxxxxxxxxx
Cc: linux-kernel@xxxxxxxxxxxxxxx

Suggested-by: Catalin Marinas <catalin.marinas@xxxxxxx>
Signed-off-by: Anshuman Khandual <anshuman.khandual@xxxxxxx>
---
 arch/x86/include/asm/pgtable_64_types.h |   2 +
 mm/Kconfig.debug                        |  14 +
 mm/Makefile                             |   1 +
 mm/arch_pgtable_test.c                  | 429 ++++++++++++++++++++++++
 4 files changed, 446 insertions(+)
 create mode 100644 mm/arch_pgtable_test.c

diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 52e5f5f2240d..b882792a3999 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -40,6 +40,8 @@ static inline bool pgtable_l5_enabled(void)
 #define pgtable_l5_enabled() 0
 #endif /* CONFIG_X86_5LEVEL */
+#define mm_p4d_folded(mm) (!pgtable_l5_enabled())
+
 extern unsigned int pgdir_shift;
 extern unsigned int ptrs_per_p4d;
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 327b3ebf23bf..ce9c397f7b07 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -117,3 +117,17 @@ config DEBUG_RODATA_TEST
 	depends on STRICT_KERNEL_RWX
 	---help---
 	  This option enables a testcase for the setting rodata read-only.
+
+config DEBUG_ARCH_PGTABLE_TEST
+	bool "Test arch page table helpers for semantics compliance"
+	depends on MMU
+	depends on DEBUG_KERNEL
+	help
+	  This option provides a kernel module which can be used to test
+	  architecture page table helper functions on various platforms,
+	  verifying that they comply with expected generic MM semantics.
+	  This will help architecture code in making sure that any changes
+	  or new additions of these helpers will still conform to expected
+	  generic MM semantics.
+
+	  If unsure, say N.
diff --git a/mm/Makefile b/mm/Makefile
index d996846697ef..bb572c5aa8c5 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -86,6 +86,7 @@ obj-$(CONFIG_HWPOISON_INJECT) += hwpoison-inject.o
 obj-$(CONFIG_DEBUG_KMEMLEAK) += kmemleak.o
 obj-$(CONFIG_DEBUG_KMEMLEAK_TEST) += kmemleak-test.o
 obj-$(CONFIG_DEBUG_RODATA_TEST) += rodata_test.o
+obj-$(CONFIG_DEBUG_ARCH_PGTABLE_TEST) += arch_pgtable_test.o
 obj-$(CONFIG_PAGE_OWNER) += page_owner.o
 obj-$(CONFIG_CLEANCACHE) += cleancache.o
 obj-$(CONFIG_MEMORY_ISOLATION) += page_isolation.o
diff --git a/mm/arch_pgtable_test.c b/mm/arch_pgtable_test.c
new file mode 100644
index 000000000000..8b4a92756ad8
--- /dev/null
+++ b/mm/arch_pgtable_test.c
@@ -0,0 +1,429 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * This kernel module validates architecture page table helpers &
+ * accessors and helps in verifying their continued compliance with
+ * generic MM semantics.
+ *
+ * Copyright (C) 2019 ARM Ltd.
+ *
+ * Author: Anshuman Khandual <anshuman.khandual@xxxxxxx>
+ */
+#define pr_fmt(fmt) "arch_pgtable_test: %s " fmt, __func__
+
+#include <linux/gfp.h>
+#include <linux/hugetlb.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/mman.h>
+#include <linux/mm_types.h>
+#include <linux/module.h>
+#include <linux/pfn_t.h>
+#include <linux/printk.h>
+#include <linux/random.h>
+#include <linux/spinlock.h>
+#include <linux/swap.h>
+#include <linux/swapops.h>
+#include <linux/sched/mm.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+
+/*
+ * Basic operations
+ *
+ * mkold(entry)			= An old and not a young entry
+ * mkyoung(entry)		= A young and not an old entry
+ * mkdirty(entry)		= A dirty and not a clean entry
+ * mkclean(entry)		= A clean and not a dirty entry
+ * mkwrite(entry)		= A write and not a write protected entry
+ * wrprotect(entry)		= A write protected and not a write entry
+ * pxx_bad(entry)		= A mapped and non-table entry
+ * pxx_same(entry1, entry2)	= Both entries hold the exact same value
+ */
+#define VMFLAGS	(VM_READ|VM_WRITE|VM_EXEC)
+
+/*
+ * On the s390 platform, the lower 12 bits are used to identify a given
+ * page table entry type and for other arch specific requirements. But
+ * these bits might affect the ability to clear entries with pxx_clear().
+ * So while loading up the entries, skip all lower 12 bits in order to
+ * accommodate the s390 platform. It does not affect any other platform.
+ */
+#define RANDOM_ORVALUE	(0xfffffffffffff000UL)
+#define RANDOM_NZVALUE	(0xff)
+
+static bool pud_aligned;
+static bool pmd_aligned;
+
+static void pte_basic_tests(struct page *page, pgprot_t prot)
+{
+	pte_t pte = mk_pte(page, prot);
+
+	WARN_ON(!pte_same(pte, pte));
+	WARN_ON(!pte_young(pte_mkyoung(pte)));
+	WARN_ON(!pte_dirty(pte_mkdirty(pte)));
+	WARN_ON(!pte_write(pte_mkwrite(pte)));
+	WARN_ON(pte_young(pte_mkold(pte)));
+	WARN_ON(pte_dirty(pte_mkclean(pte)));
+	WARN_ON(pte_write(pte_wrprotect(pte)));
+}
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE
+static void pmd_basic_tests(struct page *page, pgprot_t prot)
+{
+	pmd_t pmd;
+
+	/*
+	 * Memory block here must be PMD_SIZE aligned. Abort this
+	 * test in case we could not allocate such a memory block.
+	 */
+	if (!pmd_aligned) {
+		pr_warn("Could not proceed with PMD tests\n");
+		return;
+	}
+
+	pmd = mk_pmd(page, prot);
+	WARN_ON(!pmd_same(pmd, pmd));
+	WARN_ON(!pmd_young(pmd_mkyoung(pmd)));
+	WARN_ON(!pmd_dirty(pmd_mkdirty(pmd)));
+	WARN_ON(!pmd_write(pmd_mkwrite(pmd)));
+	WARN_ON(pmd_young(pmd_mkold(pmd)));
+	WARN_ON(pmd_dirty(pmd_mkclean(pmd)));
+	WARN_ON(pmd_write(pmd_wrprotect(pmd)));
+	/*
+	 * A huge page does not point to next level page table
+	 * entry. Hence this must qualify as pmd_bad().
+	 */
+	WARN_ON(!pmd_bad(pmd_mkhuge(pmd)));
+}
+#else
+static void pmd_basic_tests(struct page *page, pgprot_t prot) { }
+#endif
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+static void pud_basic_tests(struct page *page, pgprot_t prot)
+{
+	pud_t pud;
+
+	/*
+	 * Memory block here must be PUD_SIZE aligned. Abort this
+	 * test in case we could not allocate such a memory block.
+	 */
+	if (!pud_aligned) {
+		pr_warn("Could not proceed with PUD tests\n");
+		return;
+	}
+
+	pud = pfn_pud(page_to_pfn(page), prot);
+	WARN_ON(!pud_same(pud, pud));
+	WARN_ON(!pud_young(pud_mkyoung(pud)));
+	WARN_ON(!pud_write(pud_mkwrite(pud)));
+	WARN_ON(pud_write(pud_wrprotect(pud)));
+	WARN_ON(pud_young(pud_mkold(pud)));
+
+#if !defined(__PAGETABLE_PMD_FOLDED) && !defined(__ARCH_HAS_4LEVEL_HACK)
+	/*
+	 * A huge page does not point to next level page table
+	 * entry. Hence this must qualify as pud_bad().
+	 */
+	WARN_ON(!pud_bad(pud_mkhuge(pud)));
+#endif
+}
+#else
+static void pud_basic_tests(struct page *page, pgprot_t prot) { }
+#endif
+
+static void p4d_basic_tests(struct page *page, pgprot_t prot)
+{
+	p4d_t p4d;
+
+	memset(&p4d, RANDOM_NZVALUE, sizeof(p4d_t));
+	WARN_ON(!p4d_same(p4d, p4d));
+}
+
+static void pgd_basic_tests(struct page *page, pgprot_t prot)
+{
+	pgd_t pgd;
+
+	memset(&pgd, RANDOM_NZVALUE, sizeof(pgd_t));
+	WARN_ON(!pgd_same(pgd, pgd));
+}
+
+#if !defined(__PAGETABLE_PMD_FOLDED) && !defined(__ARCH_HAS_4LEVEL_HACK)
+static void pud_clear_tests(pud_t *pudp)
+{
+	pud_t pud = READ_ONCE(*pudp);
+
+	pud = __pud(pud_val(pud) | RANDOM_ORVALUE);
+	WRITE_ONCE(*pudp, pud);
+	pud_clear(pudp);
+	pud = READ_ONCE(*pudp);
+	WARN_ON(!pud_none(pud));
+}
+
+static void pud_populate_tests(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp)
+{
+	pud_t pud;
+
+	/*
+	 * This entry points to next level page table page.
+	 * Hence this must not qualify as pud_bad().
+	 */
+	pmd_clear(pmdp);
+	pud_clear(pudp);
+	pud_populate(mm, pudp, pmdp);
+	pud = READ_ONCE(*pudp);
+	WARN_ON(pud_bad(pud));
+}
+#else
+static void pud_clear_tests(pud_t *pudp) { }
+static void pud_populate_tests(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp)
+{
+}
+#endif
+
+#if !defined(__PAGETABLE_PUD_FOLDED) && !defined(__ARCH_HAS_5LEVEL_HACK)
+static void p4d_clear_tests(p4d_t *p4dp)
+{
+	p4d_t p4d = READ_ONCE(*p4dp);
+
+	p4d = __p4d(p4d_val(p4d) | RANDOM_ORVALUE);
+	WRITE_ONCE(*p4dp, p4d);
+	p4d_clear(p4dp);
+	p4d = READ_ONCE(*p4dp);
+	WARN_ON(!p4d_none(p4d));
+}
+
+static void p4d_populate_tests(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp)
+{
+	p4d_t p4d;
+
+	/*
+	 * This entry points to next level page table page.
+	 * Hence this must not qualify as p4d_bad().
+	 */
+	pud_clear(pudp);
+	p4d_clear(p4dp);
+	p4d_populate(mm, p4dp, pudp);
+	p4d = READ_ONCE(*p4dp);
+	WARN_ON(p4d_bad(p4d));
+}
+#else
+static void p4d_clear_tests(p4d_t *p4dp) { }
+static void p4d_populate_tests(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp)
+{
+}
+#endif
+
+#ifndef __ARCH_HAS_5LEVEL_HACK
+static void pgd_clear_tests(struct mm_struct *mm, pgd_t *pgdp)
+{
+	pgd_t pgd = READ_ONCE(*pgdp);
+
+	if (mm_p4d_folded(mm))
+		return;
+
+	pgd = __pgd(pgd_val(pgd) | RANDOM_ORVALUE);
+	WRITE_ONCE(*pgdp, pgd);
+	pgd_clear(pgdp);
+	pgd = READ_ONCE(*pgdp);
+	WARN_ON(!pgd_none(pgd));
+}
+
+static void pgd_populate_tests(struct mm_struct *mm, pgd_t *pgdp, p4d_t *p4dp)
+{
+	pgd_t pgd;
+
+	if (mm_p4d_folded(mm))
+		return;
+
+	/*
+	 * This entry points to next level page table page.
+	 * Hence this must not qualify as pgd_bad().
+	 */
+	p4d_clear(p4dp);
+	pgd_clear(pgdp);
+	pgd_populate(mm, pgdp, p4dp);
+	pgd = READ_ONCE(*pgdp);
+	WARN_ON(pgd_bad(pgd));
+}
+#else
+static void pgd_clear_tests(struct mm_struct *mm, pgd_t *pgdp) { }
+static void pgd_populate_tests(struct mm_struct *mm, pgd_t *pgdp, p4d_t *p4dp)
+{
+}
+#endif
+
+static void pte_clear_tests(struct mm_struct *mm, pte_t *ptep)
+{
+	pte_t pte = READ_ONCE(*ptep);
+
+	pte = __pte(pte_val(pte) | RANDOM_ORVALUE);
+	WRITE_ONCE(*ptep, pte);
+	pte_clear(mm, 0, ptep);
+	pte = READ_ONCE(*ptep);
+	WARN_ON(!pte_none(pte));
+}
+
+static void pmd_clear_tests(pmd_t *pmdp)
+{
+	pmd_t pmd = READ_ONCE(*pmdp);
+
+	pmd = __pmd(pmd_val(pmd) | RANDOM_ORVALUE);
+	WRITE_ONCE(*pmdp, pmd);
+	pmd_clear(pmdp);
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(!pmd_none(pmd));
+}
+
+static void pmd_populate_tests(struct mm_struct *mm, pmd_t *pmdp,
+			       pgtable_t pgtable)
+{
+	pmd_t pmd;
+
+	/*
+	 * This entry points to next level page table page.
+	 * Hence this must not qualify as pmd_bad().
+	 */
+	pmd_clear(pmdp);
+	pmd_populate(mm, pmdp, pgtable);
+	pmd = READ_ONCE(*pmdp);
+	WARN_ON(pmd_bad(pmd));
+}
+
+static struct page *alloc_mapped_page(void)
+{
+	struct page *page;
+	gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO;
+
+	page = alloc_gigantic_page_order(get_order(PUD_SIZE), gfp_mask,
+					 first_memory_node, &node_states[N_MEMORY]);
+	if (page) {
+		pud_aligned = true;
+		pmd_aligned = true;
+		return page;
+	}
+
+	page = alloc_pages(gfp_mask, get_order(PMD_SIZE));
+	if (page) {
+		pmd_aligned = true;
+		return page;
+	}
+	return alloc_page(gfp_mask);
+}
+
+static void free_mapped_page(struct page *page)
+{
+	if (pud_aligned) {
+		unsigned long pfn = page_to_pfn(page);
+
+		free_contig_range(pfn, 1ULL << get_order(PUD_SIZE));
+		return;
+	}
+
+	if (pmd_aligned) {
+		int order = get_order(PMD_SIZE);
+
+		free_pages((unsigned long)page_address(page), order);
+		return;
+	}
+	free_page((unsigned long)page_address(page));
+}
+
+static unsigned long get_random_vaddr(void)
+{
+	unsigned long random_vaddr, random_pages, total_user_pages;
+
+	total_user_pages = (TASK_SIZE - FIRST_USER_ADDRESS) / PAGE_SIZE;
+
+	random_pages = get_random_long() % total_user_pages;
+	random_vaddr = FIRST_USER_ADDRESS + random_pages * PAGE_SIZE;
+
+	WARN_ON(random_vaddr > TASK_SIZE);
+	WARN_ON(random_vaddr < FIRST_USER_ADDRESS);
+	return random_vaddr;
+}
+
+static int __init arch_pgtable_tests_init(void)
+{
+	struct mm_struct *mm;
+	struct page *page;
+	pgd_t *pgdp;
+	p4d_t *p4dp, *saved_p4dp;
+	pud_t *pudp, *saved_pudp;
+	pmd_t *pmdp, *saved_pmdp, pmd;
+	pte_t *ptep;
+	pgtable_t saved_ptep;
+	pgprot_t prot;
+	unsigned long vaddr;
+
+	prot = vm_get_page_prot(VMFLAGS);
+	vaddr = get_random_vaddr();
+	mm = mm_alloc();
+	if (!mm) {
+		pr_err("mm_struct allocation failed\n");
+		return 1;
+	}
+
+	page = alloc_mapped_page();
+	if (!page) {
+		pr_err("memory allocation failed\n");
+		return 1;
+	}
+
+	pgdp = pgd_offset(mm, vaddr);
+	p4dp = p4d_alloc(mm, pgdp, vaddr);
+	pudp = pud_alloc(mm, p4dp, vaddr);
+	pmdp = pmd_alloc(mm, pudp, vaddr);
+	ptep = pte_alloc_map(mm, pmdp, vaddr);
+
+	/*
+	 * Save all the page table page addresses as the page table
+	 * entries will be used for testing with random or garbage
+	 * values. These saved addresses will be used for freeing
+	 * page table pages.
+	 */
+	pmd = READ_ONCE(*pmdp);
+	saved_p4dp = p4d_offset(pgdp, 0UL);
+	saved_pudp = pud_offset(p4dp, 0UL);
+	saved_pmdp = pmd_offset(pudp, 0UL);
+	saved_ptep = pmd_pgtable(pmd);
+
+	pte_basic_tests(page, prot);
+	pmd_basic_tests(page, prot);
+	pud_basic_tests(page, prot);
+	p4d_basic_tests(page, prot);
+	pgd_basic_tests(page, prot);
+
+	pte_clear_tests(mm, ptep);
+	pmd_clear_tests(pmdp);
+	pud_clear_tests(pudp);
+	p4d_clear_tests(p4dp);
+	pgd_clear_tests(mm, pgdp);
+
+	pmd_populate_tests(mm, pmdp, saved_ptep);
+	pud_populate_tests(mm, pudp, saved_pmdp);
+	p4d_populate_tests(mm, p4dp, saved_pudp);
+	pgd_populate_tests(mm, pgdp, saved_p4dp);
+
+	p4d_free(mm, saved_p4dp);
+	pud_free(mm, saved_pudp);
+	pmd_free(mm, saved_pmdp);
+	pte_free(mm, saved_ptep);
+
+	mm_dec_nr_puds(mm);
+	mm_dec_nr_pmds(mm);
+	mm_dec_nr_ptes(mm);
+	__mmdrop(mm);
+
+	free_mapped_page(page);
+	return 0;
+}
+
+static void __exit arch_pgtable_tests_exit(void) { }
+
+module_init(arch_pgtable_tests_init);
+module_exit(arch_pgtable_tests_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Anshuman Khandual <anshuman.khandual@xxxxxxx>");
+MODULE_DESCRIPTION("Test architecture page table helpers");