Re: [PATCH] selftests/mm: pagemap_scan ioctl: add PFN ZERO test cases

From: David Hildenbrand
Date: Tue Jul 01 2025 - 10:52:16 EST


On 30.06.25 12:24, Muhammad Usama Anjum wrote:
Add test cases to test the correctness of PFN ZERO flag of pagemap_scan
ioctl. Test with normal pages backed memory and huge pages backed
memory.

Just to verify: would this trigger on kernels before my fix?


Cc: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Muhammad Usama Anjum <usama.anjum@xxxxxxxxxxxxx>
---
The bug has been fixed [1].

[1] https://lore.kernel.org/all/20250617143532.2375383-1-david@xxxxxxxxxx
---
tools/testing/selftests/mm/pagemap_ioctl.c | 57 +++++++++++++++++++++-
1 file changed, 56 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/mm/pagemap_ioctl.c b/tools/testing/selftests/mm/pagemap_ioctl.c
index 57b4bba2b45f3..6138de0087edf 100644
--- a/tools/testing/selftests/mm/pagemap_ioctl.c
+++ b/tools/testing/selftests/mm/pagemap_ioctl.c
@@ -1,4 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
+
#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
@@ -1480,6 +1481,57 @@ static void transact_test(int page_size)
extra_thread_faults);
}
+void zeropfn_tests(void)
+{
+	unsigned long long mem_size;
+	struct page_region vec;
+	int i, ret;
+	char *mem;
+
+	/* Test with page backed memory */

What is "page backed memory" ? :)

+	mem_size = 10 * page_size;
+	mem = mmap(NULL, mem_size, PROT_READ, MAP_PRIVATE | MAP_ANON, -1, 0);
+	if (mem == MAP_FAILED)
+		ksft_exit_fail_msg("error nomem\n");
+
+	/* Touch each page to ensure it's mapped */
+	for (i = 0; i < mem_size; i += page_size)
+		(void)((volatile char *)mem)[i];
+
+	ret = pagemap_ioctl(mem, mem_size, &vec, 1, 0,
+			    (mem_size / page_size), PAGE_IS_PFNZERO, 0, 0, PAGE_IS_PFNZERO);
+	if (ret < 0)
+		ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+
+	ksft_test_result(ret == 1 && LEN(vec) == (mem_size / page_size),
+			 "%s all pages must have PFNZERO set\n", __func__);
+
+	munmap(mem, mem_size);
+
+	/* Test with huge page */
+	mem_size = 10 * hpage_size;
+	mem = memalign(hpage_size, mem_size);
+	if (!mem)
+		ksft_exit_fail_msg("error nomem\n");
+
+	ret = madvise(mem, mem_size, MADV_HUGEPAGE);
+	if (ret)
+		ksft_exit_fail_msg("madvise failed %d %s\n", errno, strerror(errno));

Might fail on older kernels, so we usually treat this as a skip.

+
+	for (i = 0; i < mem_size; i += hpage_size)
+		(void)((volatile char *)mem)[i];
+
+	ret = pagemap_ioctl(mem, mem_size, &vec, 1, 0,
+			    (mem_size / page_size), PAGE_IS_PFNZERO, 0, 0, PAGE_IS_PFNZERO);
+	if (ret < 0)
+		ksft_exit_fail_msg("error %d %d %s\n", ret, errno, strerror(errno));
+
+	ksft_test_result(ret == 1 && LEN(vec) == (mem_size / page_size),
+			 "%s all huge pages must have PFNZERO set\n", __func__);

Couldn't this fail if /sys/kernel/mm/transparent_hugepage/use_zero_page is set to 0, or if mmap() gave us a suboptimally aligned range?

You'd have to read each and every page, so you'd get the ordinary shared zeropage in these configs instead, without making the test too complicated.

+
+	free(mem);


Shouldn't this be an munmap() ?

+}
+
int main(int __attribute__((unused)) argc, char *argv[])
{
int shmid, buf_size, fd, i, ret;
@@ -1494,7 +1546,7 @@ int main(int __attribute__((unused)) argc, char *argv[])
 	if (init_uffd())
 		ksft_exit_pass();
-	ksft_set_plan(115);
+	ksft_set_plan(117);

We should probably look into converting this test to kselftest_harness.

--
Cheers,

David / dhildenb