[PATCH v1 2/2] mm/cma.c: Delete kmemleak objects when freeing CMA areas to buddy at boot

From: Isaac J. Manjarres
Date: Mon Jan 09 2023 - 17:16:50 EST

Every CMA region is now tracked by kmemleak by the time
cma_activate_area() is invoked, and cma_activate_area() is called
once for each CMA region, so invoke kmemleak_free_part_phys() there
to inform kmemleak that the region is about to be freed to the buddy
allocator. Doing so also removes the need to call
kmemleak_ignore_phys() when the global CMA region is being created,
since the kmemleak object for it will be deleted during activation
anyway.
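
In terms of the kmemleak physical-address API, the lifecycle this
patch establishes looks roughly like the sketch below. This is
illustrative only: the registration itself happens when the region
is dynamically allocated or statically reserved (not in this patch),
the three-argument kmemleak_alloc_phys() assumes a v5.19+ kernel,
and base/size stand in for the region's bounds:

  /* When the CMA region is carved out of memblock, the range is
   * registered with kmemleak (gfp 0, as memblock passes):
   */
  kmemleak_alloc_phys(base, size, 0);

  /* This patch: delete that object in cma_activate_area(), before
   * the pages are handed to the buddy allocator:
   */
  kmemleak_free_part_phys(base, size);

  /* kmemleak_ignore_phys(base) at declaration time is no longer
   * needed: the object it would have ignored gets deleted anyway.
   */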

This resolves a crash seen when kmemleak and CONFIG_DEBUG_PAGEALLOC
are both enabled: CONFIG_DEBUG_PAGEALLOC unmaps the CMA region from
the kernel's address space when its pages are freed to buddy.
Without this patch, kmemleak still attempts to scan the CMA regions
even though they are unmapped, which leads to a page fault.
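
Concretely, the failing sequence before this patch looks roughly
like the call chains below (a sketch reconstructed from the
description above, not an actual backtrace; scan_object() and
scan_block() are the kmemleak internals that walk tracked objects):

  cma_init_reserved_areas()
    cma_activate_area()
      init_cma_reserved_pageblock()
        __free_pages()        /* DEBUG_PAGEALLOC unmaps the pages */

  ... later, from the kmemleak scan thread ...

  kmemleak_scan()
    scan_object()             /* object still covers the CMA range */
      scan_block()            /* reads an unmapped address -> fault */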

Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Isaac J. Manjarres <isaacmanjarres@xxxxxxxxxx>
---
mm/cma.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 674b7fdd563e..dd25b095d9ca 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -102,6 +102,13 @@ static void __init cma_activate_area(struct cma *cma)
 	if (!cma->bitmap)
 		goto out_error;
 
+	/*
+	 * The CMA region was marked as allocated by kmemleak when it was
+	 * either dynamically allocated or statically reserved. In any case,
+	 * inform kmemleak that the region is about to be freed to buddy.
+	 */
+	kmemleak_free_part_phys(cma_get_base(cma), cma_get_size(cma));
+
 	/*
 	 * alloc_contig_range() requires the pfn range specified to be in the
 	 * same zone. Simplify by forcing the entire CMA resv range to be in the
@@ -361,11 +368,6 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 			}
 		}
 
-		/*
-		 * kmemleak scans/reads tracked objects for pointers to other
-		 * objects but this address isn't mapped and accessible
-		 */
-		kmemleak_ignore_phys(addr);
 		base = addr;
 	}
 
--
2.39.0.314.g84b9a713c41-goog