[PATCH 1/2] mm: Fix memory size alignment in devm_memremap_pages_release()

From: Jan H. Schönherr
Date: Wed Jan 17 2018 - 19:07:18 EST


The functions devm_memremap_pages() and devm_memremap_pages_release() use
different ways to calculate the section-aligned amount of memory. The
release path can compute a size that is too small when the memory region
is smaller than a section but straddles a section boundary, so part of the
mapped range is never torn down.
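
For illustration, here is a minimal user-space sketch of the two
calculations (the 128 MiB section size and the start/size values are
assumptions chosen to show the mismatch; ALIGN() is redefined locally to
mirror the kernel macro):

  /* Stand-alone sketch; values are hypothetical. */
  #include <stdio.h>

  #define SECTION_SIZE (128UL << 20)  /* assumed: 128 MiB sections */
  #define ALIGN(x, a)  (((x) + (a) - 1) & ~((a) - 1))

  int main(void)
  {
          unsigned long start = 127UL << 20; /* 1 MiB below a section boundary */
          unsigned long size  = 2UL << 20;   /* small region straddling it */

          unsigned long align_start = start & ~(SECTION_SIZE - 1);

          /* devm_memremap_pages(): covers both touched sections (256 MiB) */
          unsigned long mapped = ALIGN(start + size, SECTION_SIZE) - align_start;

          /* old devm_memremap_pages_release(): only one section (128 MiB) */
          unsigned long removed = ALIGN(size, SECTION_SIZE);

          printf("mapped %lu MiB, removed %lu MiB\n",
                 mapped >> 20, removed >> 20);
          return 0;
  }

With these values the release path would undo only 128 MiB of the 256 MiB
that were mapped.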

Use the same code for both.

Fixes: 5f29a77cd957 ("mm: fix mixed zone detection in devm_memremap_pages")
Signed-off-by: Jan H. Schönherr <jschoenh@xxxxxxxxx>
---
kernel/memremap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index 403ab9c..4712ce6 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -301,7 +301,8 @@ static void devm_memremap_pages_release(struct device *dev, void *data)

/* pages are dead and unused, undo the arch mapping */
align_start = res->start & ~(SECTION_SIZE - 1);
- align_size = ALIGN(resource_size(res), SECTION_SIZE);
+ align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
+ - align_start;

mem_hotplug_begin();
arch_remove_memory(align_start, align_size);
--
2.9.3.1.gcba166c.dirty