[PATCH 5.0 21/89] arm64: mm: Ensure tail of unaligned initrd is reserved

From: Greg Kroah-Hartman
Date: Tue Apr 30 2019 - 07:56:47 EST


From: Bjorn Andersson <bjorn.andersson@xxxxxxxxxx>

commit d4d18e3ec6091843f607e8929a56723e28f393a6 upstream.

In the event that the start address of the initrd is not page aligned
but its size is, base + size will not cover the entire initrd image,
and the kernel may corrupt the tail of the image.
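
As a concrete illustration (hypothetical numbers, assuming 4 KiB
pages), take phys_initrd_start = 0x80200800 and phys_initrd_size =
0x3000. Then base = 0x80200000 and PAGE_ALIGN(0x3000) = 0x3000, so the
reservation ends at 0x80203000 while the initrd itself ends at
0x80203800: the final 0x800 bytes land in an unreserved page.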

By aligning the end of the initrd to a page boundary and then
subtracting the page-aligned start address (base), the memblock
reservation covers all pages that contain the initrd.
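
The two calculations can be compared with a minimal userspace sketch;
the macros below are simplified stand-ins for the kernel's, and the
addresses reuse the hypothetical numbers above:

#include <stdio.h>

#define PAGE_SIZE	0x1000ULL
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & PAGE_MASK)

int main(void)
{
	/* Hypothetical unaligned initrd: starts mid-page, page-aligned size. */
	unsigned long long phys_initrd_start = 0x80200800ULL;
	unsigned long long phys_initrd_size  = 0x3000ULL;

	unsigned long long base = phys_initrd_start & PAGE_MASK;

	/* Old calculation: aligns the size alone; the tail page is missed. */
	unsigned long long old_size = PAGE_ALIGN(phys_initrd_size);

	/* New calculation: aligns the end address, then subtracts base. */
	unsigned long long new_size =
		PAGE_ALIGN(phys_initrd_start + phys_initrd_size) - base;

	printf("initrd end:      %#llx\n", phys_initrd_start + phys_initrd_size);
	printf("old reservation: %#llx - %#llx\n", base, base + old_size);
	printf("new reservation: %#llx - %#llx\n", base, base + new_size);
	return 0;
}

With these inputs the old reservation ends at 0x80203000, short of the
initrd end at 0x80203800, while the new one ends at 0x80204000 and
covers the whole image.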

Fixes: c756c592e442 ("arm64: Utilize phys_initrd_start/phys_initrd_size")
Cc: stable@xxxxxxxxxxxxxxx
Acked-by: Will Deacon <will.deacon@xxxxxxx>
Signed-off-by: Bjorn Andersson <bjorn.andersson@xxxxxxxxxx>
Signed-off-by: Catalin Marinas <catalin.marinas@xxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
arch/arm64/mm/init.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -406,7 +406,7 @@ void __init arm64_memblock_init(void)
* Otherwise, this is a no-op
*/
u64 base = phys_initrd_start & PAGE_MASK;
- u64 size = PAGE_ALIGN(phys_initrd_size);
+ u64 size = PAGE_ALIGN(phys_initrd_start + phys_initrd_size) - base;

/*
* We can only add back the initrd memory if we don't end up