[tip: x86/urgent] x86/boot/64: Round memory hole size up to next PMD page

From: tip-bot2 for Steve Wahl
Date: Fri Oct 11 2019 - 14:30:14 EST


The following commit has been merged into the x86/urgent branch of tip:

Commit-ID: 1869dbe87cb94dc9a218ae1d9301dea3678bd4ff
Gitweb: https://git.kernel.org/tip/1869dbe87cb94dc9a218ae1d9301dea3678bd4ff
Author: Steve Wahl <steve.wahl@xxxxxxx>
AuthorDate: Tue, 24 Sep 2019 16:04:31 -05:00
Committer: Borislav Petkov <bp@xxxxxxx>
CommitterDate: Fri, 11 Oct 2019 18:47:23 +02:00

x86/boot/64: Round memory hole size up to next PMD page

The kernel image map is created using PMD pages, which can include
some extra space beyond what is actually needed. Round the size of
the memory hole searched for up to the next PMD boundary, to be
certain that all of the space to be mapped is usable RAM and
includes no reserved areas.
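
For illustration, here is a minimal, self-contained sketch of the
rounding this patch performs. On x86_64, MIN_KERNEL_ALIGN is PMD-sized
(1UL << PMD_SHIFT, i.e. 2 MiB), so aligning the hole size to it covers
the full extent of the PMD mappings. The ALIGN_UP() macro below mirrors
the kernel's power-of-two ALIGN(), and the two sizes are made-up example
values, not taken from the patch:

#include <stdio.h>

/* Power-of-two align-up, equivalent to the kernel's ALIGN() here. */
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((unsigned long)(a) - 1))

/* On x86_64, MIN_KERNEL_ALIGN is PMD-sized: 2 MiB. */
#define PMD_SIZE	(1UL << 21)

int main(void)
{
	/* Hypothetical sizes: kernel+relocs vs. kernel+.bss/.brk. */
	unsigned long output_len        = 0x1a2b000UL;
	unsigned long kernel_total_size = 0x1c00123UL;

	/* Open-coded max(), as used for needed_size in the patch. */
	unsigned long needed = kernel_total_size > output_len ?
			       kernel_total_size : output_len;

	printf("unrounded hole: %#lx\n", needed);
	printf("PMD-rounded:    %#lx\n", ALIGN_UP(needed, PMD_SIZE));
	return 0;
}

Compiled with gcc, this prints 0x1c00123 for the unrounded hole and
0x1e00000 once rounded up to the next 2 MiB boundary.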

Signed-off-by: Steve Wahl <steve.wahl@xxxxxxx>
Signed-off-by: Borislav Petkov <bp@xxxxxxx>
Acked-by: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Acked-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Baoquan He <bhe@xxxxxxxxxx>
Cc: Brijesh Singh <brijesh.singh@xxxxxxx>
Cc: dimitri.sivanich@xxxxxxx
Cc: Feng Tang <feng.tang@xxxxxxxxx>
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Jordan Borgner <mail@xxxxxxxxxxxxxxxxx>
Cc: Juergen Gross <jgross@xxxxxxxx>
Cc: mike.travis@xxxxxxx
Cc: russ.anderson@xxxxxxx
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: x86-ml <x86@xxxxxxxxxx>
Cc: Zhenzhong Duan <zhenzhong.duan@xxxxxxxxxx>
Link: https://lkml.kernel.org/r/df4f49f05c0c27f108234eb93db5c613d09ea62e.1569358539.git.steve.wahl@xxxxxxx
---
arch/x86/boot/compressed/misc.c | 25 +++++++++++++++++++------
1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index 53ac0cb..9652d5c 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -345,6 +345,7 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
 {
 	const unsigned long kernel_total_size = VO__end - VO__text;
 	unsigned long virt_addr = LOAD_PHYSICAL_ADDR;
+	unsigned long needed_size;
 
 	/* Retain x86 boot parameters pointer passed from startup_32/64. */
 	boot_params = rmode;
@@ -379,26 +380,38 @@ asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
 	free_mem_ptr     = heap;	/* Heap */
 	free_mem_end_ptr = heap + BOOT_HEAP_SIZE;
 
+	/*
+	 * The memory hole needed for the kernel is the larger of either
+	 * the entire decompressed kernel plus relocation table, or the
+	 * entire decompressed kernel plus .bss and .brk sections.
+	 *
+	 * On X86_64, the memory is mapped with PMD pages. Round the
+	 * size up so that the full extent of PMD pages mapped is
+	 * included in the check against the valid memory table
+	 * entries. This ensures the full mapped area is usable RAM
+	 * and doesn't include any reserved areas.
+	 */
+	needed_size = max(output_len, kernel_total_size);
+#ifdef CONFIG_X86_64
+	needed_size = ALIGN(needed_size, MIN_KERNEL_ALIGN);
+#endif
+
 	/* Report initial kernel position details. */
 	debug_putaddr(input_data);
 	debug_putaddr(input_len);
 	debug_putaddr(output);
 	debug_putaddr(output_len);
 	debug_putaddr(kernel_total_size);
+	debug_putaddr(needed_size);
 
 #ifdef CONFIG_X86_64
 	/* Report address of 32-bit trampoline */
 	debug_putaddr(trampoline_32bit);
 #endif
 
-	/*
-	 * The memory hole needed for the kernel is the larger of either
-	 * the entire decompressed kernel plus relocation table, or the
-	 * entire decompressed kernel plus .bss and .brk sections.
-	 */
 	choose_random_location((unsigned long)input_data, input_len,
 				(unsigned long *)&output,
-				max(output_len, kernel_total_size),
+				needed_size,
 				&virt_addr);
 
 	/* Validate memory location choices. */
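
To see why the rounding matters, below is a hedged sketch of the kind of
"does the hole fit entirely in usable RAM" check that
choose_random_location() applies against the firmware memory map. The
struct, the helper name and all addresses are invented for the example;
they are not the kernel's actual types. Without the rounding, a candidate
position can pass the check even though the tail of its last 2 MiB PMD
mapping spills into a reserved area:

#include <stdbool.h>
#include <stdio.h>

#define PMD_SIZE	(1UL << 21)
#define ALIGN_UP(x, a)	(((x) + (a) - 1) & ~((unsigned long)(a) - 1))

/* Hypothetical usable-RAM region, e.g. one E820 "usable" entry. */
struct region {
	unsigned long start;
	unsigned long size;
};

/* True if [addr, addr + size) lies entirely inside the region. */
static bool fits_in_usable_ram(const struct region *r,
			       unsigned long addr, unsigned long size)
{
	return addr >= r->start && addr + size <= r->start + r->size;
}

int main(void)
{
	/* Usable RAM is [1 MiB, 61.5 MiB); reserved space follows it. */
	struct region ram = { .start = 0x100000UL, .size = 0x3c80000UL };
	unsigned long candidate = 0x2000000UL;	/* 32 MiB */
	unsigned long needed    = 0x1d34567UL;	/* unrounded hole size */

	printf("unrounded fits:   %d\n",
	       fits_in_usable_ram(&ram, candidate, needed));
	printf("PMD-rounded fits: %d\n",
	       fits_in_usable_ram(&ram, candidate,
				  ALIGN_UP(needed, PMD_SIZE)));
	return 0;
}

The unrounded check passes (prints 1) while the PMD-rounded one fails
(prints 0): the old code would have accepted this spot and mapped PMD
pages past the end of usable RAM, while the patched code rejects it and
keeps searching.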