Re: [PATCH] vmap(): don't allow invalid pages

From: Robin Murphy
Date: Wed Jan 19 2022 - 08:28:24 EST


On 2022-01-18 23:52, Yury Norov wrote:
vmap() takes struct page *pages as one of its arguments, and the user may provide
an invalid pointer, which would lead to a DABT at address translation later.

Currently, the kernel checks the pages against NULL. In my case, however, the
address was not NULL, and was big enough so that the hardware generated
Address Size Abort on arm64.

Interestingly, this abort happens even if copy_from_kernel_nofault() is
used, which is quite inconvenient for debugging purposes.

This patch adds a pfn_valid() check into the vmap() path, so that an invalid
mapping will not be created.

RFC: https://lkml.org/lkml/2022/1/18/815
v1: use pfn_valid() instead of adding an arch-specific
arch_vmap_page_valid(). Thanks to Matthew Wilcox for the hint.

Signed-off-by: Yury Norov <yury.norov@xxxxxxxxx>
---
mm/vmalloc.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d2a00ad4e1dd..a4134ee56b10 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -477,6 +477,8 @@ static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 			return -EBUSY;
 		if (WARN_ON(!page))
 			return -ENOMEM;
+		if (WARN_ON(!pfn_valid(page_to_pfn(page))))

Is page_to_pfn() guaranteed to work without blowing up if the page pointer is invalid in the first place? Looking at the CONFIG_SPARSEMEM case, I'm not sure that's true...

Robin.

+			return -EINVAL;
 		set_pte_at(&init_mm, addr, pte, mk_pte(page, prot));
 		(*nr)++;
 	} while (pte++, addr += PAGE_SIZE, addr != end);