[PATCH 4.9 086/104] arm64: kasan: avoid bad virt_to_pfn()

From: Greg Kroah-Hartman
Date: Fri Oct 06 2017 - 04:55:46 EST


4.9-stable review patch. If anyone has any objections, please let me know.

------------------

From: Mark Rutland <mark.rutland@xxxxxxx>


[ Upstream commit b0de0ccc8b9edd8846828e0ecdc35deacdf186b0 ]

Booting a v4.11-rc1 kernel with DEBUG_VIRTUAL and KASAN enabled produces
the following splat (trimmed for brevity):

[ 0.000000] virt_to_phys used for non-linear address: ffff200008080000 (0xffff200008080000)
[ 0.000000] WARNING: CPU: 0 PID: 0 at arch/arm64/mm/physaddr.c:14 __virt_to_phys+0x48/0x70
[ 0.000000] PC is at __virt_to_phys+0x48/0x70
[ 0.000000] LR is at __virt_to_phys+0x48/0x70
[ 0.000000] Call trace:
[ 0.000000] [<ffff2000080b1ac0>] __virt_to_phys+0x48/0x70
[ 0.000000] [<ffff20000a03b86c>] kasan_init+0x1c0/0x498
[ 0.000000] [<ffff20000a034018>] setup_arch+0x2fc/0x948
[ 0.000000] [<ffff20000a030c68>] start_kernel+0xb8/0x570
[ 0.000000] [<ffff20000a0301e8>] __primary_switched+0x6c/0x74

This is because we use virt_to_pfn() on a kernel image address when
trying to figure out its nid, so that we can allocate its shadow from
the same node.

As with other recent changes, this patch uses lm_alias() to solve this.
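
lm_alias() is defined in include/linux/mm.h as:

	#define lm_alias(x)	__va(__pa_symbol(x))

__pa_symbol() translates a kernel image address to its physical address
without the linear-map check, and __va() returns the linear-map virtual
address for that physical address, so the result is safe to pass to
virt_to_pfn().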

We could instead use NUMA_NO_NODE, as x86 does for all shadow
allocations, though we'll likely want the "real" memory shadow to be
backed from its corresponding nid anyway, so we may as well be
consistent and find the nid for the image shadow.
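
For illustration, the x86-style alternative mentioned above would look
something like this (a hypothetical sketch, not what this patch does):

	/* Let the allocator pick any node for the image shadow. */
	vmemmap_populate(kimg_shadow_start, kimg_shadow_end, NUMA_NO_NODE);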

Cc: Catalin Marinas <catalin.marinas@xxxxxxx>
Cc: Will Deacon <will.deacon@xxxxxxx>
Acked-by: Laura Abbott <labbott@xxxxxxxxxx>
Signed-off-by: Mark Rutland <mark.rutland@xxxxxxx>
Signed-off-by: Will Deacon <will.deacon@xxxxxxx>
Signed-off-by: Sasha Levin <alexander.levin@xxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
arch/arm64/mm/kasan_init.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -153,7 +153,7 @@ void __init kasan_init(void)
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
 	vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
-			 pfn_to_nid(virt_to_pfn(_text)));
+			 pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
 	/*
 	 * vmemmap_populate() has populated the shadow region that covers the