Re: [PATCH v4] arm64: mm: fix linear mem mapping access performance degradation

From: Catalin Marinas
Date: Tue Jul 05 2022 - 11:34:23 EST


On Tue, Jul 05, 2022 at 06:02:02PM +0300, Mike Rapoport wrote:
> +void __init remap_crashkernel(void)
> +{
> +#ifdef CONFIG_KEXEC_CORE
> +        phys_addr_t start, end, size;
> +        phys_addr_t aligned_start, aligned_end;
> +
> +        if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
> +                return;
> +
> +        if (!crashk_res.end)
> +                return;
> +
> +        start = crashk_res.start & PAGE_MASK;
> +        end = PAGE_ALIGN(crashk_res.end);
> +
> +        aligned_start = ALIGN_DOWN(crashk_res.start, PUD_SIZE);
> +        aligned_end = ALIGN(end, PUD_SIZE);
> +
> +        /* Clear PUDs containing crash kernel memory */
> +        unmap_hotplug_range(__phys_to_virt(aligned_start),
> +                            __phys_to_virt(aligned_end), false, NULL);
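
For concreteness: with 4K pages PUD_SIZE is 1GiB, so a 256MiB crash
kernel reserved at, say, physical 0x60000000 makes this unmap the whole
[0x40000000, 0x80000000) slice of the linear map, three quarters of
which is not crash kernel memory at all (addresses invented for
illustration).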

What I don't understand is what happens if there's valid kernel data
between aligned_start and crashk_res.start (or at the other end of the
range): it gets unmapped here together with the crash kernel region, so
any access to it before the range is mapped back would fault.
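
For reference, a sketch of what I'd expect the remap half of the hunk
(trimmed from the quote above) to look like: the head and tail go back
as block mappings and only the crash kernel itself is forced down to
base pages. __create_pgd_mapping() and early_pgtable_alloc are my
assumption about the mechanism, not necessarily what the patch does:

        /* Head of the PUD range: may hold live kernel data */
        __create_pgd_mapping(swapper_pg_dir, aligned_start,
                             __phys_to_virt(aligned_start),
                             start - aligned_start, PAGE_KERNEL,
                             early_pgtable_alloc, 0);

        /* Crash kernel itself: base pages so it can be unmapped later */
        __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
                             end - start, PAGE_KERNEL, early_pgtable_alloc,
                             NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);

        /* Tail of the PUD range: may hold live kernel data */
        __create_pgd_mapping(swapper_pg_dir, end, __phys_to_virt(end),
                             aligned_end - end, PAGE_KERNEL,
                             early_pgtable_alloc, 0);

Even with that, there is a window between unmap_hotplug_range() and the
remap where the head and tail are inaccessible, which is exactly the
case I'm asking about.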

--
Catalin