Re: [PATCH v4 4/8] KVM: arm64: Add guard pages for pKVM (protected nVHE) hypervisor stack

From: Marc Zyngier
Date: Wed Mar 02 2022 - 02:58:18 EST


On Fri, 25 Feb 2022 03:34:49 +0000,
Kalesh Singh <kaleshsingh@xxxxxxxxxx> wrote:
>
> Maps the stack pages in the flexible private VA range and allocates
> guard pages below the stack as unbacked VA space. The stack is aligned
> to twice its size to aid overflow detection (implemented in a subsequent
> patch in the series).
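
For context, the reason the double-size alignment helps, given the
layout this patch creates (an unbacked guard page immediately below a
stack page based at a 2 * PAGE_SIZE aligned VA), is roughly the
following. The actual check only appears in a later patch of the
series, and the helper below is made up purely for illustration:

	/*
	 * Every address inside the stack page has the PAGE_SHIFT bit
	 * clear, while every address inside the guard page below it
	 * has that bit set, so a single bit test on SP is enough to
	 * tell whether the stack has overflowed.
	 */
	static inline bool hyp_sp_in_guard_page(unsigned long sp)
	{
		return sp & PAGE_SIZE;
	}
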
>
> Signed-off-by: Kalesh Singh <kaleshsingh@xxxxxxxxxx>
> ---
>
> Changes in v4:
> - Replace IS_ERR_OR_NULL check with IS_ERR check now that
> pkvm_alloc_private_va_range() returns an error for null
> pointer, per Fuad
>
> Changes in v3:
> - Handle null ptr in IS_ERR_OR_NULL checks, per Mark
>
> arch/arm64/kvm/hyp/nvhe/setup.c | 25 +++++++++++++++++++++----
> 1 file changed, 21 insertions(+), 4 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
> index 27af337f9fea..1b69a25c1861 100644
> --- a/arch/arm64/kvm/hyp/nvhe/setup.c
> +++ b/arch/arm64/kvm/hyp/nvhe/setup.c
> @@ -105,11 +105,28 @@ static int recreate_hyp_mappings(phys_addr_t phys, unsigned long size,
>  		if (ret)
>  			return ret;
>
> -		end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va;
> +		/*
> +		 * Private mappings are allocated upwards from __io_map_base
> +		 * so allocate the guard page first then the stack.
> +		 */
> +		start = (void *)pkvm_alloc_private_va_range(PAGE_SIZE, PAGE_SIZE);
> +		if (IS_ERR(start))
> +			return PTR_ERR(start);
> +
> +		/*
> +		 * The stack is aligned to twice its size to facilitate overflow
> +		 * detection.
> +		 */
> +		end = (void *)per_cpu_ptr(&kvm_init_params, i)->stack_pa;
>  		start = end - PAGE_SIZE;
> -		ret = pkvm_create_mappings(start, end, PAGE_HYP);
> -		if (ret)
> -			return ret;
> +		start = (void *)__pkvm_create_private_mapping((phys_addr_t)start,
> +					PAGE_SIZE, PAGE_SIZE * 2, PAGE_HYP);

Similar comments as for the previous patch: I'd rather you treat each
stack as a two-page VA range, populated by a single page. It would be
a lot clearer and less fragile.
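
Something along these lines (only a sketch, reusing the
pkvm_alloc_private_va_range() signature from this version of the
series; pkvm_back_private_va() is a name I'm making up for whatever
ends up mapping a physical page at a caller-chosen private VA):

	start = (void *)pkvm_alloc_private_va_range(PAGE_SIZE * 2,
						    PAGE_SIZE * 2);
	if (IS_ERR(start))
		return PTR_ERR(start);
	end = start + PAGE_SIZE * 2;

	/*
	 * Back only the top page of the range; the bottom page stays
	 * unbacked and acts as the guard page. pkvm_back_private_va()
	 * is hypothetical -- some helper that maps @phys at a
	 * caller-chosen private VA would have to be added.
	 */
	ret = pkvm_back_private_va((unsigned long)start + PAGE_SIZE,
				   per_cpu_ptr(&kvm_init_params, i)->stack_pa,
				   PAGE_SIZE, PAGE_HYP);
	if (ret)
		return ret;

	/*
	 * SP starts at the top of the range and can only ever grow
	 * down into the mapped page before hitting the guard.
	 */
	per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va = (unsigned long)end;

That keeps the guard page and the stack in a single two-page
allocation per CPU, and nothing relies on what the VA allocator
happens to hand out just below the stack mapping.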

> +		if (IS_ERR(start))
> +			return PTR_ERR(start);
> +		end = start + PAGE_SIZE;
> +
> +		/* Update stack_hyp_va to end of the stack's private VA range */
> +		per_cpu_ptr(&kvm_init_params, i)->stack_hyp_va = (unsigned long) end;
>  	}
>
>  	/*

Thanks,

M.

--
Without deviation from the norm, progress is not possible.