Re: [PATCH] KVM: arm64: Fix unaligned addr case in mmu walking

From: Marc Zyngier
Date: Wed Mar 03 2021 - 10:28:39 EST


Hi Jia,

On Wed, 03 Mar 2021 02:42:25 +0000,
Jia He <justin.he@xxxxxxx> wrote:
>
> If the start addr is not aligned with the granule size of that level,
> the loop step size should be adjusted to the granule boundary instead
> of a simple kvm_granule_size(level) increment. Otherwise, some mmu
> entries might never be walked.
> E.g. assume the unmap range [data->addr, data->end] is
> [0xff00ab2000, 0xff00cb2000], walked at level 2 and NOT a block
> mapping.

When does this occur? Upgrade from page mappings to block? Swap out?

> The first part of that range falls in the pmd entry covering
> [0xff00ab2000, 0xff00c00000]. That pmd value is 0x83fbd2c1002 (not a
> valid entry). In this case, data->addr should be advanced to
> 0xff00c00000 instead of 0xff00cb2000.

Let me see if I understand this. Assuming 4k pages, the region
described above spans *two* 2M entries:

(a) ff00ab2000-ff00c00000, part of ff00a00000-ff00c00000
(b) ff00c00000-ff00cb2000, part of ff00c00000-ff00e00000

(a) has no valid mapping, but (b) does. Because we fail to correctly
align on a block boundary when skipping (a), we also skip (b), which
is then left mapped.
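
To make the arithmetic concrete, here is a throwaway userspace sketch
(not kernel code; ALIGN_DOWN is open-coded the way the kernel defines
it for power-of-two sizes):

#include <stdio.h>
#include <stdint.h>

#define SZ_2M			0x200000ULL	/* level-2 granule with 4k pages */
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

int main(void)
{
	uint64_t addr = 0xff00ab2000ULL;

	/* current code: step a full granule from the unaligned addr */
	printf("broken: %#llx\n", (unsigned long long)(addr + SZ_2M));
	/* -> 0xff00cb2000 == data->end, so (b) is never visited */

	/* patched: snap down to the block boundary, then step */
	printf("fixed:  %#llx\n",
	       (unsigned long long)(ALIGN_DOWN(addr, SZ_2M) + SZ_2M));
	/* -> 0xff00c00000, the start of (b) */

	return 0;
}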

Did I get it right? If so, yes, this is... annoying.

Understanding the circumstances in which this triggers would be most
interesting. The current code seems to assume that we get ranges
aligned to mapping boundaries, but I seem to remember that the old
code did use the stage2_*_addr_end() helpers to deal with this case.

Will: I don't think things have changed in that respect, right?
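
For reference, the old walkers clamped each step to the next boundary
with helpers along these lines (a from-memory sketch modelled on
pgd_addr_end(), not the exact stage2_*_addr_end() definitions):

#define stage2_addr_end(addr, end, size)				\
({									\
	u64 __boundary = ((addr) + (size)) & ~((size) - 1);		\
	/* the -1 comparison copes with __boundary wrapping to 0 */	\
	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
})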

>
> Without this fix, a userspace segmentation fault can easily be
> triggered by running simple gVisor runsc cases on an Ampere Altra
> server:
> docker run --runtime=runsc -it --rm ubuntu /bin/bash
>
> In container:
> for i in `seq 1 100`;do ls;done

The workload on its own isn't that interesting. What I'd like to
understand is what happens on the host during that time.

>
> Reported-by: Howard Zhang <Howard.Zhang@xxxxxxx>
> Signed-off-by: Jia He <justin.he@xxxxxxx>
> ---
> arch/arm64/kvm/hyp/pgtable.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index bdf8e55ed308..4d99d07c610c 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -225,6 +225,7 @@ static inline int __kvm_pgtable_visit(struct kvm_pgtable_walk_data *data,
>  		goto out;
>
>  	if (!table) {
> +		data->addr = ALIGN_DOWN(data->addr, kvm_granule_size(level));
>  		data->addr += kvm_granule_size(level);
>  		goto out;
>  	}
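
As a sanity check of the stepping change, a quick simulation of the
"invalid entry, skip it" step over the range above (again a userspace
sketch, not the real walker):

#include <stdio.h>
#include <stdint.h>

#define GRANULE			0x200000ULL
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

/* toy model of the walker's step over an invalid entry */
static uint64_t step(uint64_t addr, int patched)
{
	if (patched)
		addr = ALIGN_DOWN(addr, GRANULE);
	return addr + GRANULE;
}

int main(void)
{
	for (int patched = 0; patched <= 1; patched++) {
		uint64_t addr = 0xff00ab2000ULL, end = 0xff00cb2000ULL;
		int visited = 0;

		while (addr < end) {
			visited++;
			addr = step(addr, patched);
		}
		/* broken: 1 entry visited; patched: 2 entries visited */
		printf("%s: %d pmd entries visited\n",
		       patched ? "patched" : "broken ", visited);
	}
	return 0;
}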

It otherwise looks good to me. Quentin, Will: unless you object to
this, I plan to take it in the next round of fixes with

Fixes: b1e57de62cfb ("KVM: arm64: Add stand-alone page-table walker infrastructure")
Cc: stable@xxxxxxxxxxxxxxx

Thanks,

M.

--
Without deviation from the norm, progress is not possible.