Re: [PATCH] Fix unsynchronized access to sev members through svm_register_enc_region

From: Tom Lendacky
Date: Tue Jan 26 2021 - 22:18:58 EST


On 1/26/21 12:54 PM, Peter Gonda wrote:
> sev_pin_memory assumes that callers hold the kvm->lock. This was true for
> all callers except svm_register_enc_region since it does not originate
> from svm_mem_enc_op. Also added a lockdep annotation to help prevent
> future regressions.
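
(For reference, the lockdep annotation described above is the standard way to
assert a locking requirement in kernel code; what follows is only a
trimmed-down sketch of the intent, with the function body elided -- the exact
hunks are in the patch below:)

  static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
                                      unsigned long ulen, unsigned long *n,
                                      int write)
  {
          /* Warn (when lockdep is enabled) if the caller doesn't hold kvm->lock. */
          lockdep_assert_held(&kvm->lock);
          ...
  }

  /* Callers are then expected to take the VM-wide mutex around the call: */
  mutex_lock(&kvm->lock);
  region->pages = sev_pin_memory(kvm, range->addr, range->size,
                                 &region->npages, 1);
  mutex_unlock(&kvm->lock);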

I'm not exactly sure what the problem is that you're fixing. What is the
symptom that you're seeing?

>
> Tested: Booted SEV enabled VM on host.
>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: Joerg Roedel <joro@xxxxxxxxxx>
> Cc: Tom Lendacky <thomas.lendacky@xxxxxxx>
> Cc: Brijesh Singh <brijesh.singh@xxxxxxx>
> Cc: Sean Christopherson <seanjc@xxxxxxxxxx>
> Cc: x86@xxxxxxxxxx
> Cc: kvm@xxxxxxxxxxxxxxx
> Cc: stable@xxxxxxxxxxxxxxx
> Cc: linux-kernel@xxxxxxxxxxxxxxx
> Fixes: 116a2214c5173 (KVM: SVM: Pin guest memory when SEV is active)

I can't find this commit. The Linux and KVM trees have it as:

1e80fdc09d12 ("KVM: SVM: Pin guest memory when SEV is active")

> Signed-off-by: Peter Gonda <pgonda@xxxxxxxxxx>
>
> ---
> arch/x86/kvm/svm.c | 16 +++++++++-------

This patch won't apply; it has been a few releases since svm.c was moved
into the arch/x86/kvm/svm directory, and this function now lives in
sev.c.

Thanks,
Tom

> 1 file changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index afdc5b44fe9f..9884e57f3d0f 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -1699,6 +1699,8 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
> struct page **pages;
> unsigned long first, last;
>
> + lockdep_assert_held(&kvm->lock);
> +
> if (ulen == 0 || uaddr + ulen < uaddr)
> return NULL;
>
> @@ -7228,12 +7230,19 @@ static int svm_register_enc_region(struct kvm *kvm,
> if (!region)
> return -ENOMEM;
>
> + mutex_lock(&kvm->lock);
> region->pages = sev_pin_memory(kvm, range->addr, range->size, &region->npages, 1);
> if (!region->pages) {
> ret = -ENOMEM;
> goto e_free;
> }
>
> + region->uaddr = range->addr;
> + region->size = range->size;
> +
> + list_add_tail(&region->list, &sev->regions_list);
> + mutex_unlock(&kvm->lock);
> +
> /*
> * The guest may change the memory encryption attribute from C=0 -> C=1
> * or vice versa for this memory range. Lets make sure caches are
> @@ -7242,13 +7251,6 @@ static int svm_register_enc_region(struct kvm *kvm,
> */
> sev_clflush_pages(region->pages, region->npages);
>
> - region->uaddr = range->addr;
> - region->size = range->size;
> -
> - mutex_lock(&kvm->lock);
> - list_add_tail(&region->list, &sev->regions_list);
> - mutex_unlock(&kvm->lock);
> -
> return ret;
>
> e_free:
>