Re: [PATCH v3 20/28] KVM: x86/mmu: Allow yielding when zapping GFNs for defunct TDP MMU root

From: Sean Christopherson
Date: Tue Mar 01 2022 - 14:43:29 EST


On Tue, Mar 01, 2022, Paolo Bonzini wrote:
> On 2/26/22 01:15, Sean Christopherson wrote:
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 3031b42c27a6..b838cfa984ad 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -91,21 +91,66 @@ void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
> > WARN_ON(!root->tdp_mmu_page);
> > - spin_lock(&kvm->arch.tdp_mmu_pages_lock);
> > - list_del_rcu(&root->link);
> > - spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
> > + /*
> > + * Ensure root->role.invalid is read after the refcount reaches zero to
> > + * avoid zapping the root multiple times, e.g. if a different task
> > + * acquires a reference (after the root was marked invalid) and puts
> > + * the last reference, all while holding mmu_lock for read. Pairs
> > + * with the smp_mb__before_atomic() below.
> > + */
> > + smp_mb__after_atomic();
> > +
> > + /*
> > + * Free the root if it's already invalid. Invalid roots must be zapped
> > + * before their last reference is put, i.e. there's no work to be done,
> > + * and all roots must be invalidated (see below) before they're freed.
> > + * Re-zapping invalid roots would put KVM into an infinite loop (again,
> > + * see below).
> > + */
> > + if (root->role.invalid) {
> > + spin_lock(&kvm->arch.tdp_mmu_pages_lock);
> > + list_del_rcu(&root->link);
> > + spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
> > +
> > + call_rcu(&root->rcu_head, tdp_mmu_free_sp_rcu_callback);
> > + return;
> > + }
> > +
> > + /*
> > + * Invalidate the root to prevent it from being reused by a vCPU, and
> > + * so that KVM doesn't re-zap the root when its last reference is put
> > + * again (see above).
> > + */
> > + root->role.invalid = true;
> > +
> > + /*
> > + * Ensure role.invalid is visible if a concurrent reader acquires a
> > + * reference after the root's refcount is reset. Pairs with the
> > + * smp_mb__after_atomic() above.
> > + */
> > + smp_mb__before_atomic();
>
> I have reviewed the series and I only have very minor comments... but this
> part is beyond me. The lavish comments don't explain what is an
> optimization and what is a requirement,

Ah, they're all requirements, but the role.invalid check also optimizes the case where
a root was marked invalid before its last reference was ever put.

What I really meant by "zapping" was the entire sequence of restoring the refcount
to '1', zapping the root, and recursively re-dropping that ref. Avoiding that "zap"
is a requirement; otherwise KVM would get stuck in an infinite loop.

> and after spending quite some time I wonder if all this should just be
>
> if (refcount_dec_not_one(&root->tdp_mmu_root_count))
> return;
>
> if (!xchg(&root->role.invalid, true)) {

The refcount being '1' means there's another task currently using the root; marking
the root invalid will mean checks on the root's validity are non-deterministic for
that other task.

> tdp_mmu_zap_root(kvm, root, shared);
>
> /*
> * Do not assume the refcount is still 1: because
> * tdp_mmu_zap_root can yield, a different task
> * might have grabbed a reference to this root.
> */
> if (refcount_dec_not_one(&root->tdp_mmu_root_count))

This is wrong: _this_ task can't drop a reference taken by the other task.

> return;
> }
>
> /*
> * The root is invalid, and its reference count has reached
> * zero. It must have been zapped either in the "if" above or
> * by someone else, and we're definitely the last thread to see
> * it apart from RCU-protected page table walks.
> */
> refcount_set(&root->tdp_mmu_root_count, 0);

Not sure what you intended here; KVM should never force a refcount to '0'.

> spin_lock(&kvm->arch.tdp_mmu_pages_lock);
> list_del_rcu(&root->link);
> spin_unlock(&kvm->arch.tdp_mmu_pages_lock);
>
> call_rcu(&root->rcu_head, tdp_mmu_free_sp_rcu_callback);
>
> (Yay for xchg's implicit memory barriers)

xchg() is a very good idea. The smp_mb_*() stuff was carried over from the previous
version where this sequence set another flag in addition to role.invalid.
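
For reference, the other half of that pairing is a task that grabs a reference while
the zap is yielding and ends up putting the last reference; roughly (a sketch, assuming
the reader side still acquires the root via refcount_inc_not_zero()):

	/* Other task: grab a reference while the zap is yielding... */
	if (!refcount_inc_not_zero(&root->tdp_mmu_root_count))
		return;

	/*
	 * ...and later drop what turns out to be the last reference.  The
	 * ordering (smp_mb__*() before, xchg() now) ensures that once the
	 * refcount hits zero here, role.invalid is observed as true and the
	 * root is freed instead of being zapped again.
	 */
	kvm_tdp_mmu_put_root(kvm, root, shared);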

Is this less funky (untested)?

/*
* Invalidate the root to prevent it from being reused by a vCPU while
* the root is being zapped, i.e. to allow yielding while zapping the
* root (see below).
*
* Free the root if it's already invalid. Invalid roots must be zapped
* before their last reference is put, i.e. there's no work to be done,
* and all roots must be invalidated before they're freed (this code).
* Re-zapping invalid roots would put KVM into an infinite loop.
*
* Note, xchg() provides an implicit barrier to ensure role.invalid is
* visible if a concurrent reader acquires a reference after the root's
* refcount is reset.
*/
if (xchg(&root->role.invalid, true)) {
	spin_lock(&kvm->arch.tdp_mmu_pages_lock);
	list_del_rcu(&root->link);
	spin_unlock(&kvm->arch.tdp_mmu_pages_lock);

	call_rcu(&root->rcu_head, tdp_mmu_free_sp_rcu_callback);
	return;
}
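
For completeness, folding in the rest of the flow, the whole thing would look something
like this (sketch only; the refcount_set()/tdp_mmu_zap_root()/recursive put at the end
is paraphrased from the rest of the patch, not copy-pasted):

void kvm_tdp_mmu_put_root(struct kvm *kvm, struct kvm_mmu_page *root,
			  bool shared)
{
	if (!refcount_dec_and_test(&root->tdp_mmu_root_count))
		return;

	WARN_ON(!root->tdp_mmu_page);

	/* (comment block from above elided) */
	if (xchg(&root->role.invalid, true)) {
		spin_lock(&kvm->arch.tdp_mmu_pages_lock);
		list_del_rcu(&root->link);
		spin_unlock(&kvm->arch.tdp_mmu_pages_lock);

		call_rcu(&root->rcu_head, tdp_mmu_free_sp_rcu_callback);
		return;
	}

	/*
	 * Restore a reference so the root stays alive (and the zap can
	 * yield), zap the root, then re-drop the reference; the recursive
	 * put sees role.invalid and takes the free path above.
	 */
	refcount_set(&root->tdp_mmu_root_count, 1);
	tdp_mmu_zap_root(kvm, root, shared);
	kvm_tdp_mmu_put_root(kvm, root, shared);
}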