Re: [PATCH v3 2/3] KVM: X86: Implement PV sched yield hypercall

From: Wanpeng Li
Date: Tue Jun 11 2019 - 04:50:59 EST


On Mon, 10 Jun 2019 at 22:17, Radim Krčmář <rkrcmar@xxxxxxxxxx> wrote:
>
> 2019-05-30 09:05+0800, Wanpeng Li:
> > From: Wanpeng Li <wanpengli@xxxxxxxxxxx>
> >
> > The target vCPUs are in a runnable state after vcpu_kick and are
> > suitable as yield targets. This patch implements the sched yield
> > hypercall.
> >
> > A 17% performance improvement in the ebizzy benchmark can be observed
> > in an over-subscribed environment (w/ kvm-pv-tlb disabled, testing the
> > TLB-flush call-function IPI-many path, since call-function IPIs are
> > not easy to trigger from a userspace workload).
> >
> > Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> > Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
> > Cc: Liran Alon <liran.alon@xxxxxxxxxx>
> > Signed-off-by: Wanpeng Li <wanpengli@xxxxxxxxxxx>
> > ---
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > @@ -7172,6 +7172,28 @@ void kvm_vcpu_deactivate_apicv(struct kvm_vcpu *vcpu)
> > kvm_x86_ops->refresh_apicv_exec_ctrl(vcpu);
> > }
> >
> > +static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
> > +{
> > +	struct kvm_vcpu *target = NULL;
> > +	struct kvm_apic_map *map = NULL;
> > +
> > +	rcu_read_lock();
> > +	map = rcu_dereference(kvm->arch.apic_map);
> > +
> > +	if (unlikely(!map) || dest_id > map->max_apic_id)
> > +		goto out;
> > +
> > +	if (map->phys_map[dest_id]->vcpu) {
>
> This should check for map->phys_map[dest_id].

Yeah, I made a mistake here; the entry itself needs a NULL check before
it is dereferenced.
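
Something along these lines should do it (untested sketch):

	if (map->phys_map[dest_id] && map->phys_map[dest_id]->vcpu)
		target = map->phys_map[dest_id]->vcpu;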

>
> > +		target = map->phys_map[dest_id]->vcpu;
> > +		rcu_read_unlock();
> > +		kvm_vcpu_yield_to(target);
> > +	}
> > +
> > +out:
> > +	if (!target)
> > +		rcu_read_unlock();
>
> Also, I find the following logic clearer
>
> {
> 	struct kvm_vcpu *target = NULL;
> 	struct kvm_apic_map *map;
>
> 	rcu_read_lock();
> 	map = rcu_dereference(kvm->arch.apic_map);
>
> 	if (likely(map) && dest_id <= map->max_apic_id && map->phys_map[dest_id])
> 		target = map->phys_map[dest_id]->vcpu;
>
> 	rcu_read_unlock();
>
> 	if (target)
> 		kvm_vcpu_yield_to(target);
> }

Much better, thanks.
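
For v4 I intend to fold your version in, roughly as below (just a sketch,
assuming the KVM_HC_SCHED_YIELD hypercall number defined elsewhere in this
series for the dispatch in kvm_emulate_hypercall()):

static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
{
	struct kvm_vcpu *target = NULL;
	struct kvm_apic_map *map;

	rcu_read_lock();
	map = rcu_dereference(kvm->arch.apic_map);

	/* Only yield if the destination APIC ID maps to a known vCPU. */
	if (likely(map) && dest_id <= map->max_apic_id && map->phys_map[dest_id])
		target = map->phys_map[dest_id]->vcpu;

	rcu_read_unlock();

	/* Yield outside the RCU read-side critical section. */
	if (target)
		kvm_vcpu_yield_to(target);
}

and in kvm_emulate_hypercall():

	case KVM_HC_SCHED_YIELD:
		kvm_sched_yield(vcpu->kvm, a0);
		ret = 0;
		break;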

Regards,
Wanpeng Li