Re: [PATCH V2 6/8] kvm: x86/mmu: Remove FNAME(invlpg)

From: Sean Christopherson
Date: Thu Feb 09 2023 - 20:11:12 EST


On Tue, Feb 07, 2023, Lai Jiangshan wrote:
> Use FNAME(sync_spte) to share the code, which has a slight semantic
> change: a clean vTLB entry is kept.

...

> +static void __kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
> +                                     gva_t gva, hpa_t root_hpa)
> +{
> +        struct kvm_shadow_walk_iterator iterator;
> +
> +        vcpu_clear_mmio_info(vcpu, gva);
> +
> +        write_lock(&vcpu->kvm->mmu_lock);
> +        for_each_shadow_entry_using_root(vcpu, root_hpa, gva, iterator) {
> +                struct kvm_mmu_page *sp = sptep_to_sp(iterator.sptep);
> +
> +                if (sp->unsync && *iterator.sptep) {

Please make the !0 change in a separate patch. It took me a while to connect the
dots, and to understand what I suspect is a major motivation: sync_spte()
already has this check, i.e. the check happens regardless, so the caller might
as well hoist it and avoid the indirect branch.
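
To connect the dots explicitly for anyone following along, this is roughly the
shape I'm assuming FNAME(sync_spte) has (paraphrased, not the exact code from
the series; the body is elided):

  /*
   * Assumed shape, not a verbatim copy: FNAME(sync_spte) bails early on
   * a zero SPTE, so hoisting the same check into the walker loop only
   * saves the indirect call for clean entries.
   */
  static int FNAME(sync_spte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                              int i)
  {
          u64 *sptep = &sp->spt[i];

          /* A clean (non-present) entry has nothing to sync. */
          if (!*sptep)
                  return 0;

          /* ... re-read the guest PTE and resync/zap the shadow entry ... */
          return 1;
  }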

> +                        gfn_t gfn = kvm_mmu_page_get_gfn(sp, iterator.index);
> +                        int ret = mmu->sync_spte(vcpu, sp, iterator.index);
> +
> +                        if (ret < 0)
> +                                mmu_page_zap_pte(vcpu->kvm, sp, iterator.sptep, NULL);
> +                        if (ret)
> +                                kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);

Why open code kvm_flush_remote_tlbs_sptep()? Does it actually shave enough
cycles to be visible?
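
For reference, the helper on those branches is roughly the following (quoting
from memory, so treat it as a sketch rather than verbatim):

  static inline void kvm_flush_remote_tlbs_sptep(struct kvm *kvm, u64 *sptep)
  {
          struct kvm_mmu_page *sp = sptep_to_sp(sptep);
          gfn_t gfn = kvm_mmu_page_get_gfn(sp, spte_index(sptep));

          kvm_flush_remote_tlbs_gfn(kvm, gfn, sp->role.level);
  }

i.e. all the open-coded version saves is the sptep_to_sp() + spte_index()
lookup, and this path already has both sp and the gfn in hand.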

If open coding is really justified, can you rebase on one of the two branches
below and then change this to kvm_flush_remote_tlbs_gfn()? (Sketch at the
bottom of this mail.)

https://github.com/kvm-x86/linux/tree/next
https://github.com/kvm-x86/linux/tree/mmu
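
Concretely, the end state I'm suggesting would look something like this (a
sketch against those branches; PG_LEVEL_4K because the current open-coded
flush covers exactly one page):

  if (ret < 0)
          mmu_page_zap_pte(vcpu->kvm, sp, iterator.sptep, NULL);
  if (ret)
          kvm_flush_remote_tlbs_gfn(vcpu->kvm, gfn, PG_LEVEL_4K);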