Re: "KVM: x86/mmu: Overhaul TDP MMU zapping and flushing" breaks SVM on Hyper-V

From: Sean Christopherson
Date: Wed Feb 15 2023 - 17:16:15 EST


On Tue, Feb 14, 2023, Jeremi Piotrowski wrote:
> On 13/02/2023 20:56, Paolo Bonzini wrote:
> > On Mon, Feb 13, 2023 at 8:12 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
> >>> Depending on the performance results of adding the hypercall to
> >>> svm_flush_tlb_current, the fix could indeed be to just disable usage of
> >>> HV_X64_NESTED_ENLIGHTENED_TLB.
> >>
> >> Minus making nested SVM (L3) mutually exclusive with the enlightened TLB, I believe this will do the trick:
> >>
> >> + /* Flush Hyper-V's cached NPT translations for the current root via hypercall. */
> >> + hv_flush_tlb_current(vcpu);
> >> +
> >
> > Yes, it's either this or disabling the feature.
> >
> > Paolo
>
> Combining the two sub-threads, both of the suggestions:
>
> a) adding a hyperv_flush_guest_mapping(__pa(root->spt)) after kvm_tdp_mmu_get_vcpu_root_hpa()'s call to tdp_mmu_alloc_sp()
> b) adding a hyperv_flush_guest_mapping(vcpu->arch.mmu->root.hpa) to svm_flush_tlb_current()
>
> appear to work in my test case (L2 VM startup until it panics due to a missing rootfs).
>
> But in both these cases (and also when I completely disable HV_X64_NESTED_ENLIGHTENED_TLB)
> the runtime of an iteration of the test is noticeably longer compared to tdp_mmu=0.

Hmm, what is the test doing?
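
For reference, here's a rough, completely untested sketch of how the snippet
above could be wired up.  The hv_flush_tlb_current() name is taken from that
snippet; where the helper lives and how it's gated are assumptions on my part,
not the final form:

	/*
	 * e.g. in svm_onhyperv.h; needs asm/mshyperv.h for ms_hyperv and
	 * hyperv_flush_guest_mapping().
	 */
	static inline void hv_flush_tlb_current(struct kvm_vcpu *vcpu)
	{
		/*
		 * Only relevant when KVM itself runs on Hyper-V with the
		 * enlightened NPT TLB enabled (this gating check is an
		 * assumption).
		 */
		if (!(ms_hyperv.nested_features & HV_X64_NESTED_ENLIGHTENED_TLB))
			return;

		/*
		 * Ask Hyper-V to flush its cached translations for the
		 * current NPT root; flushing the ASID alone doesn't reach
		 * them.
		 */
		if (VALID_PAGE(vcpu->arch.mmu->root.hpa))
			hyperv_flush_guest_mapping(vcpu->arch.mmu->root.hpa);
	}

with the call added at the top of svm_flush_tlb_current(), before the usual
ASID flush, as in the diff above.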

> So in terms of performance the ranking is (fastest to slowest):
> 1. tdp_mmu=0 + enlightened TLB
> 2. tdp_mmu=0 + no enlightened TLB
> 3. tdp_mmu=1 (enlightened TLB makes minimal difference)
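
For completeness, a rough sketch of what (a) above could look like in
kvm_tdp_mmu_get_vcpu_root_hpa(); again completely untested, and whether
hyperv_flush_guest_mapping() can be called directly from the TDP MMU code
(vs. going through a kvm_x86_ops hook) is an assumption I haven't checked:

	root = tdp_mmu_alloc_sp(vcpu);
	/* ... existing root initialization ... */

	/*
	 * The idea: Hyper-V may still have translations cached for this
	 * page from a previous root that was zapped and freed, so flush it
	 * before the new root is put into use.  Gating this on the
	 * enlightened TLB actually being enabled is omitted here.
	 */
	hyperv_flush_guest_mapping(__pa(root->spt));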