Re: [PATCH RFC v1 0/9] KVM: SVM: Defer page pinning for SEV guests

From: Nikunj A. Dadhania
Date: Fri Apr 01 2022 - 12:14:44 EST

On 4/1/2022 8:24 PM, Sean Christopherson wrote:
> On Fri, Apr 01, 2022, Nikunj A. Dadhania wrote:
>>
>> On 4/1/2022 12:30 AM, Sean Christopherson wrote:
>>> On Thu, Mar 31, 2022, Peter Gonda wrote:
>>>> On Wed, Mar 30, 2022 at 10:48 PM Nikunj A. Dadhania <nikunj@xxxxxxx> wrote:
>>>>> So with the guest supporting KVM_FEATURE_HC_MAP_GPA_RANGE and the host (KVM) supporting
>>>>> the KVM_HC_MAP_GPA_RANGE hypercall, an SEV/SEV-ES guest should communicate private/shared
>>>>> pages to the hypervisor; this information can then be used to mark pages shared/private.
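
For anyone following along, here is a minimal sketch of the guest side of this,
loosely modeled on the existing page-encryption-status path; the helper name
report_gpa_range_status() is made up for illustration, and an SEV guest would in
practice use a VMMCALL-based hypercall variant:

/*
 * Sketch only: report a GPA range's encryption status to the hypervisor
 * via KVM_HC_MAP_GPA_RANGE.  Assumes the guest has already checked
 * kvm_para_has_feature(KVM_FEATURE_HC_MAP_GPA_RANGE); the attribute
 * macros come from include/uapi/linux/kvm_para.h.
 */
#include <linux/kvm_para.h>

static void report_gpa_range_status(unsigned long gpa, unsigned long npages,
				    bool encrypted)
{
	unsigned long attrs;

	attrs = encrypted ? KVM_MAP_GPA_RANGE_ENCRYPTED :
			    KVM_MAP_GPA_RANGE_DECRYPTED;
	attrs |= KVM_MAP_GPA_RANGE_PAGE_SZ_4K;

	/* arg0 = start GPA, arg1 = number of pages, arg2 = attributes */
	kvm_hypercall3(KVM_HC_MAP_GPA_RANGE, gpa, npages, attrs);
}
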
>>>>
>>>> One concern here may be that the VMM doesn't know which guests have
>>>> KVM_FEATURE_HC_MAP_GPA_RANGE support and which don't. Only once the
>>>> guest boots does the guest tell KVM that it supports
>>>> KVM_FEATURE_HC_MAP_GPA_RANGE. If the guest doesn't, we need to pin all
>>>> the memory before we run the guest to be safe.
>>>
>>> Yep, that's a big reason why I view purging the existing SEV memory management as
>>> a long term goal. The other being that userspace obviously needs to be updated to
>>> support UPM[*]. I suspect the only feasible way to enable this for SEV/SEV-ES
>>> would be to restrict it to new VM types that have a disclaimer regarding additional
>>> requirements.
>>
>> For SEV/SEV-ES, could we base demand pinning on my first RFC[*]?
>
> No, because as David pointed out, elevating the refcount is not the same as actually
> pinning the page. Things like NUMA balancing will still try to migrate the page,
> and even go so far as to zap the PTE, before bailing due to the outstanding reference.
> In other words, not actually pinning makes the mm subsystem less efficient. Would it
> functionally work? Yes. Is it acceptable KVM behavior? No.
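
To make the distinction concrete, here is a sketch only, using the long-term GUP
pin API that the existing sev_pin_memory() path is built on (helper names below
are hypothetical, error handling trimmed):

/*
 * Taking a reference only bumps the refcount; NUMA balancing will still
 * zap the PTE and only bail late when it sees the unexpected reference.
 * An actual pin uses FOLL_PIN (pin_user_pages*()), optionally with
 * FOLL_LONGTERM, which the mm treats as "do not migrate this page".
 */
#include <linux/mm.h>

/* Not a pin: the page can still be targeted for migration. */
static struct page *take_reference(struct page *page)
{
	get_page(page);
	return page;
}

/* A real long-term pin, as the current sev_pin_memory() does. */
static int pin_guest_range(unsigned long uaddr, int npages,
			   struct page **pages)
{
	return pin_user_pages_fast(uaddr, npages,
				   FOLL_WRITE | FOLL_LONGTERM, pages);
}

/* Release with unpin_user_pages(pages, npages) when done. */
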
>
>> Those patches do not touch the core KVM flow.
>
> I don't mind touching core KVM code. If this goes forward, I actually strongly
> prefer having the x86 MMU code handle the pinning as opposed to burying it in SEV
> via kvm_x86_ops. The reason I don't think it's worth pursuing this approach is
> that (a) we know the current SEV/SEV-ES memory management scheme is flawed and is
> a dead end, and (b) this is not as trivial as we (or at least I) originally
> thought/hoped it would be. In other words, it's not that I think demand pinning
> is a bad idea, nor do I think the issues are unsolvable, it's that I think the
> cost of getting a workable solution, e.g. code churn, ongoing maintenance, reviewer
> time, etc..., far outweighs the benefits.

Point noted, Sean. Will focus on the UPM effort.

Regards
Nikunj