Re: [RFC PATCH 0/8] KVM: x86/mmu: Introduce pinned SPTEs framework

From: Brijesh Singh
Date: Mon Aug 03 2020 - 11:52:12 EST


Thanks for the series, Sean. Some thoughts below.


On 7/31/20 4:23 PM, Sean Christopherson wrote:
> SEV currently needs to pin guest memory as it doesn't support migrating
> encrypted pages. Introduce a framework in KVM's MMU to support pinning
> pages on demand without requiring additional memory allocations, and with
> (somewhat hazy) line of sight toward supporting more advanced features for
> encrypted guest memory, e.g. host page migration.


Eric's attempt at lazy pinning suffered from the memory allocation
problem, and your series seems to address it. As you noted, the current
SEV enablement in KVM does not support migrating encrypted pages.
However, recent SEV firmware adds support for migrating encrypted pages
(i.e. host page migration); the support is available in SEV FW >= 0.17.

> The idea is to use a software available bit in the SPTE to track that a
> page has been pinned. The decision to pin a page and the actual pinning
> management is handled by vendor code via kvm_x86_ops hooks. There are
> intentionally two hooks (zap and unzap) introduced that are not needed for
> SEV. I included them to again show how the flag (probably renamed?) could
> be used for more than just pin/unpin.

If using the software-available bits to track pinning is acceptable,
then they could also be used for non-SEV guests (if needed). I will look
through your patches more carefully, but one immediate question: when do
we unpin the pages? In the SEV case, once a page is pinned it must not
be unpinned until the guest terminates. If a page is unpinned before the
VM terminates, there is a chance that host page migration will kick in
and move it. The KVM MMU code may drop SPTEs during zap/unzap, which
happens a lot during guest execution, and that could lead us down a path
where the vendor-specific code unpins pages while the guest is running
and causes data corruption for the SEV guest.
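To make the concern concrete, here is a minimal sketch of how a software-available SPTE bit could survive a zap so the pfn stays pinned; the bit position, mask names, and helpers are illustrative, not the ones used in the series:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t u64;

/* Illustrative bit choices; the actual series picks real SPTE bits. */
#define SPTE_PRESENT_MASK 0x7ULL        /* bits 2:0, per patch 2 */
#define SPTE_PINNED_MASK  (1ULL << 62)  /* hypothetical software-available bit */

static bool is_pinned_spte(u64 spte)
{
	return spte & SPTE_PINNED_MASK;
}

/*
 * Zap an SPTE: clear the present bits, but keep the PINNED flag set so
 * vendor code knows not to unpin the pfn.  Dropping the flag here would
 * let host page migration move the page while an SEV guest still has
 * its contents encrypted with a guest-specific tweak.
 */
static u64 zap_spte(u64 spte)
{
	if (is_pinned_spte(spte))
		return SPTE_PINNED_MASK;	/* remember the pin across the zap */
	return 0;
}
```

With this scheme, a later re-fault on the same gfn can see the leftover PINNED flag and reuse the already-pinned pfn instead of unpinning and re-pinning it.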

> Bugs in the core implementation are pretty much guaranteed. The basic
> concept has been tested, but in a fairly different incarnation. Most
> notably, tagging PRESENT SPTEs as PINNED has not been tested, although
> using the PINNED flag to track zapped (and known to be pinned) SPTEs has
> been tested. I cobbled this variation together fairly quickly to get the
> code out there for discussion.
>
> The last patch to pin SEV pages during sev_launch_update_data() is
> incomplete; it's there to show how we might leverage MMU-based pinning to
> support pinning pages before the guest is live.


I will add the SEV-specific bits and give this a try.

>
> Sean Christopherson (8):
> KVM: x86/mmu: Return old SPTE from mmu_spte_clear_track_bits()
> KVM: x86/mmu: Use bits 2:0 to check for present SPTEs
> KVM: x86/mmu: Refactor handling of not-present SPTEs in mmu_set_spte()
> KVM: x86/mmu: Add infrastructure for pinning PFNs on demand
> KVM: SVM: Use the KVM MMU SPTE pinning hooks to pin pages on demand
> KVM: x86/mmu: Move 'pfn' variable to caller of direct_page_fault()
> KVM: x86/mmu: Introduce kvm_mmu_map_tdp_page() for use by SEV
> KVM: SVM: Pin SEV pages in MMU during sev_launch_update_data()
>
> arch/x86/include/asm/kvm_host.h | 7 ++
> arch/x86/kvm/mmu.h | 3 +
> arch/x86/kvm/mmu/mmu.c | 186 +++++++++++++++++++++++++-------
> arch/x86/kvm/mmu/paging_tmpl.h | 3 +-
> arch/x86/kvm/svm/sev.c | 141 +++++++++++++++++++++++-
> arch/x86/kvm/svm/svm.c | 3 +
> arch/x86/kvm/svm/svm.h | 3 +
> 7 files changed, 302 insertions(+), 44 deletions(-)
>