Re: [PATCH v11 050/113] KVM: x86/tdp_mmu: Ignore unsupported mmu operation on private GFNs

From: Isaku Yamahata
Date: Mon Feb 27 2023 - 17:02:15 EST


On Fri, Feb 17, 2023 at 10:27:47AM +0200,
Zhi Wang <zhi.wang.linux@xxxxxxxxx> wrote:

> On Thu, 12 Jan 2023 08:31:58 -0800
> isaku.yamahata@xxxxxxxxx wrote:
>
> > From: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> >
> > Some KVM MMU operations (dirty page logging, page migration, aging page)
> > aren't supported for private GFNs (yet) with the first generation of TDX.
> > Silently return on unsupported TDX KVM MMU operations.
> >
> > Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> > ---
> > arch/x86/kvm/mmu/mmu.c | 3 +++
> > arch/x86/kvm/mmu/tdp_mmu.c | 50 ++++++++++++++++++++++++++++++++++----
> > arch/x86/kvm/x86.c | 3 +++
> > 3 files changed, 51 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 484e615196aa..ad0482a101a3 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -6635,6 +6635,9 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
> > for_each_rmap_spte(rmap_head, &iter, sptep) {
> > sp = sptep_to_sp(sptep);
> >
> > + /* Private page dirty logging is not supported yet. */
> > + KVM_BUG_ON(is_private_sptep(sptep), kvm);
> > +
> > /*
> > * We cannot do huge page mapping for indirect shadow pages,
> > * which are found on the last rmap (level = 1) when not using
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 5ce0328c71df..69e202bd1897 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -1478,7 +1478,8 @@ typedef bool (*tdp_handler_t)(struct kvm *kvm, struct tdp_iter *iter,
> >
> > static __always_inline bool kvm_tdp_mmu_handle_gfn(struct kvm *kvm,
> > struct kvm_gfn_range *range,
> > - tdp_handler_t handler)
> > + tdp_handler_t handler,
> > + bool only_shared)
>
> What's the purpose of having only_shared when all the callers set it to
> true?

I dropped the only_shared argument.
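With the argument gone, the helper simply walks every root for the slot's address space and lets the per-root GFN adjustment decide how the range is interpreted. A minimal userspace model of that control flow (the names only_shared and is_private follow the patch; struct root, handle_gfn_all_roots, and noop_handler are hypothetical stand-ins, not kernel code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for struct kvm_mmu_page: only privacy matters here. */
struct root {
	bool is_private;
};

/*
 * Model of kvm_tdp_mmu_handle_gfn() after dropping only_shared: the
 * handler runs against every root, private or shared, with no skip of
 * private roots.  Returns how many roots were visited.
 */
static int handle_gfn_all_roots(struct root *roots, size_t nroots,
				bool (*handler)(struct root *root))
{
	int visited = 0;

	for (size_t i = 0; i < nroots; i++) {
		handler(&roots[i]);
		visited++;
	}
	return visited;
}

/* Trivial handler used only to exercise the walk. */
static bool noop_handler(struct root *root)
{
	(void)root;
	return false;
}
```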


> > {
> > struct kvm_mmu_page *root;
> > struct tdp_iter iter;
> > @@ -1489,9 +1490,23 @@ static __always_inline bool kvm_tdp_mmu_handle_gfn(struct kvm *kvm,
> > * into this helper allow blocking; it'd be dead, wasteful code.
> > */
> > for_each_tdp_mmu_root(kvm, root, range->slot->as_id) {
> > + gfn_t start;
> > + gfn_t end;
> > +
> > + if (only_shared && is_private_sp(root))
> > + continue;
> > +
> > rcu_read_lock();
> >
> > - tdp_root_for_each_leaf_pte(iter, root, range->start, range->end)
> > + /*
> > + * For TDX shared mapping, set GFN shared bit to the range,
> > + * so the handler() doesn't need to set it, to avoid duplicated
> > + * code in multiple handler()s.
> > + */
> > + start = kvm_gfn_for_root(kvm, root, range->start);
> > + end = kvm_gfn_for_root(kvm, root, range->end);
> > +
>
> The coco implementations tend to treat the SHARED bit / C bit as a page_prot,
> an attribute, not a part of the GFN. From that perspective, the caller needs
> to be aware of whether it is operating on private memory or shared memory,
> and so does the handler. The page table walker should treat the SHARED bit
> as an attribute.
>
> I don't think it is a good idea to have two different understandings, which
> will cause conversion and confusion.

I think you're mixing up how the guest observes it (the guest page table)
with how the host/VMM manages it (the EPT).
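The two views can be sketched in isolation: in the host's EPT-walk view, which the patch's kvm_gfn_for_root models, the shared bit is folded into the GFN used to index a shared root, so handlers need no private/shared special-casing; in the attribute view, the bit is masked off the address and tracked as a flag. The bit position and all helper names below are illustrative, not TDX's actual layout (TDX derives the shared-bit position from the guest's physical address width):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t gfn_t;

/* Illustrative shared-bit position; not the real TDX GPAW-derived bit. */
#define GFN_SHARED_BIT ((gfn_t)1 << 51)

struct root {
	bool is_private;
};

/*
 * EPT-walk view: the shared bit becomes part of the GFN used to walk a
 * shared root, while private roots see the plain GFN (as in the patch's
 * kvm_gfn_for_root() call for range->start/range->end).
 */
static gfn_t gfn_for_root(const struct root *root, gfn_t gfn)
{
	return root->is_private ? gfn : gfn | GFN_SHARED_BIT;
}

/* Attribute view: the bit is stripped from the address, kept as a flag. */
static bool gfn_is_shared(gfn_t raw_gfn)
{
	return (raw_gfn & GFN_SHARED_BIT) != 0;
}

static gfn_t gfn_strip_shared(gfn_t raw_gfn)
{
	return raw_gfn & ~GFN_SHARED_BIT;
}
```

Both views carry the same information; the disagreement in the thread is only about which layer of the walker should be responsible for the conversion.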
--
Isaku Yamahata <isaku.yamahata@xxxxxxxxx>