RE: [PATCH v10 10/16] KVM: x86: Introduce KVM_GET_SHARED_PAGES_LIST ioctl

From: Kalra, Ashish
Date: Thu Feb 18 2021 - 14:25:45 EST


-----Original Message-----
From: Sean Christopherson <seanjc@xxxxxxxxxx>
Sent: Tuesday, February 16, 2021 7:03 PM
To: Kalra, Ashish <Ashish.Kalra@xxxxxxx>
Cc: pbonzini@xxxxxxxxxx; tglx@xxxxxxxxxxxxx; mingo@xxxxxxxxxx; hpa@xxxxxxxxx; rkrcmar@xxxxxxxxxx; joro@xxxxxxxxxx; bp@xxxxxxx; Lendacky, Thomas <Thomas.Lendacky@xxxxxxx>; x86@xxxxxxxxxx; kvm@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; srutherford@xxxxxxxxxx; venu.busireddy@xxxxxxxxxx; Singh, Brijesh <brijesh.singh@xxxxxxx>
Subject: Re: [PATCH v10 10/16] KVM: x86: Introduce KVM_GET_SHARED_PAGES_LIST ioctl

On Thu, Feb 04, 2021, Ashish Kalra wrote:
> From: Brijesh Singh <brijesh.singh@xxxxxxx>
>
> The ioctl is used to retrieve a guest's shared pages list.

> What's the performance hit to boot time if KVM_HC_PAGE_ENC_STATUS is passed
> through to userspace? That way, userspace could manage the set of pages in
> whatever data structure they want, and these get/set ioctls go away.
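
Just to make sure we are talking about the same flow: with the pass-through
approach the VMM would see each KVM_HC_PAGE_ENC_STATUS as a KVM_EXIT_HYPERCALL
and keep its own view of the shared regions, roughly along the lines of the
sketch below. This is purely illustrative, not code from this series; it
assumes the (gpa, npages, enc) argument order used by the hypercall here, and
shared_ranges_update() is a made-up VMM helper.

#include <stdbool.h>
#include <stdint.h>
#include <linux/kvm.h>
#include <linux/kvm_para.h>

/* VMM-side bookkeeping of shared (unencrypted) guest ranges. */
extern void shared_ranges_update(uint64_t gpa, uint64_t npages, bool enc);

/* Hypothetical handler, assuming KVM forwarded the hypercall to userspace. */
static void handle_hypercall_exit(struct kvm_run *run)
{
	if (run->exit_reason != KVM_EXIT_HYPERCALL ||
	    run->hypercall.nr != KVM_HC_PAGE_ENC_STATUS)
		return;

	/* args[0] = gpa, args[1] = number of pages, args[2] = encrypted? */
	shared_ranges_update(run->hypercall.args[0],
			     run->hypercall.args[1],
			     run->hypercall.args[2]);
	run->hypercall.ret = 0;
}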

I would be more concerned about the performance hit during guest DMA I/O if the
page encryption status hypercalls are passed through to userspace. Guest DMA I/O
constantly flips pages between encrypted and shared: buffers are marked shared
when they are set up for DMA and flipped back to encrypted at DMA completion, so
every transition would also have to exit to userspace, and guest I/O will surely
take a performance hit with this pass-through approach.
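
To put the I/O pattern in code: the guest has to flip DMA buffers to shared
before the hypervisor/device can touch them and flip them back once the
transfer completes, and with this series each flip already costs a hypercall,
i.e. a VM exit. The sketch below is only illustrative; notify_enc_status() and
the dma_buffer_*() helpers are made-up names standing in for the guest's page
encryption status change path.

#include <linux/kvm_para.h>
#include <linux/types.h>

/* One hypercall, i.e. one VM exit, per encryption-status flip. */
static void notify_enc_status(unsigned long paddr, int npages, bool enc)
{
	kvm_hypercall3(KVM_HC_PAGE_ENC_STATUS, paddr, npages, enc);
}

/* Buffer set up for DMA: mark the pages shared (unencrypted). */
static void dma_buffer_share(unsigned long buf_pa, int nr_pages)
{
	notify_enc_status(buf_pa, nr_pages, false);
}

/* DMA completion / buffer teardown: flip the pages back to encrypted. */
static void dma_buffer_reclaim(unsigned long buf_pa, int nr_pages)
{
	notify_enc_status(buf_pa, nr_pages, true);
}

Every dma_buffer_share()/dma_buffer_reclaim() pair is already two exits;
bouncing each of them out to the VMM would add a round trip to userspace on
top of that.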

Thanks,
Ashish