Re: [RFC 00/16] KVM protected memory extension

From: Liran Alon
Date: Tue May 26 2020 - 06:17:57 EST



On 26/05/2020 9:17, Mike Rapoport wrote:
> On Mon, May 25, 2020 at 04:47:18PM +0300, Liran Alon wrote:
>> On 22/05/2020 15:51, Kirill A. Shutemov wrote:

>> Furthermore, I would like to point out that just unmapping guest data from
>> the kernel direct-map is not sufficient to prevent all guest-to-guest
>> info-leaks via a kernel memory info-leak vulnerability. This is because the
>> host kernel VA space has other regions which contain guest-sensitive data.
>> For example, the KVM per-vCPU struct (which holds vCPU state) is allocated
>> from the slab and is therefore still leakable.
> Objects allocated from slab use the direct map; vmalloc() is another story.
It doesn't matter. This patch series, like XPFO, only removes guest memory pages from the direct-map,
not things such as the KVM per-vCPU structs. That's why Julian & Marius (AWS) created the "Process local kernel VA region" patch series,
which designates a single PGD entry (mapping a kernelspace region) to have different PFNs in different tasks.
For more information, see the KVM Forum talk slides I referenced in my previous reply and the related AWS patch series:
https://patchwork.kernel.org/cover/10990403/
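
To make the distinction concrete, here is a minimal sketch of a hypothetical
test-module helper (direct_map_vs_vmalloc_demo() is made up for illustration,
not part of the series): a kmalloc()/slab object lives at a direct-map
address, while a vmalloc() object does not, so unmapping guest pages from the
direct map does nothing for slab-backed state.

/*
 * Minimal sketch: slab/kmalloc objects come straight out of the kernel
 * linear ("direct") mapping, while vmalloc() memory lives in a separate
 * VA range with its own page tables.
 */
#include <linux/mm.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void direct_map_vs_vmalloc_demo(void)
{
        void *slab_obj = kmalloc(64, GFP_KERNEL);   /* direct-map address */
        void *vmap_obj = vmalloc(PAGE_SIZE);        /* vmalloc-area address */

        if (slab_obj && vmap_obj) {
                /* expected: 1 -- kmalloc memory is reachable via the direct map */
                pr_info("slab obj in direct map: %d\n", virt_addr_valid(slab_obj));
                /* expected: 1 -- vmalloc memory is not part of the direct map */
                pr_info("vmalloc obj in vmalloc range: %d\n", is_vmalloc_addr(vmap_obj));
        }

        kfree(slab_obj);
        vfree(vmap_obj);
}

KVM allocates the per-vCPU struct from a kmem_cache, so it falls in the first
category.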

>>> - Touching the direct mapping leads to fragmentation. We need to be able to
>>>   recover from it. I have a buggy patch that aims at recovering 2M/1G pages.
>>>   It has to be fixed and tested properly.
>> As I've mentioned above, not mapping all guest memory from 1GB hugetlbfs
>> will lead to holes in the kernel direct-map, which force it to no longer be
>> mapped as a series of 1GB huge-pages.
>> This has a non-trivial performance cost. Thus, I am not sure addressing this
>> use-case is valuable.
> Out of curiosity, do we actually have some numbers for the "non-trivial
> performance cost"? For instance, for the KVM use-case?

You will have to dig into the XPFO mailing-list discussions to find out...
I just remember that this was one of the main concerns regarding XPFO.
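
For what it's worth, the fragmentation itself (though not its cost) can be
observed on x86 via the DirectMap4k/DirectMap2M/DirectMap1G counters in
/proc/meminfo. A minimal, illustrative userspace sketch that dumps them, to
be run before and after the workload:

/*
 * Illustrative sketch (not part of the series): dump the x86 DirectMap*
 * counters from /proc/meminfo. Running it before and after unmapping guest
 * pages shows 1G/2M mappings being split into 4k ones, though not the
 * resulting TLB cost itself.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];

        if (!f) {
                perror("/proc/meminfo");
                return 1;
        }
        while (fgets(line, sizeof(line), f)) {
                if (strncmp(line, "DirectMap", strlen("DirectMap")) == 0)
                        fputs(line, stdout);
        }
        fclose(f);
        return 0;
}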

-Liran