Re: [PATCH v1.1 1/2] x86/sev: Use per-CPU PSC structure in prep for unaccepted memory support

From: Tom Lendacky
Date: Wed Aug 03 2022 - 18:17:17 EST


On 8/3/22 16:48, Dave Hansen wrote:
> On 8/3/22 14:34, Tom Lendacky wrote:
>>> Also, private<->shared page conversions are *NOT* common from what I can
>>> tell.  There are a few pages converted at boot, but most of the
>>> guest<->host communications are through the swiotlb pages, which are
>>> static.
>>
>> Generally, that's true. But, e.g., a dma_alloc_coherent() actually
>> doesn't go through SWIOTLB, but instead allocates the pages and makes
>> them shared, which results in a page state change. The NVMe driver was
>> calling that API a lot. In this case, though, the NVMe driver was
>> running in IRQ context and set_memory_decrypted() could sleep, so an
>> unencrypted DMA memory pool was created to work around the sleeping
>> issue and reduce the page state changes. It's just things like that
>> that make me wary.
>
> Interesting. Is that a real passthrough NVMe device or the hypervisor
> presenting a virtual one that just happens to use the NVMe driver?

Hmmm... not sure, possibly the latter. I just knew that, whatever it was, the NVMe driver was being used.

Thanks,
Tom


> I'm pretty sure the TDX folks have been banking on having very few page
> state changes. But, part of that at least is their expectation of
> relying heavily on virtio.
>
> I wonder if their expectations are accurate, or if, once TDX gets out
> into the real world, their hopes will be dashed.