Re: [PATCH v2] x86: Skip WBINVD instruction for VM guest

From: Thomas Gleixner
Date: Wed Nov 24 2021 - 19:42:31 EST


Kuppuswamy,

On Thu, Nov 18 2021 at 20:03, Kuppuswamy Sathyanarayanan wrote:
> ACPI mandates that CPU caches be flushed before entering any sleep
> state. This ensures that the CPU and its caches can be powered down
> without losing data.
>
> ACPI-based VMs have maintained this sleep-state-entry behavior.
> However, cache flushing for VM sleep state entry is useless. Unlike on
> bare metal, guest sleep states are not correlated with potential data
> loss of any kind; the host is responsible for data preservation. In
> fact, some KVM configurations simply skip the cache flushing
> instruction (see need_emulate_wbinvd()).

KVM starts out with kvm->arch.noncoherent_dma_count = 0, which makes
need_emulate_wbinvd() skip WBINVD emulation. So far so good.
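
For reference, that check boils down to this (arch/x86/kvm/x86.c as of
current mainline; details may vary between releases):

  static bool need_emulate_wbinvd(struct kvm_vcpu *vcpu)
  {
          return kvm_arch_has_noncoherent_dma(vcpu->kvm);
  }

  bool kvm_arch_has_noncoherent_dma(struct kvm *kvm)
  {
          return atomic_read(&kvm->arch.noncoherent_dma_count);
  }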

VFIO has code to invoke kvm_arch_register_noncoherent_dma(), which
increments the count and will subsequently cause WBINVD emulation to
be enabled. What now?
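
The increment itself is trivial and very much reachable: VFIO gets
there via kvm_vfio_update_coherency() in virt/kvm/vfio.c whenever a
non-coherent IOMMU domain is attached (again from arch/x86/kvm/x86.c):

  void kvm_arch_register_noncoherent_dma(struct kvm *kvm)
  {
          atomic_inc(&kvm->arch.noncoherent_dma_count);
  }

IOW, the count being 0 is merely the initial state, not an invariant.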

> Further, on TDX systems, the WBINVD instruction causes an
> unconditional #VE exception. If this cache flushing remained, it would
> need extra code in the form of a #VE handler.
>
> All use of ACPI_FLUSH_CPU_CACHE() appears to be in sleep-state-related
> code.

C3 is considered a sleep state nowadays? Also ACPI_FLUSH_CPU_CACHE() is
used in other places which have nothing to do with sleep states.

git grep is not rocket science to use.
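
For reference, on x86 the macro is a bare WBINVD
(arch/x86/include/asm/acenv.h):

  /* Asm macros */
  #define ACPI_FLUSH_CPU_CACHE()	wbinvd()

and a grep turns up users outside of the S-state entry paths, e.g. the
C3/play_dead handling in drivers/acpi/processor_idle.c.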

> This means that the ACPI use of WBINVD is at *best* superfluous.

Really? You probably meant to say:

This means that the ACPI usage of WBINVD from within a guest is at
best superfluous.

No?

But aside from that, this does not give any reasonable answer as to
why disabling WBINVD for guests unconditionally in
ACPI_FLUSH_CPU_CACHE() is correct under all circumstances, and why the
argumentation vs. need_emulate_wbinvd() actually holds.
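
If I decode the patch correctly, its net effect is something along
these lines (a paraphrased sketch of the proposed behavior, not the
literal diff):

  /* Sketch: skip the cache flush whenever running as a guest */
  #define ACPI_FLUSH_CPU_CACHE()                                  \
  do {                                                            \
          if (!cpu_feature_enabled(X86_FEATURE_HYPERVISOR))       \
                  wbinvd();                                       \
  } while (0)

i.e. the check is a blanket 'running as a guest' test and not in any
way specific to TDX.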

I'm neither going to do that analysis nor am I going to accept a patch
which comes with 'appears'-based arguments and some handwavy references
to disabled WBINVD emulation code, which can obviously be enabled for a
reason.

The even more interesting question for me is how a TDX guest deals
with all the other potential invocations of WBINVD all over the place.
Are they all going to get the same treatment, or are those magically
never going to be executed in TDX guests?

I really have to ask why SEV can deal with WBINVD and other things just
fine by implementing trivial #VC handler functions, but TDX has to
prematurely optimize the kernel tree based on half-baked arguments?
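
For comparison, the SEV-ES #VC handling of WBINVD is a one-liner
(arch/x86/kernel/sev.c, formerly sev-es.c):

  static enum es_result vc_handle_wbinvd(struct ghcb *ghcb,
                                         struct es_em_ctxt *ctxt)
  {
          return sev_es_ghcb_hv_call(ghcb, ctxt, SVM_EXIT_WBINVD, 0, 0);
  }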

Having a few trivial #VE handlers is not the end of the world. You can
revisit that once basic support for TDX is merged in order to gain
performance or whatever.
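
A WBINVD #VE handler would presumably be of a similar size. Purely as
a hypothetical sketch (the handler hookup and the tdx_hypercall()
helper name are made up here, not actual in-tree TDX guest code):

  /* Hypothetical: forward WBINVD to the VMM via a TDVMCALL */
  static bool ve_handle_wbinvd(void)
  {
          /* EXIT_REASON_WBINVD (54) as defined in asm/vmx.h */
          return tdx_hypercall(EXIT_REASON_WBINVD, 0, 0, 0, 0) == 0;
  }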

Either that, or you provide patches with arguments which are based on
proper analysis and not on 'appears to' observations.

Thanks,

tglx