Re: [GIT PULL] x86/cpu for v5.17

From: Dave Hansen
Date: Mon Jan 10 2022 - 15:10:17 EST


On 1/10/22 10:35, Borislav Petkov wrote:

>> Not a big deal, I just thought I'd mention it since I reacted to it.
>> And we don't seem to have those vendor checks in any of the other code
>> that uses MSR_CSTAR (just grepping for that and seeing it mentioned in
>> kvm code etc.).
>
> Right, the only point for doing the vendor check I see here is, well,
> because it is Intel who doesn't have CSTAR, so let's check for Intel.
> But yeah, we do avoid the vendor checks if it can be helped.
>
> We can do a synthetic X86_FEATURE flag, but that would be a waste. So
> doing the _safe thing and keeping the comment sounds optimal to me.
>
> I can whip up a patch on top if people agree.

There are four basic options here for TDX:

1. Paper over the #VE in the #VE handler itself
2. Do a check for TDX at the wrmsr(MSR_CSTAR) site
3. Do a check for X86_VENDOR_INTEL at the wrmsr(MSR_CSTAR) site
4. Use wrmsr*_safe() and rely on #VE -> fixup_exception()
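
For concreteness, #3 vs. #4 at the write site would look roughly like
the below. This is only a sketch against today's syscall_init() (using
the ignore_sysret flavor of the write), not the actual patch:

	/* somewhere in syscall_init(): */

	/* Option 3: vendor check at the write site. */
	if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
		wrmsrl(MSR_CSTAR, (unsigned long)ignore_sysret);

	/* Option 4: write unconditionally and let the #GP (or #VE on
	 * TDX) be swallowed by the extable fixup; the error can be
	 * ignored since the value is pointless on Intel anyway. */
	wrmsrl_safe(MSR_CSTAR, (unsigned long)ignore_sysret);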

TDX originally did #1, passed over #2, and settled on #3 because of a
comment [1]:

	"It's an obvious optimization (avoiding the WRMSR with a
	 conditional) without TDX because the write is pointless
	 independent of TDX."

I think doing wrmsr*_safe() is OK. But, on TDX systems, it will end up taking a weird route:

WRMSR -> #VE -> hypercall -> ve_raise_fault() -> fixup_exception()

instead of the "normal" _safe route which goes:

WRMSR -> #GP -> ... -> fixup_exception()
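
The reason the weird route can still end up in the same place is that
ve_raise_fault() in the posted TDX series falls back to the same
fixup_exception() that the #GP path uses for the _safe MSR accessors.
Very roughly (going from the posted patches, so details may shift):

	/* Rough shape of the kernel-mode #VE fallback; only meant to
	 * show where fixup_exception() comes in. */
	static void ve_raise_fault(struct pt_regs *regs, long error_code)
	{
		if (user_mode(regs)) {
			/* ... signal the task instead ... */
			return;
		}

		/* Same extable fixup the #GP path would use, so
		 * wrmsrl_safe() gets an error return instead of an oops. */
		if (fixup_exception(regs, X86_TRAP_VE, error_code, 0))
			return;

		die_addr("VE fault", regs, error_code, 0);
	}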

So, we should probably make sure wrmsr*_safe() is fine on TDX before we subject ourselves to any additional churn. Kirill, can you test that out?
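
Something like the trivial module below should do for that (module name
and the throwaway 0 value are made up here; only run it in a TDX guest,
since it clobbers CSTAR):

	/* cstar_safe_test.c: does wrmsrl_safe(MSR_CSTAR) survive the
	 * #VE -> ve_raise_fault() -> fixup_exception() route, i.e. does
	 * the guest stay alive and get an error code back? */
	#include <linux/module.h>
	#include <linux/printk.h>
	#include <asm/msr.h>

	static int __init cstar_safe_test_init(void)
	{
		/* The value written doesn't matter, only whether the
		 * failing write gets fixed up and reported as an error. */
		int err = wrmsrl_safe(MSR_CSTAR, 0);

		pr_info("wrmsrl_safe(MSR_CSTAR) returned %d\n", err);
		return 0;
	}

	static void __exit cstar_safe_test_exit(void)
	{
	}

	module_init(cstar_safe_test_init);
	module_exit(cstar_safe_test_exit);
	MODULE_LICENSE("GPL");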

[1] https://lore.kernel.org/all/87sfvljf5q.ffs@tglx/