Re: [PATCH v10 21/27] KVM: x86: Save and reload SSP to/from SMRAM

From: Sean Christopherson
Date: Wed May 01 2024 - 18:50:27 EST


On Sun, Feb 18, 2024, Yang Weijiang wrote:
> Save CET SSP to SMRAM on SMI and reload it on RSM. KVM emulates HW arch
> behavior when the guest enters/leaves SMM mode, i.e., it saves registers
> to SMRAM on entry to SMM and reloads them on exit from SMM. Per the SDM,
> SSP is one such register on 64-bit architectures, so add support for SSP.
> Note, SSP is not defined in 32-bit SMRAM, so fail 32-bit CET guest
> launch.
>
> Suggested-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> Signed-off-by: Yang Weijiang <weijiang.yang@xxxxxxxxx>
> Reviewed-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>
> ---
> arch/x86/kvm/cpuid.c | 11 +++++++++++
> arch/x86/kvm/smm.c | 8 ++++++++
> arch/x86/kvm/smm.h | 2 +-
> 3 files changed, 20 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 2bb1931103ad..c0e13040e35b 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -149,6 +149,17 @@ static int kvm_check_cpuid(struct kvm_vcpu *vcpu,
> if (vaddr_bits != 48 && vaddr_bits != 57 && vaddr_bits != 0)
> return -EINVAL;
> }
> + /*
> + * Prevent 32-bit guest launch if shadow stack is exposed, as SSP
> + * state is not defined for 32-bit SMRAM.

Why? The lack of save/restore for SSP on 32-bit guests is a gap in Intel's
architecture; I don't see why KVM should diverge from hardware. I.e. just do
nothing for SSP on SMI/RSM, because that's exactly what the architecture says
will happen.

> + */
> + best = cpuid_entry2_find(entries, nent, 0x80000001,
> + KVM_CPUID_INDEX_NOT_SIGNIFICANT);
> + if (best && !(best->edx & F(LM))) {
> + best = cpuid_entry2_find(entries, nent, 0x7, 0);
> + if (best && (best->ecx & F(SHSTK)))
> + return -EINVAL;
> + }
>
> /*
> * Exposing dynamic xfeatures to the guest requires additional
> diff --git a/arch/x86/kvm/smm.c b/arch/x86/kvm/smm.c
> index 45c855389ea7..7aac9c54c353 100644
> --- a/arch/x86/kvm/smm.c
> +++ b/arch/x86/kvm/smm.c
> @@ -275,6 +275,10 @@ static void enter_smm_save_state_64(struct kvm_vcpu *vcpu,
> enter_smm_save_seg_64(vcpu, &smram->gs, VCPU_SREG_GS);
>
> smram->int_shadow = static_call(kvm_x86_get_interrupt_shadow)(vcpu);
> +
> + if (guest_can_use(vcpu, X86_FEATURE_SHSTK))
> + KVM_BUG_ON(kvm_msr_read(vcpu, MSR_KVM_SSP, &smram->ssp),
> + vcpu->kvm);
> }
> #endif
>
> @@ -564,6 +568,10 @@ static int rsm_load_state_64(struct x86_emulate_ctxt *ctxt,
> static_call(kvm_x86_set_interrupt_shadow)(vcpu, 0);
> ctxt->interruptibility = (u8)smstate->int_shadow;
>
> + if (guest_can_use(vcpu, X86_FEATURE_SHSTK))
> + KVM_BUG_ON(kvm_msr_write(vcpu, MSR_KVM_SSP, smstate->ssp),
> + vcpu->kvm);


This should synthesize a triple fault, not WARN and kill the VM, as the value
to be restored is guest-controlled (the guest can scribble SMRAM from within
the SMI handler).

At that point, I would just synthesize a triple fault for the read path too.
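
E.g. something like this in rsm_load_state_64() (an untested sketch, reusing
the guest_can_use()/kvm_msr_write()/MSR_KVM_SSP names from this series):

	/* Untested: fail RSM emulation instead of WARNing and killing the VM. */
	if (guest_can_use(vcpu, X86_FEATURE_SHSTK) &&
	    kvm_msr_write(vcpu, MSR_KVM_SSP, smstate->ssp))
		return X86EMUL_UNHANDLEABLE;

IIUC, em_rsm() will then see the failure from ->leave_smm() and call
->triple_fault(), so the guest gets a shutdown instead of the VM being killed.
The save path would need a bit more plumbing, since enter_smm_save_state_64()
currently returns void; presumably it (or enter_smm()) would need to be able
to fail so that KVM can pend KVM_REQ_TRIPLE_FAULT there as well.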