Re: [PATCH 1/4] KVM: VMX: Always clear vmx->fail on emulation_required

From: Maxim Levitsky
Date: Tue Dec 14 2021 - 04:12:34 EST


On Tue, 2021-12-07 at 19:30 +0000, Sean Christopherson wrote:
> Revert a relatively recent change that set vmx->fail if the vCPU is in L2
> and emulation_required is true, as that behavior is completely bogus.
> Setting vmx->fail and synthesizing a VM-Exit is contradictory and wrong:
>
> (a) it's impossible to have both a VM-Fail and VM-Exit
> (b) vmcs.EXIT_REASON is not modified on VM-Fail
> (c) emulation_required refers to guest state and guest state checks are
> always VM-Exits, not VM-Fails.
>
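
To make (a) and (b) concrete for anyone who hasn't paged the SDM back
in: VMLAUNCH/VMRESUME report a VM-Fail synchronously through RFLAGS
(CF for VMfailInvalid, ZF for VMfailValid), and vmcs.EXIT_REASON is
only written by an actual VM-Exit after a successful entry. A minimal
illustrative sketch of the distinction (the enum and helper below are
hypothetical, not KVM code):

#define X86_EFLAGS_CF	0x0001	/* VMfailInvalid: no current VMCS */
#define X86_EFLAGS_ZF	0x0040	/* VMfailValid: error number in the
				 * VM-instruction error field */

/* Hypothetical helper: classify what VMLAUNCH/VMRESUME left behind. */
enum vmenter_outcome { VMFAIL_INVALID, VMFAIL_VALID, VMEXIT_OCCURRED };

static enum vmenter_outcome classify_vmenter(unsigned long rflags)
{
	if (rflags & X86_EFLAGS_CF)
		return VMFAIL_INVALID;	/* EXIT_REASON untouched */
	if (rflags & X86_EFLAGS_ZF)
		return VMFAIL_VALID;	/* EXIT_REASON untouched */
	return VMEXIT_OCCURRED;		/* EXIT_REASON is valid */
}
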
> For KVM specifically, emulation_required is handled before nested exits
> in __vmx_handle_exit(), thus setting vmx->fail has no immediate effect,
> i.e. KVM calls into handle_invalid_guest_state() and vmx->fail is ignored.
> Setting vmx->fail can ultimately result in a WARN in nested_vmx_vmexit()
> firing when tearing down the VM as KVM never expects vmx->fail to be set
> when L2 is active; KVM always reflects those errors into L1.
>
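
For context, the ordering in question, as a heavily abridged paraphrase
of __vmx_handle_exit() from this era (the real function does a lot
more; treat this as a sketch, not the verbatim source):

/* Abridged paraphrase of __vmx_handle_exit(), not verbatim. */
static int __vmx_handle_exit(struct kvm_vcpu *vcpu, fastpath_t exit_fastpath)
{
	struct vcpu_vmx *vmx = to_vmx(vcpu);

	/*
	 * Invalid guest state is emulated before anything else looks
	 * at the exit, so a vmx->fail set by vmx_vcpu_run() is simply
	 * ignored here...
	 */
	if (vmx->emulation_required)
		return handle_invalid_guest_state(vcpu);

	/*
	 * ...and exits are reflected into L1 only after that check,
	 * so the stale vmx->fail lingers until something like VM
	 * teardown finally trips over it.
	 */
	if (is_guest_mode(vcpu) && nested_vmx_reflect_vmexit(vcpu))
		return 1;

	/* Normal exit-reason dispatch follows (elided). */
	return 1;
}
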
> ------------[ cut here ]------------
> WARNING: CPU: 0 PID: 21158 at arch/x86/kvm/vmx/nested.c:4548 nested_vmx_vmexit+0x16bd/0x17e0 arch/x86/kvm/vmx/nested.c:4547
> Modules linked in:
> CPU: 0 PID: 21158 Comm: syz-executor.1 Not tainted 5.16.0-rc3-syzkaller #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
> RIP: 0010:nested_vmx_vmexit+0x16bd/0x17e0 arch/x86/kvm/vmx/nested.c:4547
> Code: <0f> 0b e9 2e f8 ff ff e8 57 b3 5d 00 0f 0b e9 00 f1 ff ff 89 e9 80
> Call Trace:
> vmx_leave_nested arch/x86/kvm/vmx/nested.c:6220 [inline]
> nested_vmx_free_vcpu+0x83/0xc0 arch/x86/kvm/vmx/nested.c:330
> vmx_free_vcpu+0x11f/0x2a0 arch/x86/kvm/vmx/vmx.c:6799
> kvm_arch_vcpu_destroy+0x6b/0x240 arch/x86/kvm/x86.c:10989
> kvm_vcpu_destroy+0x29/0x90 arch/x86/kvm/../../../virt/kvm/kvm_main.c:441
> kvm_free_vcpus arch/x86/kvm/x86.c:11426 [inline]
> kvm_arch_destroy_vm+0x3ef/0x6b0 arch/x86/kvm/x86.c:11545
> kvm_destroy_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:1189 [inline]
> kvm_put_kvm+0x751/0xe40 arch/x86/kvm/../../../virt/kvm/kvm_main.c:1220
> kvm_vcpu_release+0x53/0x60 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3489
> __fput+0x3fc/0x870 fs/file_table.c:280
> task_work_run+0x146/0x1c0 kernel/task_work.c:164
> exit_task_work include/linux/task_work.h:32 [inline]
> do_exit+0x705/0x24f0 kernel/exit.c:832
> do_group_exit+0x168/0x2d0 kernel/exit.c:929
> get_signal+0x1740/0x2120 kernel/signal.c:2852
> arch_do_signal_or_restart+0x9c/0x730 arch/x86/kernel/signal.c:868
> handle_signal_work kernel/entry/common.c:148 [inline]
> exit_to_user_mode_loop kernel/entry/common.c:172 [inline]
> exit_to_user_mode_prepare+0x191/0x220 kernel/entry/common.c:207
> __syscall_exit_to_user_mode_work kernel/entry/common.c:289 [inline]
> syscall_exit_to_user_mode+0x2e/0x70 kernel/entry/common.c:300
> do_syscall_64+0x53/0xd0 arch/x86/entry/common.c:86
> entry_SYSCALL_64_after_hwframe+0x44/0xae
>
> Fixes: c8607e4a086f ("KVM: x86: nVMX: don't fail nested VM entry on invalid guest state if !from_vmentry")
> Reported-by: syzbot+f1d2136db9c80d4733e8@xxxxxxxxxxxxxxxxxxxxxxxxx
> Cc: Maxim Levitsky <mlevitsk@xxxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> ---
> arch/x86/kvm/vmx/vmx.c | 4 +---
> 1 file changed, 1 insertion(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index efcc5a58abbc..9e415e5a91ab 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6631,9 +6631,7 @@ static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
> * consistency check VM-Exit due to invalid guest state and bail.
> */
> if (unlikely(vmx->emulation_required)) {
> -
> - /* We don't emulate invalid state of a nested guest */
> - vmx->fail = is_guest_mode(vcpu);
> + vmx->fail = 0;
>
> vmx->exit_reason.full = EXIT_REASON_INVALID_STATE;
> vmx->exit_reason.failed_vmentry = 1;
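
The teardown path from the splat makes the connection: vmx_leave_nested()
forces a synthetic exit from L2 to L1, and that is where KVM chokes on
the stale vmx->fail. An abridged paraphrase of the 5.16-era code, not
the verbatim source:

/* Abridged paraphrase of vmx_leave_nested(), not verbatim. */
static void vmx_leave_nested(struct kvm_vcpu *vcpu)
{
	if (is_guest_mode(vcpu)) {
		to_vmx(vcpu)->nested.nested_run_pending = 0;
		/*
		 * Emulates an exit from L2 to L1; KVM never expects
		 * vmx->fail to be set with L2 active, hence the WARN.
		 */
		nested_vmx_vmexit(vcpu, -1, 0, 0);
	}
	free_nested(vcpu);
}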

Now, after swapping in all of the gory details, this does make sense.

Reviewed-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>

Best regards,
Maxim Levitsky