Re: [PATCH v6 1/7] KVM: x86: Deflect unknown MSR accesses to user space

From: Aaron Lewis
Date: Thu Sep 03 2020 - 15:27:35 EST


> +::
> +
> + /* KVM_EXIT_X86_RDMSR / KVM_EXIT_X86_WRMSR */
> + struct {
> + __u8 error; /* user -> kernel */
> + __u8 pad[3];
> + __u32 reason; /* kernel -> user */
> + __u32 index; /* kernel -> user */
> + __u64 data; /* kernel <-> user */
> + } msr;
> +
> +Used on x86 systems. When the VM capability KVM_CAP_X86_USER_SPACE_MSR is
> +enabled, MSR accesses to registers that would invoke a #GP by KVM kernel code
> +will instead trigger a KVM_EXIT_X86_RDMSR exit for reads and KVM_EXIT_X86_WRMSR
> +exit for writes.
> +
> +The "reason" field specifies why the MSR trap occurred. User space will only
> +receive MSR exit traps when a particular reason was requested through
> +ENABLE_CAP. Currently valid exit reasons are:
> +
> + KVM_MSR_EXIT_REASON_INVAL - access to invalid MSRs or reserved bits


Can we also have ENOENT?
KVM_MSR_EXIT_REASON_ENOENT - Unknown MSR
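
Either way, requesting the reasons from user space would then look roughly
like this (sketch; assuming the reason mask is passed in args[0] of
KVM_ENABLE_CAP, with ENOENT as proposed above):

	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_X86_USER_SPACE_MSR,
		/* assumption: the reason mask goes in args[0] */
		.args[0] = KVM_MSR_EXIT_REASON_INVAL |
			   KVM_MSR_EXIT_REASON_ENOENT,
	};

	ioctl(vm_fd, KVM_ENABLE_CAP, &cap);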

>
> +
> +For KVM_EXIT_X86_RDMSR, the "index" field tells user space which MSR the guest
> +wants to read. To respond to this request with a successful read, user space
> +writes the respective data into the "data" field and must continue guest
> +execution to ensure the read data is transferred into guest register state.
>
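
For reference, handling these exits from the VMM run loop would look roughly
like this (sketch; 'run' is the mmap'ed struct kvm_run of the vcpu, and
emulate_rdmsr()/emulate_wrmsr() are hypothetical VMM helpers that return
non-zero on failure):

	for (;;) {
		ioctl(vcpu_fd, KVM_RUN, 0);

		switch (run->exit_reason) {
		case KVM_EXIT_X86_RDMSR:
			/* fill in data, or set error to have KVM inject #GP */
			run->msr.error = emulate_rdmsr(run->msr.index,
						       &run->msr.data);
			break;
		case KVM_EXIT_X86_WRMSR:
			run->msr.error = emulate_wrmsr(run->msr.index,
						       run->msr.data);
			break;
		/* ... other exit reasons ... */
		}
	}
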
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 88c593f83b28..4d285bf054fb 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1549,12 +1549,88 @@ int kvm_set_msr(struct kvm_vcpu *vcpu, u32 index, u64 data)
> }
> EXPORT_SYMBOL_GPL(kvm_set_msr);
>
> +static int complete_emulated_msr(struct kvm_vcpu *vcpu, bool is_read)
> +{
> + if (vcpu->run->msr.error) {
> + kvm_inject_gp(vcpu, 0);

Add a 'return 1;' here. RIP must not advance when the instruction raises a
fault, so we shouldn't fall through to kvm_skip_emulated_instruction().
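
I.e. something like this (untested sketch of the suggested change):

	static int complete_emulated_msr(struct kvm_vcpu *vcpu, bool is_read)
	{
		if (vcpu->run->msr.error) {
			/* the faulting instruction must not be skipped */
			kvm_inject_gp(vcpu, 0);
			return 1;
		}

		if (is_read) {
			kvm_rax_write(vcpu, (u32)vcpu->run->msr.data);
			kvm_rdx_write(vcpu, vcpu->run->msr.data >> 32);
		}

		return kvm_skip_emulated_instruction(vcpu);
	}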

>
> + } else if (is_read) {
> + kvm_rax_write(vcpu, (u32)vcpu->run->msr.data);
> + kvm_rdx_write(vcpu, vcpu->run->msr.data >> 32);
> + }
> +
> + return kvm_skip_emulated_instruction(vcpu);
> +}
> +
> +static int complete_emulated_rdmsr(struct kvm_vcpu *vcpu)
> +{
> + return complete_emulated_msr(vcpu, true);
> +}
> +
>
> /* For KVM_EXIT_INTERNAL_ERROR */
> /* Emulate instruction failed. */
> @@ -412,6 +414,15 @@ struct kvm_run {
> __u64 esr_iss;
> __u64 fault_ipa;
> } arm_nisv;
> + /* KVM_EXIT_X86_RDMSR / KVM_EXIT_X86_WRMSR */
> + struct {
> + __u8 error; /* user -> kernel */
> + __u8 pad[3];

__u8 pad[7] to maintain 8-byte alignment? Unless we can get away with fewer
bits for 'reason' and get them from 'pad'.
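
I.e. something like (sketch, laying the padding out explicitly so no
implicit padding sits in front of 'data'):

	/* KVM_EXIT_X86_RDMSR / KVM_EXIT_X86_WRMSR */
	struct {
		__u8 error; /* user -> kernel */
		__u8 pad[7];
		__u32 reason; /* kernel -> user */
		__u32 index; /* kernel -> user */
		__u64 data; /* kernel <-> user */
	} msr;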

> +#define KVM_MSR_EXIT_REASON_INVAL (1 << 0)
> + __u32 reason; /* kernel -> user */
> + __u32 index; /* kernel -> user */
> + __u64 data; /* kernel <-> user */
> + } msr;
> /* Fix the size of the union. */
> char padding[256];
> };
> @@ -1031,6 +1042,7 @@ struct kvm_ppc_resize_hpt {
> #define KVM_CAP_PPC_SECURE_GUEST 181
> #define KVM_CAP_HALT_POLL 182
> #define KVM_CAP_ASYNC_PF_INT 183
> +#define KVM_CAP_X86_USER_SPACE_MSR 184
>
> #ifdef KVM_CAP_IRQ_ROUTING
>
> --
> 2.17.1