Re: [PATCH 1/5] KVM: x86: Add function to inject guest page fault with reserved bits set

From: Ben Gardon
Date: Thu Feb 27 2020 - 14:30:15 EST


On Thu, Feb 27, 2020 at 9:23 AM Mohammed Gamal <mgamal@xxxxxxxxxx> wrote:
>
> Signed-off-by: Mohammed Gamal <mgamal@xxxxxxxxxx>
> ---
> arch/x86/kvm/x86.c | 14 ++++++++++++++
> arch/x86/kvm/x86.h | 1 +
> 2 files changed, 15 insertions(+)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 359fcd395132..434c55a8b719 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10494,6 +10494,20 @@ u64 kvm_spec_ctrl_valid_bits(struct kvm_vcpu *vcpu)
> }
> EXPORT_SYMBOL_GPL(kvm_spec_ctrl_valid_bits);
>
> +void kvm_inject_rsvd_bits_pf(struct kvm_vcpu *vcpu, gpa_t gpa)
> +{
> +	struct x86_exception fault;
> +
> +	fault.vector = PF_VECTOR;
> +	fault.error_code_valid = true;
> +	fault.error_code = PFERR_RSVD_MASK;
> +	fault.nested_page_fault = false;
> +	fault.address = gpa;
> +
> +	kvm_inject_page_fault(vcpu, &fault);
> +}
> +EXPORT_SYMBOL_GPL(kvm_inject_rsvd_bits_pf);
> +

The later patches in this series leave some kvm_mmu_page_fault() call
sites in arch/x86/kvm/mmu/mmu.c without the check and the injected
page fault. Is the check not needed in those cases?
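
To make sure we're talking about the same thing, here's a rough sketch
of the kind of check I mean. This is purely illustrative, not taken
from the series: gpa_exceeds_guest_maxphyaddr() and the
handle_gpa_fault() wrapper are made-up names, and the actual patches
may compute the limit differently (e.g. via cpuid_maxphyaddr()).

static bool gpa_exceeds_guest_maxphyaddr(struct kvm_vcpu *vcpu, gpa_t gpa)
{
	/* Any GPA bit at or above the guest's MAXPHYADDR is reserved. */
	return gpa >> cpuid_maxphyaddr(vcpu);
}

static int handle_gpa_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code,
			    void *insn, int insn_len)
{
	if (gpa_exceeds_guest_maxphyaddr(vcpu, gpa)) {
		/* Reflect a reserved-bits #PF back to the guest. */
		kvm_inject_rsvd_bits_pf(vcpu, gpa);
		return 1;	/* re-enter the guest */
	}

	return kvm_mmu_page_fault(vcpu, gpa, error_code, insn, insn_len);
}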

> EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_exit);
> EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_fast_mmio);
> EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_inj_virq);
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index 3624665acee4..7d8ab28a6983 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -276,6 +276,7 @@ int kvm_mtrr_get_msr(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata);
> bool kvm_mtrr_check_gfn_range_consistency(struct kvm_vcpu *vcpu, gfn_t gfn,
> int page_num);
> bool kvm_vector_hashing_enabled(void);
> +void kvm_inject_rsvd_bits_pf(struct kvm_vcpu *vcpu, gpa_t gpa);
> int x86_emulate_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
> int emulation_type, void *insn, int insn_len);
> enum exit_fastpath_completion handle_fastpath_set_msr_irqoff(struct kvm_vcpu *vcpu);
> --
> 2.21.1
>