Re: [PATCH 2/2] KVM: x86/mmu: Add helper to consolidate huge page promotion

From: Paolo Bonzini
Date: Wed Nov 06 2019 - 12:22:34 EST


On 06/11/19 18:07, Sean Christopherson wrote:
>           */
> -        if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
> -            !kvm_is_zone_device_pfn(pfn) && level == PT_PAGE_TABLE_LEVEL &&
> -            PageTransCompoundMap(pfn_to_page(pfn)) &&
> +        if (level == PT_PAGE_TABLE_LEVEL && kvm_is_hugepage_allowed(pfn) &&
>              !mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL)) {
>                  unsigned long mask;
>                  /*
> @@ -5914,9 +5919,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
>                   * the guest, and the guest page table is using 4K page size
>                   * mapping if the indirect sp has level = 1.
>                   */
> -                if (sp->role.direct && !kvm_is_reserved_pfn(pfn) &&
> -                    !kvm_is_zone_device_pfn(pfn) &&
> -                    PageTransCompoundMap(pfn_to_page(pfn))) {
> +                if (sp->role.direct && kvm_is_hugepage_allowed(pfn)) {
>                          pte_list_remove(rmap_head, sptep);

I don't think the is_error_noslot_pfn(pfn) check makes sense in
kvm_mmu_zap_collapsible_spte(), so I'd rather keep it in
transparent_hugepage_adjust().  Actually, at this point only the noslot
case is still possible, because error pfns have already been sieved out
earlier by handle_abnormal_pfn(), so perhaps

        if (WARN_ON_ONCE(is_error_pfn(pfn)) || is_noslot_pfn(pfn))
                return;

        if (level == PT_PAGE_TABLE_LEVEL &&
            kvm_is_hugepage_allowed(pfn) &&
            !mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL))

would be the best option.
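
With that, the helper itself would reduce to a pure property-of-the-pfn
check (a sketch, under the same assumption about its current shape as
above):

        static bool kvm_is_hugepage_allowed(kvm_pfn_t pfn)
        {
                return !kvm_is_reserved_pfn(pfn) && !kvm_is_zone_device_pfn(pfn) &&
                       PageTransCompoundMap(pfn_to_page(pfn));
        }

and the kvm_mmu_zap_collapsible_spte() hunk could stay as it is.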

Paolo