Re: [PATCH v5 3/4] KVM: arm64/mmu: use gfn_to_pfn_page

From: Sean Christopherson
Date: Thu Dec 30 2021 - 14:46:04 EST


On Mon, Nov 29, 2021, David Stevens wrote:
> @@ -1142,14 +1146,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>
>  	/* Mark the page dirty only if the fault is handled successfully */
>  	if (writable && !ret) {
> -		kvm_set_pfn_dirty(pfn);
> +		if (page)
> +			kvm_set_pfn_dirty(pfn);

If kvm_set_page_dirty() is changed to be less dumb, this can simply be:

		if (page)
			kvm_set_page_dirty(page);
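
For reference, a rough sketch of what that page-based helper might look like
(untested, not part of the posted series; the !PageReserved() check is an
assumption mirrored from kvm_set_pfn_dirty(), and the function would
presumably live in virt/kvm/kvm_main.c):

void kvm_set_page_dirty(struct page *page)
{
	/*
	 * Mirror the reserved-pfn check in kvm_set_pfn_dirty(); reserved
	 * pages shouldn't be dirtied by anyone other than their owner.
	 */
	if (!PageReserved(page))
		SetPageDirty(page);
}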

>  		mark_page_dirty_in_slot(kvm, memslot, gfn);
>  	}
>
>  out_unlock:
>  	spin_unlock(&kvm->mmu_lock);
> -	kvm_set_pfn_accessed(pfn);
> -	kvm_release_pfn_clean(pfn);
> +	if (page) {
> +		kvm_set_pfn_accessed(pfn);
> +		put_page(page);

Oof, KVM's helpers are stupid. Take a page, convert it to a pfn, then convert it
back to a page, just to mark it dirty or put a ref. Can you fold the below
(completely untested) patch in before the x86/arm64 patches? That way this code
can be:

	if (page)
		kvm_release_page_accessed(page);

and x86 can do:

	if (fault->page)
		kvm_release_page_clean(fault->page);

instead of open-coding put_page().
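
For illustration, the release-side helpers in such a patch might look roughly
like the below (untested sketch, not the actual patch referenced above;
kvm_release_page_accessed() is the new name from this mail, the
WARN_ON(is_error_page()) is carried over from the existing
kvm_release_page_clean(), and both assume the caller holds a reference on a
real struct page, which is what the "if (page)" guards above guarantee):

void kvm_release_page_clean(struct page *page)
{
	WARN_ON(is_error_page(page));

	/* Drop the reference directly, no page -> pfn -> page bouncing. */
	put_page(page);
}

void kvm_release_page_accessed(struct page *page)
{
	/* Reserved pages aren't A/D tracked, mirror kvm_set_pfn_accessed(). */
	if (!PageReserved(page))
		mark_page_accessed(page);

	put_page(page);
}

The whole point is that the helpers operate on the struct page the caller
already has in hand, so neither the callers nor the helpers need to bounce
through page_to_pfn()/pfn_to_page().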