Re: [PATCH 22/27] KVM: MMU: Refactor mmu_alloc_roots function

From: Marcelo Tosatti
Date: Tue Sep 07 2010 - 17:34:49 EST


On Mon, Sep 06, 2010 at 05:56:01PM +0200, Joerg Roedel wrote:
> This patch factors out the direct-mapping paths of the
> mmu_alloc_roots function into a separate function. This
> makes it a lot easier to avoid all the unnecessary checks
> done in the shadow path which may break when running direct.
> In fact, this patch already fixes a problem when running PAE
> guests on a PAE shadow page table.
>
> Signed-off-by: Joerg Roedel <joerg.roedel@xxxxxxx>
> ---
> arch/x86/kvm/mmu.c | 82 ++++++++++++++++++++++++++++++++++++++--------------
> 1 files changed, 60 insertions(+), 22 deletions(-)
>
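
For reference, the hunks elided below presumably reduce mmu_alloc_roots() itself to a small dispatcher over the two new helpers (names assumed from the description above; only mmu_alloc_shadow_roots() is visible in the quoted hunk), along these lines:

static int mmu_alloc_roots(struct kvm_vcpu *vcpu)
{
	/*
	 * Direct-mapped (TDP) roots never reference a guest page table,
	 * so they take the simple path; shadow roots have to follow the
	 * guest's CR3/PDPTRs and are built in the function quoted below.
	 */
	if (vcpu->arch.mmu.direct_map)
		return mmu_alloc_direct_roots(vcpu);
	else
		return mmu_alloc_shadow_roots(vcpu);
}
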
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 3663d1c..e7e5527 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c

> +
> +static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
> {
> int i;
> gfn_t root_gfn;
> struct kvm_mmu_page *sp;
> - int direct = 0;
> u64 pdptr;
>
> root_gfn = vcpu->arch.mmu.get_cr3(vcpu) >> PAGE_SHIFT;
>
> - if (vcpu->arch.mmu.shadow_root_level == PT64_ROOT_LEVEL) {
> + if (mmu_check_root(vcpu, root_gfn))
> + return 1;
> +
> + /*
> + * Do we shadow a long mode page table? If so we need to
> + * write-protect the guest's page table root.
> + */
> + if (vcpu->arch.mmu.root_level == PT64_ROOT_LEVEL) {
> hpa_t root = vcpu->arch.mmu.root_hpa;
>
> ASSERT(!VALID_PAGE(root));
> - if (mmu_check_root(vcpu, root_gfn))
> - return 1;
> - if (vcpu->arch.mmu.direct_map) {
> - direct = 1;
> - root_gfn = 0;
> - }
> +
> spin_lock(&vcpu->kvm->mmu_lock);
> kvm_mmu_free_some_pages(vcpu);
> - sp = kvm_mmu_get_page(vcpu, root_gfn, 0,
> - PT64_ROOT_LEVEL, direct,
> - ACC_ALL, NULL);
> + sp = kvm_mmu_get_page(vcpu, root_gfn, 0, PT64_ROOT_LEVEL,
> + 0, ACC_ALL, NULL);
> root = __pa(sp->spt);
> ++sp->root_count;
> spin_unlock(&vcpu->kvm->mmu_lock);
> vcpu->arch.mmu.root_hpa = root;
> return 0;
> }
> - direct = !is_paging(vcpu);
> -
> - if (mmu_check_root(vcpu, root_gfn))
> - return 1;
>
> + /*
> + * We shadow a 32 bit page table. This may be a legacy 2-level
> + * or a PAE 3-level page table.
> + */
> for (i = 0; i < 4; ++i) {
> hpa_t root = vcpu->arch.mmu.pae_root[i];
>
> @@ -2406,16 +2441,11 @@ static int mmu_alloc_roots(struct kvm_vcpu *vcpu)
> root_gfn = pdptr >> PAGE_SHIFT;
> if (mmu_check_root(vcpu, root_gfn))
> return 1;
> - } else if (vcpu->arch.mmu.root_level == 0)
> - root_gfn = 0;
> - if (vcpu->arch.mmu.direct_map) {
> - direct = 1;
> - root_gfn = i << 30;
> }
> spin_lock(&vcpu->kvm->mmu_lock);
> kvm_mmu_free_some_pages(vcpu);
> sp = kvm_mmu_get_page(vcpu, root_gfn, i << 30,
> - PT32_ROOT_LEVEL, direct,
> + PT32_ROOT_LEVEL, 0,
> ACC_ALL, NULL);

Should not write-protect the gfn for nonpaging mode.
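
In the removed code the nonpaging case forced root_gfn to 0 (the dropped "root_level == 0" branch above), so kvm_mmu_get_page() never write-protected whatever gfn CR3 happens to hold while the guest is not paging. Keeping an equivalent check in mmu_alloc_shadow_roots(), roughly like the sketch below, would preserve that behaviour (routing the nonpaging case through the direct-root path would be another option):

		} else if (vcpu->arch.mmu.root_level == 0) {
			/* Nonpaging guest: there is no guest table to protect. */
			root_gfn = 0;
		}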
