Re: [RFC PATCH] kvm: calculate correct gfn for small host pages which emulates large guest pages

From: Avi Kivity
Date: Mon May 10 2010 - 04:56:06 EST


On 04/30/2010 05:41 AM, Lai Jiangshan wrote:
Lai Jiangshan wrote:
RFC, because maybe I missing something with the old code.

From: Lai Jiangshan <laijs@xxxxxxxxxxxxxx>

In Documentation/kvm/mmu.txt:
gfn:
Either the guest page table containing the translations shadowed by this
page, or the base page frame for linear translations. See role.direct.

But in FNAME(fetch)(), sp->gfn is incorrect when one of the following
situations occurs:
1) The guest uses 32-bit paging with pse-36, and the guest PDE maps
a 4-MByte page (backed by 4k host pages) while bits 20:13 of the guest
PDE are not all zero.
2) The guest uses long mode paging, and the guest PDPTE maps a 1-GByte
page (backed by 4k or 2M host pages).

Resending this patch with an updated changelog.

As Marcelo Tosatti and Gui Jianfeng point out, FNAME(fetch)() misses
the quadrant when emulating a guest 4MB large page with shadow paging.

Subject: [PATCH] kvm: calculate correct gfn for small host pages which emulates large guest pages

In Documentation/kvm/mmu.txt:
gfn:
Either the guest page table containing the translations shadowed by this
page, or the base page frame for linear translations. See role.direct.

But in FNAME(fetch)(), sp->gfn is incorrect when one of the following
situations occurs:
1) The guest uses 32-bit paging and the guest PDE maps a 4-MByte page
(backed by 4k host pages); FNAME(fetch)() fails to handle the quadrant.

Moreover, if the guest uses pse-36,
"table_gfn = gpte_to_gfn(gw->ptes[level - delta]);" is incorrect.
2) The guest uses long mode paging and the guest PDPTE maps a 1-GByte
page (backed by 4k or 2M host pages).

So fix it to match the documentation and to satisfy the code that
requires sp->gfn to be correct when sp->role.direct = 1.

We use the target mapping gfn (gw->gfn) to calculate the base page frame
for linear translations; this is simpler and easier to understand.

Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxx>
---
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 702c016..958e9c6 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -338,10 +338,13 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 			direct = 1;
 			if (!is_dirty_gpte(gw->ptes[level - delta]))
 				access &= ~ACC_WRITE_MASK;
-			table_gfn = gpte_to_gfn(gw->ptes[level - delta]);
-			/* advance table_gfn when emulating 1gb pages with 4k */
-			if (delta == 0)
-				table_gfn += PT_INDEX(addr, level);
+			/*
+			 * This is a large guest page backed by small host
+			 * pages, so set @direct (@shadow_page->role.direct)
+			 * to 1, and set @table_gfn (@shadow_page->gfn) to
+			 * the base page frame for linear translations.
+			 */
+			table_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
 		} else {
 			direct = 0;
 			table_gfn = gw->table_gfn[level - 2];

Looks good; indeed it is a lot easier to understand than the original calculation. (A minor issue is that the variable name is misleading, but that's a problem with the kvm_mmu_page definition, not this patch.)

--
error compiling committee.c: too many arguments to function

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/