Re: [PATCH 4/6] kvm/x86/mmu: handle invlpg on large pages
From: Joerg Roedel
Date: Fri Mar 06 2009 - 08:06:56 EST
On Thu, Mar 05, 2009 at 06:11:22PM -0300, Marcelo Tosatti wrote:
> On Thu, Mar 05, 2009 at 01:12:31PM +0100, Joerg Roedel wrote:
> > Signed-off-by: Joerg Roedel <joerg.roedel@xxxxxxx>
> > ---
> > arch/x86/kvm/paging_tmpl.h | 12 +++++++++---
> > 1 files changed, 9 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> > index 79668ba..aa79396 100644
> > --- a/arch/x86/kvm/paging_tmpl.h
> > +++ b/arch/x86/kvm/paging_tmpl.h
> > @@ -441,6 +441,7 @@ out_unlock:
> > static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
> > {
> > struct kvm_shadow_walk_iterator iterator;
> > + struct kvm_mmu_page *sp;
> > pt_element_t gpte;
> > gpa_t pte_gpa = -1;
> > int level;
> > @@ -451,12 +452,17 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
> > for_each_shadow_entry(vcpu, gva, iterator) {
> > level = iterator.level;
> > sptep = iterator.sptep;
> > + sp = page_header(__pa(sptep));
> > +
> > + if (sp->role.direct) {
> > + /* mapped from a guest's large_pte */
> > + kvm_mmu_zap_page(vcpu->kvm, sp);
> > + kvm_flush_remote_tlbs(vcpu->kvm);
> > + return;
> > + }
>
> If the guest has 32-bit ptes, there might be:
>
> - two large shadow entries to cover 4MB
> - one large shadow entry and one shadow page with 512 4k entries
> - two shadow pages with 512 4k entries each
>
> So we need to cover all these cases.
Right. Thanks for pointing this out. I will post an updated version of
this patch.
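Probably something along these lines (a completely untested sketch; it
leaves out the existing 4k-pte/pte_gpa handling for brevity, and the
unconditional sibling-half invalidation under PTTYPE == 32 is
conservative, so a real version should probably check the guest pde
first):

	static void FNAME(invlpg_one)(struct kvm_vcpu *vcpu, gva_t gva)
	{
		struct kvm_shadow_walk_iterator iterator;
		struct kvm_mmu_page *sp;
		u64 *sptep;

		/* caller holds vcpu->kvm->mmu_lock */
		for_each_shadow_entry(vcpu, gva, iterator) {
			sptep = iterator.sptep;
			sp = page_header(__pa(sptep));

			if (iterator.level == PT_DIRECTORY_LEVEL &&
			    is_large_pte(*sptep)) {
				/* a large shadow entry maps this half */
				if (is_shadow_present_pte(*sptep)) {
					rmap_remove(vcpu->kvm, sptep);
					--vcpu->kvm->stat.lpages;
				}
				set_shadow_pte(sptep, shadow_trap_nonpresent_pte);
				kvm_flush_remote_tlbs(vcpu->kvm);
				return;
			}

			if (sp->role.direct) {
				/* this half was split into 4k sptes */
				kvm_mmu_zap_page(vcpu->kvm, sp);
				kvm_flush_remote_tlbs(vcpu->kvm);
				return;
			}

			if (!is_shadow_present_pte(*sptep))
				return;
		}
	}

	static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva)
	{
		spin_lock(&vcpu->kvm->mmu_lock);

		FNAME(invlpg_one)(vcpu, gva);
	#if PTTYPE == 32
		/*
		 * A non-PAE guest's 4MB pte is shadowed at the 2MB
		 * level, so each half can independently be a large
		 * shadow entry or a direct shadow page.  Invalidate
		 * the sibling half as well.
		 */
		FNAME(invlpg_one)(vcpu, gva ^ (1ULL << 21));
	#endif

		spin_unlock(&vcpu->kvm->mmu_lock);
	}
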
Joerg
--
| Advanced Micro Devices GmbH
Operating | Karl-Hammerschmidt-Str. 34, 85609 Dornach bei München
System |
Research | Geschäftsführer: Jochen Polster, Thomas M. McCoy, Giuliano Meroni
Center | Sitz: Dornach, Gemeinde Aschheim, Landkreis München
| Registergericht München, HRB Nr. 43632