Re: [PATCH 2/6] KVM MMU: fix kvm_mmu_zap_page() and its calling path

From: Xiao Guangrong
Date: Mon Apr 12 2010 - 04:56:33 EST

Avi Kivity wrote:

>
>> kvm->arch.n_free_mmu_pages = 0;
>> @@ -1589,7 +1589,8 @@ static void mmu_unshadow(struct kvm *kvm, gfn_t gfn)
>>  		    && !sp->role.invalid) {
>>  			pgprintk("%s: zap %lx %x\n",
>>  				 __func__, gfn, sp->role.word);
>> -			kvm_mmu_zap_page(kvm, sp);
>> +			if (kvm_mmu_zap_page(kvm, sp))
>> +				nn = bucket->first;
>>  		}
>>  	}
>>
>
> I don't understand why this is needed.

Here is the relevant code segment in mmu_unshadow():

|	hlist_for_each_entry_safe(sp, node, nn, bucket, hash_link) {
|		if (sp->gfn == gfn && !sp->role.direct
|		    && !sp->role.invalid) {
|			pgprintk("%s: zap %lx %x\n",
|				 __func__, gfn, sp->role.word);
|			kvm_mmu_zap_page(kvm, sp);
|		}
|	}

In this loop, if the page saved in nn is zapped as a side effect of
kvm_mmu_zap_page(kvm, sp), the next iteration of hlist_for_each_entry_safe()
dereferences freed memory and crashes. The other callers on this path already
handle that case, e.g. kvm_mmu_zap_all() and kvm_mmu_unprotect_page()...
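
For illustration, this is roughly how the loop looks with the fix applied
(a sketch matching the hunk quoted above, assuming the return-value semantics
that this series gives kvm_mmu_zap_page(); not necessarily the final code):

	hlist_for_each_entry_safe(sp, node, nn, bucket, hash_link) {
		if (sp->gfn == gfn && !sp->role.direct
		    && !sp->role.invalid) {
			pgprintk("%s: zap %lx %x\n",
				 __func__, gfn, sp->role.word);
			/*
			 * A nonzero return means pages other than sp may
			 * have been zapped too, possibly including the one
			 * saved in 'nn', so rescan from the bucket head.
			 */
			if (kvm_mmu_zap_page(kvm, sp))
				nn = bucket->first;
		}
	}

Any page already zapped is no longer on the hash list, so restarting from
bucket->first cannot visit it again.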

Thanks,
Xiao
