Re: [PATCH 10/11] KVM: MMU: fix detecting misaligned accesses

From: Avi Kivity
Date: Wed Jul 27 2011 - 05:15:38 EST


On 07/26/2011 02:31 PM, Xiao Guangrong wrote:
Sometimes we modify only a single byte of a pte to update a status bit; for
example, the Linux kernel uses clear_bit() to clear the r/w bit, and that
function is implemented with the 'andb' instruction. In this case
kvm_mmu_pte_write() treats the write as a misaligned access and zaps the
shadow page.

@@ -3597,6 +3597,14 @@ static bool detect_write_misaligned(struct kvm_mmu_page *sp, gpa_t gpa,

offset = offset_in_page(gpa);
pte_size = sp->role.cr4_pae ? 8 : 4;
+
+ /*
+ * Sometimes, the OS writes only a single byte to update status
+ * bits; for example, in Linux, the andb instruction is used in clear_bit().
+ */
+ if (sp->role.level == 1 && !(offset & (pte_size - 1)) && bytes == 1)
+ return false;
+
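
For reference, here is a minimal stand-alone sketch (my own illustration, not
kernel code; the helper name and values are made up) of the classification the
hunk adds, assuming 8-byte ptes:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical mirror of the added test: a 1-byte write at a pte-aligned
 * offset in a last-level (level == 1) shadow page is taken to be a
 * status-bit update (clear_bit()/'andb' on the byte holding r/w, accessed
 * and dirty), not a misaligned access.
 */
static bool looks_like_status_bit_update(uint64_t gpa, int bytes,
					 int pte_size, int level)
{
	uint64_t offset = gpa & 0xfff;		/* offset_in_page() */

	return level == 1 && !(offset & (pte_size - 1)) && bytes == 1;
}

int main(void)
{
	/* 'andb' on the low byte of the pte at page offset 0x88 */
	printf("%d\n", looks_like_status_bit_update(0x1088, 1, 8, 1)); /* 1 */
	/* 2-byte write starting mid-pte is still treated as misaligned */
	printf("%d\n", looks_like_status_bit_update(0x108a, 2, 8, 1)); /* 0 */
	return 0;
}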

Could be true for level > 1, no?
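
For what it's worth, a sketch of the relaxation that question points at (an
assumption on my part, not a tested patch): guest page-table entries at
level > 1 keep their r/w and accessed bits (and, for large pages, the dirty
bit) in the same low byte, so the same single-byte 'andb' pattern can hit
them and the level test could simply be dropped:

	if (!(offset & (pte_size - 1)) && bytes == 1)
		return false;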

--
error compiling committee.c: too many arguments to function
