[PATCH v2 1/1] kvm/mmu_notifier: re-enable the change_pte() optimization.

From: jglisse
Date: Wed Feb 20 2019 - 20:22:56 EST


From: Jérôme Glisse <jglisse@xxxxxxxxxx>

Since the mmu notifier rework, the change_pte() optimization was lost
for kvm. This re-enables it whenever a pte goes from read and write to
read only with the same pfn, or from read only to read and write with
a different pfn.
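
As an illustration only, the gating predicate could look something like
the sketch below; the flag name MMU_NOTIFIER_USE_CHANGE_PTE and the
range layout are assumptions for this sketch, not the interface this
patch relies on:

	/*
	 * Hypothetical sketch: the exact flag and field names are
	 * assumptions. The idea is that callers which follow the
	 * invalidation with set_pte_at_notify() tag the range so
	 * listeners can skip the range callbacks.
	 */
	static inline bool
	mmu_notifier_range_use_change_pte(const struct mmu_notifier_range *range)
	{
		return range->flags & MMU_NOTIFIER_USE_CHANGE_PTE;
	}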

It is safe to update the secondary MMUs because the primary MMU pte
invalidation must already have happened through ptep_clear_flush()
before set_pte_at_notify() is invoked (and thus before the change_pte()
callback runs).
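
The ordering in question, condensed from the copy-on-write path in
mm/memory.c (an illustrative sketch, not the verbatim upstream code):

	/* Sketch of the wp_page_copy() ordering; not verbatim upstream. */
	/* Invalidate the primary MMU pte and flush the TLB first ... */
	ptep_clear_flush(vma, address, ptep);
	/*
	 * ... then install the new pte. set_pte_at_notify() invokes the
	 * change_pte() notifier before set_pte_at(), so a secondary MMU
	 * only learns about the new pfn once the old primary mapping can
	 * no longer be used.
	 */
	set_pte_at_notify(mm, address, ptep, new_pte);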

Signed-off-by: Jérôme Glisse <jglisse@xxxxxxxxxx>
Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
Cc: Peter Xu <peterx@xxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---
virt/kvm/kvm_main.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 629760c0fb95..0f979f02bf1c 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -369,6 +369,14 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
int need_tlb_flush = 0, idx;
int ret;

+ /*
+ * Nothing to do when using change_pte(), which will be called for
+ * each individual pte update at the right time. See mmu_notifier.h
+ * for more information.
+ */
+ if (mmu_notifier_range_use_change_pte(range))
+ return 0;
+
idx = srcu_read_lock(&kvm->srcu);
spin_lock(&kvm->mmu_lock);
/*
@@ -399,6 +407,14 @@ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
{
struct kvm *kvm = mmu_notifier_to_kvm(mn);

+ /*
+ * Nothing to do when using change_pte(), which will be called for
+ * each individual pte update at the right time. See mmu_notifier.h
+ * for more information.
+ */
+ if (mmu_notifier_range_use_change_pte(range))
+ return;
+
spin_lock(&kvm->mmu_lock);
/*
* This sequence increase will notify the kvm page fault that
--
2.17.2