[PATCH v11 07/10] mmu: spp: Re-enable SPP protection when EPT mapping changes

From: Yang Weijiang
Date: Sat Jan 18 2020 - 23:00:54 EST


Host page swapping/migration may change the translation in an
EPT leaf entry. If the target page is SPP protected, re-enable
SPP protection for it. When an SPPT mmu-page is reclaimed, there
is no need to clear the rmap, as an SPPT leaf entry (L4E) holds
no memory mapping.

Signed-off-by: Yang Weijiang <weijiang.yang@xxxxxxxxx>
---
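Note (not part of the commit): a minimal, self-contained sketch of the
check added to kvm_set_pte_rmapp() below. The values are assumptions:
FULL_SPP_ACCESS is taken to be the all-ones 32-bit sub-page bitmap
(every 128-byte sub-page writable), PT_SPP_MASK the SPP bit of an EPT
leaf entry, and spp_fixup_spte() is a hypothetical helper name; 'access'
stands in for the pointer returned by gfn_to_subpage_wp_info().

#include <stdint.h>

#define FULL_SPP_ACCESS   0xFFFFFFFFu     /* assumed: all 32 sub-pages writable */
#define PT_SPP_MASK       (1ULL << 61)    /* assumed: SPP bit in an EPT leaf entry */

/*
 * Re-apply the SPP bit to a rebuilt leaf SPTE when the gfn's sub-page
 * write-permission bitmap is present and not full access; a NULL bitmap
 * or full access means no sub-page write protection is needed.
 */
static uint64_t spp_fixup_spte(uint64_t new_spte, const uint32_t *access)
{
	if (access && *access != FULL_SPP_ACCESS)
		new_spte |= PT_SPP_MASK;
	return new_spte;
}
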
arch/x86/kvm/mmu/mmu.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index fe14f60928a2..099f92f0c42a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1918,6 +1918,19 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
new_spte &= ~PT_WRITABLE_MASK;
new_spte &= ~SPTE_HOST_WRITEABLE;

+ /*
+ * If this is an EPT leaf entry and the physical page is
+ * SPP protected, then re-enable SPP protection for
+ * the page.
+ */
+ if (kvm->arch.spp_active &&
+ level == PT_PAGE_TABLE_LEVEL) {
+ u32 *access = gfn_to_subpage_wp_info(slot, gfn);
+
+ if (access && *access != FULL_SPP_ACCESS)
+ new_spte |= PT_SPP_MASK;
+ }
+
new_spte = mark_spte_for_access_track(new_spte);

mmu_spte_clear_track_bits(sptep);
@@ -2768,6 +2781,10 @@ static bool mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
pte = *spte;
if (is_shadow_present_pte(pte)) {
if (is_last_spte(pte, sp->role.level)) {
+ /* SPPT leaf entries don't have rmaps */
+ if (sp->role.spp && sp->role.level ==
+ PT_PAGE_TABLE_LEVEL)
+ return true;
drop_spte(kvm, spte);
if (is_large_pte(pte))
--kvm->stat.lpages;
--
2.17.2