[PATCH v9 08/10] mmu: spp: Handle SPP protected pages when VM memory changes

From: Yang Weijiang
Date: Fri Dec 06 2019 - 03:25:26 EST


Host page swapping/migration may change the translation in an
EPT leaf entry. If the target page is SPP-protected, re-enable
SPP protection for it in the MMU notifier. If an SPPT shadow
page is reclaimed, its level-1 pages have no rmap to clear.

Signed-off-by: Yang Weijiang <weijiang.yang@xxxxxxxxx>
---
arch/x86/kvm/mmu/mmu.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0e2651f3e30c..f39096735e75 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1828,6 +1828,19 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
new_spte &= ~PT_WRITABLE_MASK;
new_spte &= ~SPTE_HOST_WRITEABLE;

+ /*
+ * If this is an EPT leaf entry and the physical page is
+ * SPP protected, then re-enable SPP protection for
+ * the page.
+ */
+ if (kvm->arch.spp_active &&
+ level == PT_PAGE_TABLE_LEVEL) {
+ u32 *access = gfn_to_subpage_wp_info(slot, gfn);
+
+ if (access && *access != FULL_SPP_ACCESS)
+ new_spte |= PT_SPP_MASK;
+ }
+
new_spte = mark_spte_for_access_track(new_spte);

mmu_spte_clear_track_bits(sptep);
@@ -2677,6 +2690,10 @@ static bool mmu_page_zap_pte(struct kvm *kvm, struct kvm_mmu_page *sp,
pte = *spte;
if (is_shadow_present_pte(pte)) {
if (is_last_spte(pte, sp->role.level)) {
+ /* SPPT leaf entries don't have rmaps */
+ if (sp->role.level == PT_PAGE_TABLE_LEVEL &&
+ is_spp_spte(sp))
+ return true;
drop_spte(kvm, spte);
if (is_large_pte(pte))
--kvm->stat.lpages;
--
2.17.2