See how the pte is reread inside fetch with mmu_lock held. It looks like something is broken in the 'fetch' function; this patch fixes it.
Subject: [PATCH] KVM: MMU: fix last level broken in FNAME(fetch)
We read the guest page tables outside of mmu_lock, so sometimes the host mapping
can become inconsistent with the guest mapping. Consider this case:
VCPU0:                                  VCPU1:

Read the guest mapping; assume it is:
  GLV3 -> GLV2 -> GLV1 -> GFNA
and in the host, the corresponding
mapping is:
  HLV3 -> HLV2 -> HLV1 (P=0)

                                        Write GLV1 and cause the
                                        mapping to point to GFNB
                                        (may occur in the pte_write
                                        or invlpg path)

Mapping GLV1 to GFNA
This issue only occurs at the last level of the indirect mapping: if a middle
level mapping is changed, the corresponding shadow page is zapped and the
change is detected in the FNAME(fetch) path, but when the last level is
mapped, no such check is done. Fix it by also checking the last level.
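
The hunks below call a new helper, FNAME(check_level_mapping), whose
definition is not quoted here. A minimal sketch of what it might look like,
inferred from the kvm_read_guest_atomic check it replaces (the exact indexing
and return convention are assumptions), is:

static bool FNAME(check_level_mapping)(struct kvm_vcpu *vcpu,
				       struct guest_walker *gw, int level)
{
	pt_element_t curr_pte;
	int r;

	/* Re-read the guest pte for this level, now under mmu_lock. */
	r = kvm_read_guest_atomic(vcpu->kvm, gw->pte_gpa[level - 1],
				  &curr_pte, sizeof(curr_pte));

	/* True only if the read succeeded and the pte is unchanged. */
	return !r && curr_pte == gw->ptes[level - 1];
}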
@@ -322,6 +334,12 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
level = iterator.level;
sptep = iterator.sptep;
if (iterator.level == hlevel) {
+ if (check && level == gw->level &&
+ !FNAME(check_level_mapping)(vcpu, gw, hlevel)) {
+ kvm_release_pfn_clean(pfn);
+ break;
+ }
+
mmu_set_spte(vcpu, sptep, access,
gw->pte_access & access,
user_fault, write_fault,
@@ -376,10 +394,10 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
sp = kvm_mmu_get_page(vcpu, table_gfn, addr, level-1,
direct, access, sptep);
if (!direct) {
- r = kvm_read_guest_atomic(vcpu->kvm,
- gw->pte_gpa[level - 2],
- &curr_pte, sizeof(curr_pte));
- if (r || curr_pte != gw->ptes[level - 2]) {
+ if (hlevel == level - 1)
+ check = false;
+
+ if (!FNAME(check_level_mapping)(vcpu, gw, level - 1)) {