On 07/03/2010 03:31 PM, Xiao Guangrong wrote:
> Avi Kivity wrote:
>>	if (!direct) {
>>		r = kvm_read_guest_atomic(vcpu->kvm,
>>					  gw->pte_gpa[level - 2],
>>					  &curr_pte, sizeof(curr_pte));
>>		if (r || curr_pte != gw->ptes[level - 2]) {
>>			kvm_mmu_put_page(shadow_page, sptep);
>>			kvm_release_pfn_clean(pfn);
>>			sptep = NULL;
>>			break;
>>		}
>>	}
>>
>> the code you moved... under what scenario is it not sufficient?
>
> I did not move that code, I just replaced it with a common function,
> FNAME(check_level_mapping)(), which does the same work.
>
> And this check is not sufficient, because it only catches the case where
> the mapping was zapped or never existed; in other words, it is only
> reached when this condition is broken:
>
>	is_shadow_present_pte(*sptep) && !is_large_pte(*sptep)
>
> If the middle level is present and is not a large mapping, the check is
> skipped.
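
To make that point concrete, here is a minimal, self-contained sketch of the
control flow being discussed. It only models the shadow-walk loop in
FNAME(fetch): the two predicates use made-up bit layouts, walk_level() is a
hypothetical helper, and in the real code the guest-pte re-read additionally
requires the !direct case; this is an illustration, not the kernel code.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the real shadow-pte predicates. */
static bool is_shadow_present_pte(unsigned long spte) { return spte & 1; }
static bool is_large_pte(unsigned long spte)          { return spte & 2; }

/*
 * Models one iteration of the walk in FNAME(fetch): the guest-pte
 * re-read (kvm_read_guest_atomic() + curr_pte comparison) only runs
 * when a new shadow page has to be installed, i.e. when the level is
 * NOT already mapped by a present, non-large spte.  If the middle
 * level is present and not large, the loop takes the "continue" path
 * and the re-check is skipped, which is the case described above.
 */
static void walk_level(unsigned long spte)
{
	if (is_shadow_present_pte(spte) && !is_large_pte(spte)) {
		printf("spte %#lx: level already mapped, guest pte NOT re-checked\n",
		       spte);
		return;	/* the "continue" path in the real loop */
	}
	/* In the real code this branch also requires !direct. */
	printf("spte %#lx: new shadow page needed, guest pte re-checked\n",
	       spte);
}

int main(void)
{
	walk_level(0x0);	/* not present         -> re-check runs    */
	walk_level(0x3);	/* present and large   -> re-check runs    */
	walk_level(0x1);	/* present, not large  -> re-check skipped */
	return 0;
}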
Well, in the description, it looked like everything was using small pages (in kvm, level=1 means the PTE level; we need to change this one day). Please describe it again and say exactly when the guest or host uses large pages.
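
For reference, a rough sketch of the level numbering being referred to, based
on the x86 KVM headers of that era; the values are from memory and worth
double-checking against the actual tree:

/*
 * x86 KVM paging levels (arch/x86/include/asm/kvm_host.h, circa 2.6.3x):
 *
 *   PT_PAGE_TABLE_LEVEL (1) - PTE level,  4KB pages (small pages)
 *   PT_DIRECTORY_LEVEL  (2) - PDE level,  2MB large pages
 *   PT_PDPE_LEVEL       (3) - PDPE level, 1GB large pages
 *
 * "Everything uses small pages" therefore means every walk bottoms out
 * at level 1; a large mapping terminates the walk at level 2 or 3.
 */
#define PT_PAGE_TABLE_LEVEL	1
#define PT_DIRECTORY_LEVEL	2
#define PT_PDPE_LEVEL		3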