On 07/06/2010 01:45 PM, Xiao Guangrong wrote:
> 'walk_addr' runs outside of mmu_lock's protection, so while we handle
> 'fetch', the guest's mapping may have been modified by another vcpu's
> write path, such as invlpg, pte_write, or another fetch path.
>
> Fix this by checking the mapping at every level.
> @@ -319,22 +319,23 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
>  	direct_access &= ~ACC_WRITE_MASK;
>
>  	for_each_shadow_entry(vcpu, addr, iterator) {
> +		bool nonpresent = false, last_mapping = false;
> +
I don't like these two new variables, but I have no better suggestion at the moment. I'll try to simplify this loop later.
One idea may be:

    while (level > walker.level) {
        /* handle indirect pages */
    }
    while (level > hlevel) {
        /* handle direct pages */
    }
    /* handle last spte */
I'm worried that this change is too big for backporting, but I have no suggestions on how to make it smaller, so we'll have to accept it.