Avi Kivity wrote:
> On 05/30/2010 03:37 PM, Xiao Guangrong wrote:
>> Using kvm_mmu_prepare_zap_page() and kvm_mmu_commit_zap_page() to
>> split the kvm_mmu_zap_page() function, then we can:
>> - traverse the hlist safely
>> - easily gather the remote tlb flushes which occur while pages are being zapped
>>
>> +static int kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp)
>> +{
>> +	int ret;
>> +
>> +	trace_kvm_mmu_zap_page(sp);
>> +	++kvm->stat.mmu_shadow_zapped;
>> +	ret = mmu_zap_unsync_children(kvm, sp);
>> +	kvm_mmu_page_unlink_children(kvm, sp);
>> +	kvm_mmu_unlink_parents(kvm, sp);
>> +	if (!sp->role.invalid && !sp->role.direct)
>> +		unaccount_shadowed(kvm, sp->gfn);
>> +	if (sp->unsync)
>> +		kvm_unlink_unsync_page(kvm, sp);
>> +	if (!sp->root_count)
>> +		/* Count self */
>> +		ret++;
>> +	else
>> +		kvm_reload_remote_mmus(kvm);
>> +
>> +	sp->role.invalid = 1;
>> +	list_move(&sp->link, &kvm->arch.invalid_mmu_pages);
>> +	kvm_mmu_reset_last_pte_updated(kvm);
>> +	return ret;
>> +}
>> +
>> +static void kvm_mmu_commit_zap_page(struct kvm *kvm)
>> +{
>> +	struct kvm_mmu_page *sp, *n;
>> +
>> +	if (list_empty(&kvm->arch.invalid_mmu_pages))
>> +		return;
>> +
>> +	kvm_flush_remote_tlbs(kvm);
>> +	list_for_each_entry_safe(sp, n, &kvm->arch.invalid_mmu_pages, link) {
>> +		WARN_ON(!sp->role.invalid);
>> +		if (!sp->root_count)
>> +			kvm_mmu_free_page(kvm, sp);
>> +	}
>> +}
>> +
>
> You're adding two new functions but not using them here? Possibly in
> the old kvm_mmu_zap_page()?

I use them in the next patch; they're not called from the old kvm_mmu_zap_page(), they're used like this:

hold mmu spin lock
kvm_mmu_prepare_zap_page page A
kvm_mmu_prepare_zap_page page B
kvm_mmu_prepare_zap_page page C
......
kvm_mmu_commit_zap_page
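
To make the sequence above concrete, a caller could look something like the sketch below. This is illustrative only, not from the series: should_zap() stands in for whatever policy picks the victim pages; everything else uses only what this patch introduces.

/*
 * Illustrative sketch: zap a batch of shadow pages under a single
 * mmu_lock hold and pay for one remote TLB flush in
 * kvm_mmu_commit_zap_page(). should_zap() is a hypothetical policy
 * helper, not a real function.
 */
static void example_zap_batch(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *n;

	spin_lock(&kvm->mmu_lock);
	/*
	 * The _safe iterator matters here: kvm_mmu_prepare_zap_page()
	 * moves sp from this list onto kvm->arch.invalid_mmu_pages.
	 */
	list_for_each_entry_safe(sp, n, &kvm->arch.active_mmu_pages, link) {
		if (should_zap(sp))
			kvm_mmu_prepare_zap_page(kvm, sp);
	}
	kvm_mmu_commit_zap_page(kvm);	/* one flush for the whole batch */
	spin_unlock(&kvm->mmu_lock);
}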

> This is a good idea, but belongs in a separate patch? We can use it to
> reclaim invalid pages before allocating new ones.

This patch is very simple, and the kvm_mmu_commit_zap_page() function should depend on
kvm->arch.invalid_mmu_pages, so I think we don't need to split this patch. What's your opinion? :-)
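
If it helps the discussion of the reclaim suggestion, that idea might look roughly like this (again only a sketch; the function name is made up):

/*
 * Sketch of the reclaim-before-allocate idea above (illustrative, not
 * part of this series). Caller holds mmu_lock. Pages parked on
 * invalid_mmu_pages by earlier kvm_mmu_prepare_zap_page() calls are
 * freed here before we go to the allocator for a fresh shadow page.
 */
static void example_reclaim_before_alloc(struct kvm *kvm)
{
	/* no-op when invalid_mmu_pages is empty, so cheap normally */
	kvm_mmu_commit_zap_page(kvm);
	/* ... then proceed with the usual shadow page allocation ... */
}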