In a previous patch, we removed the 'nr_to_scan' tracking.
It was not being used to track the number of objects
scanned, so we stopped using it entirely. Here, we
start using it again.
The theory here is simple: if we already have the refcount
and the kvm->mmu_lock, then we should do as much work as
possible under the lock. The downside is that we're less
fair about the KVM instances from which we reclaim. Each
call to mmu_shrink() will tend to "pick on" one instance,
after which it gets moved to the end of the list and left
alone for a while.
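Roughly, the loop I'm describing is shaped like this. This is a
sketch only, not the exact mmu_shrink() from this series:
kvm_try_get_kvm() is a stand-in name for the refcount attempt, and
the shrinker callback signature is simplified.

#include <linux/kvm_host.h>

/* Stand-in prototypes for this sketch, not helpers from the series. */
static bool kvm_try_get_kvm(struct kvm *kvm);
static int shrink_kvm_mmu(struct kvm *kvm, int nr_to_scan);

static void mmu_shrink_sketch(int nr_to_scan)
{
	struct kvm *kvm, *victim = NULL;

	spin_lock(&kvm_lock);
	list_for_each_entry(kvm, &vm_list, vm_list) {
		if (!kvm_try_get_kvm(kvm))
			continue;
		victim = kvm;
		/*
		 * Rotate the chosen instance to the tail of vm_list so
		 * later calls "pick on" someone else and this one is
		 * left alone for a while.
		 */
		list_move_tail(&kvm->vm_list, &vm_list);
		break;
	}
	spin_unlock(&kvm_lock);

	if (victim) {
		/* Do as much work as possible while we hold the ref. */
		shrink_kvm_mmu(victim, nr_to_scan);
		kvm_put_kvm(victim);
	}
}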
The use of 'nr_to_scan' inside shrink_kvm_mmu() also ensures
that we do not over-reclaim when mmu_shrink() has already done
a significant amount of scanning earlier in the call.
In the end, this patch defines a "scan" as:
1. An attempt to acquire a refcount on a 'struct kvm'
2. Freeing a kvm mmu page
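In caller terms, the accounting that definition implies looks
roughly like this. Again a sketch, with kvm_try_get_kvm() as a
stand-in name rather than a helper from this series:

static void mmu_scan_accounting_sketch(int nr_to_scan)
{
	struct kvm *kvm;

	spin_lock(&kvm_lock);
	list_for_each_entry(kvm, &vm_list, vm_list) {
		nr_to_scan--;	/* (1) one scan per refcount attempt */
		if (!kvm_try_get_kvm(kvm))
			continue;
		spin_unlock(&kvm_lock);
		/*
		 * (2) shrink_kvm_mmu() charges one further scan per
		 * mmu page it frees, by decrementing the budget it is
		 * handed (see the hunk below).
		 */
		shrink_kvm_mmu(kvm, nr_to_scan);
		kvm_put_kvm(kvm);
		return;
	}
	spin_unlock(&kvm_lock);
}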
It would probably be ideal if we could also expose some of the
work done by kvm_mmu_remove_some_alloc_mmu_pages() as counting
toward the scan total, but I think we have churned enough for
the moment.
Signed-off-by: Dave Hansen <dave@xxxxxxxxxxxxxxxxxx>
---
linux-2.6.git-dave/arch/x86/kvm/mmu.c | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)
diff -puN arch/x86/kvm/mmu.c~make-shrinker-more-aggressive arch/x86/kvm/mmu.c
--- linux-2.6.git/arch/x86/kvm/mmu.c~make-shrinker-more-aggressive 2010-06-14 11:30:44.000000000 -0700
+++ linux-2.6.git-dave/arch/x86/kvm/mmu.c 2010-06-14 11:38:04.000000000 -0700
@@ -2935,8 +2935,10 @@ static int shrink_kvm_mmu(struct kvm *kv
idx = srcu_read_lock(&kvm->srcu);
spin_lock(&kvm->mmu_lock);
- if (kvm->arch.n_used_mmu_pages > 0)
- freed_pages = kvm_mmu_remove_some_alloc_mmu_pages(kvm);
+ while (nr_to_scan > 0 && kvm->arch.n_used_mmu_pages > 0) {
+ freed_pages += kvm_mmu_remove_some_alloc_mmu_pages(kvm);
+ nr_to_scan--;
+ }
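For reference, shrink_kvm_mmu() with this hunk applied comes out
roughly as follows; the unlock/return tail is reconstructed from the
locking calls visible in the context above, not copied from the file:

static int shrink_kvm_mmu(struct kvm *kvm, int nr_to_scan)
{
	int idx;
	int freed_pages = 0;

	idx = srcu_read_lock(&kvm->srcu);
	spin_lock(&kvm->mmu_lock);

	/* Keep freeing pages until the scan budget is exhausted. */
	while (nr_to_scan > 0 && kvm->arch.n_used_mmu_pages > 0) {
		freed_pages += kvm_mmu_remove_some_alloc_mmu_pages(kvm);
		nr_to_scan--;
	}

	spin_unlock(&kvm->mmu_lock);
	srcu_read_unlock(&kvm->srcu, idx);

	return freed_pages;
}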