Re: [PATCH] kvm cleanup: Introduce sibling_pte and do cleanup for reverse map and parent_pte

From: Avi Kivity
Date: Tue Aug 03 2010 - 02:51:43 EST

On 08/03/2010 05:30 AM, Lai Jiangshan wrote:
> This patch is just a big cleanup; it removes about 220 lines of code.
>
> It introduces a sibling_pte array for tracking identical sptes, so that
> identical sptes can be linked into a singly linked list through their
> corresponding sibling_pte entries. A reverse map or a parent_pte then
> points at the head of this singly linked list, which lets us simplify
> the reverse map and parent_pte code considerably.
>
> If most rmaps have only one entry, or most sps have only one parent,
> this patch may use more memory than before.

That is the case with NPT and EPT. Each page has exactly one spte (except a few vga pages), and each sp has exactly one parent_pte (except the root pages).
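
For readers who don't have the patch in front of them, a minimal sketch of
what such a sibling_pte scheme could look like (the names and layout below
are invented for illustration, not taken from the actual patch):

	/* Hypothetical: each shadow page carries a sibling array parallel
	 * to its spt[] page.  sibling_pte[i] points to the next spte that
	 * maps the same gfn, so an rmap slot is just the head of a singly
	 * linked list and changing a mapping never allocates memory. */
	struct kvm_mmu_page {
		/* ... existing fields ... */
		u64 *spt;		/* the sptes of this shadow page */
		u64 **sibling_pte;	/* sibling_pte[i]: next spte for the same gfn */
	};

	/* Walking an rmap becomes a plain list walk, O(1) per step. */
	static u64 *rmap_next_sketch(struct kvm_mmu_page *sp, u64 *spte)
	{
		return sp->sibling_pte[spte - sp->spt];
	}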

> 1) It removes a lot of code; the functions on the hot path become very
> simple and very fast.
> 2) rmap_next(): O(N) -> O(1); traversing an rmap: O(N*N) -> O(N).
The existing rmap_next() is not O(N), it's O(RMAP_EXT), which is 4. The data structure was chosen over a simple linked list to avoid extra cache misses.
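
For reference, the descriptor behind RMAP_EXT looks roughly like this in
arch/x86/kvm/mmu.c (quoted from memory, so take the exact field names with
a grain of salt):

	#define RMAP_EXT 4

	/* One descriptor covers up to four sptes, so a full rmap walk
	 * touches roughly N/RMAP_EXT cache lines instead of N. */
	struct kvm_rmap_desc {
		u64 *sptes[RMAP_EXT];
		struct kvm_rmap_desc *more;
	};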

> 3) It removes the ugly intermediate layers: struct kvm_rmap_desc and
> struct kvm_pte_chain.

kvm_rmap_desc and kvm_pte_chain are indeed ugly, but they do save a lot of memory and cache misses.
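
The parent_pte side is the same idea, roughly (again from memory):

	#define NR_PTE_CHAIN_ENTRIES 5

	/* One allocation and one hlist link track up to five parent sptes. */
	struct kvm_pte_chain {
		u64 *parent_ptes[NR_PTE_CHAIN_ENTRIES];
		struct hlist_node link;
	};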

> 4) We no longer need to allocate anything when changing mappings, so we
> can avoid allocation while holding the kvm mmu spinlock (this will be
> very helpful in the future).
> 5) Better readability.

I agree the new code is more readable. Unfortunately it uses more memory and is likely to be slower. You add a cache miss for every spte, while kvm_rmap_desc amortizes the cache miss among 4 sptes, and special cases 1 spte to have no cache misses (or extra memory requirements).
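
The single-spte special case is the low-bit tag on the rmap slot; a
simplified sketch of that logic (not the literal mmu.c code) shows why the
common case costs nothing extra:

	/* An rmap slot holds either one spte pointer directly (bit 0 clear)
	 * or a pointer to a kvm_rmap_desc with bit 0 set. */
	static void rmap_add_sketch(unsigned long *rmapp, u64 *spte,
				    struct kvm_rmap_desc *new_desc)
	{
		if (!*rmapp) {
			/* First spte: stored in place, no allocation,
			 * no extra cache miss. */
			*rmapp = (unsigned long)spte;
		} else if (!(*rmapp & 1)) {
			/* Second spte: promote to a descriptor that
			 * holds both. */
			new_desc->sptes[0] = (u64 *)*rmapp;
			new_desc->sptes[1] = spte;
			*rmapp = (unsigned long)new_desc | 1;
		}
		/* Later sptes fill the remaining RMAP_EXT slots or
		 * chain another descriptor via ->more. */
	}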

I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
