Avi Kivity wrote:
> On 06/15/2010 05:46 AM, Xiao Guangrong wrote:
>> Hi Avi, Marcelo,
>>
>> This patchset supports pte prefetch when a guest #PF is intercepted;
>> the aim is to reduce the number of guest #PFs that the VMM has to
>> intercept.
>>
>> If we meet any failure in the prefetch path, we exit it and do not
>> try the other ptes, to avoid turning it into a heavy path.
>>
>> In my performance tests, with EPT enabled, unixbench shows a ~1.2%
>> performance improvement; with EPT disabled, unixbench shows a ~3.6%
>> improvement.
>
> I'm a little worried about this. In some workloads, prefetch can often
> fail due to gpte.a=0, so we spend effort doing nothing.

Yes, prefetch does not always succeed, but the prefetch path is fast and
does not cost much time: in the worst case we only need to read 128
bytes of guest ptes. When it does succeed, much overhead can be avoided.

> We should map those pages with pte.a=pte.d=0 so we don't confuse host
> memory management. On EPT (which lacks a/d bits) we can't enable it
> (but we can on NPT).

You are right, this is the speculative path.

For the pte.A bit: we call mmu_set_spte() with speculative = true, so we
already set pte.a = 0 in this path.

For the pte.D bit: we should also set pte.d = 0 in the speculative path;
the same problem exists in the invlpg/pte-write path. I will fix it in
the next version.

Avi Kivity wrote:
> I think you are right, this path only prefetches valid/pte.A=1
> mappings. Once the guest has faulted in all memory, we shouldn't see
> much improvement, yes?