Re: [PATCH v4 06/10] KVM: MMU: fast path of handling guest page fault
From: Marcelo Tosatti
Date: Sat May 05 2012 - 10:13:48 EST
On Thu, May 03, 2012 at 07:26:38PM +0800, Xiao Guangrong wrote:
> On 05/03/2012 05:07 AM, Marcelo Tosatti wrote:
>
>
> >> 'entry' is not a problem since it comes from an atomic read-write as
> >> mentioned above; I need to change this code to:
> >>
> >> /*
> >> * Optimization: for pte sync, if spte was writable the hash
> >> * lookup is unnecessary (and expensive). Write protection
> >> * is responsibility of mmu_get_page / kvm_sync_page.
> >> * Same reasoning can be applied to dirty page accounting.
> >> */
> >> if (!can_unsync && is_writable_pte(entry)) /* Use 'entry' instead of '*sptep'. */
> >> 	goto set_pte;
> >> ......
> >>
> >>
> >> if (is_writable_pte(entry) && !is_writable_pte(spte)) /* Use 'spte' instead of '*sptep'. */
> >> kvm_flush_remote_tlbs(vcpu->kvm);
> >
> > What is of more importance than the ability to verify that this or
> > that particular case is OK at the moment is to write the code in such
> > a way that it is easy to verify that it is correct.
> >
> > Thus the suggestion above:
> >
> > "scattered all over (as mentioned before, i think a pattern of read spte
> > once, work on top of that, atomically write and then deal with results
> > _everywhere_ (where mmu lock is held) is more consistent."
> >
>
>
> Marcelo, thank you for taking the time to patiently review and reply to my mail.
>
> I am confused by '_everywhere_': does it mean all of the paths that
> read/update the spte? Why not only verify the paths which depend on is_writable_pte()?
I meant any path that updates from present->present.
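Roughly like this (just a sketch to illustrate the pattern;
compute_new_spte is a made-up helper, not an existing function):

	u64 old_spte, new_spte, last_spte;

	old_spte = ACCESS_ONCE(*sptep);		/* read the spte once */
	new_spte = compute_new_spte(old_spte);	/* work on the local copy */
	last_spte = xchg(sptep, new_spte);	/* atomic write */

	/*
	 * Deal with the result everywhere: decide from last_spte, the
	 * value that was really replaced, never from a fresh read of
	 * *sptep.
	 */
	if (is_writable_pte(last_spte) && !is_writable_pte(new_spte))
		kvm_flush_remote_tlbs(vcpu->kvm);
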
> For the reason that "it's easy to verify that it is correct"? But these
> paths are safe since they do not care about PT_WRITABLE_MASK at all.
> What these paths care about is that the Dirty bit and Accessed bit are
> not lost; that is why we always treat the spte as "volatile" if it can
> be updated outside of mmu-lock.
>
> For further development? We can add an extra comment to
> is_writable_pte() to warn developers to use it more carefully.
>
> It is also very hard to verify the spte everywhere. :(
>
> Actually, the current code only cares about PT_WRITABLE_MASK for the
> sake of the TLB flush; maybe we can fold it into mmu_spte_update.
> [
> There are three ways to modify an spte: present -> nonpresent,
> nonpresent -> present, and present -> present.
>
> But we only need to care about present -> present for the lockless case.
> ]
We also need to take memory ordering into account, which was not an issue
before. So it is not only the TLB flush.
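For example (illustrative only, not actual KVM code):

	/*
	 * A plain store gives no ordering guarantee; the compiler or
	 * CPU may move surrounding accesses across it.
	 */
	*sptep = new_spte;

	/*
	 * xchg is a locked operation on x86 and therefore a full
	 * memory barrier: a lockless reader that observes new_spte is
	 * also guaranteed to observe every write that preceded it here.
	 */
	old_spte = xchg(sptep, new_spte);
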
> /*
>  * Return true if we need to flush TLBs because the spte was changed
>  * from writable to read-only.
>  */
> bool mmu_update_spte(u64 *sptep, u64 spte)
> {
> 	u64 last_spte, old_spte = *sptep;
> 	bool flush = false;
>
> 	last_spte = xchg(sptep, spte);
>
> 	if ((is_writable_pte(last_spte) ||
> 	     spte_has_updated_lockless(old_spte, last_spte)) &&
> 	    !is_writable_pte(spte))
> 		flush = true;
>
> 	.... track Dirty/Accessed bits ...
>
> 	return flush;
> }
>
> Furthermore, the style of "if (spte has changed) goto beginning" is
> feasible in set_spte since this path is a fast path. (I can speed up
> mmu_need_write_protect.)
What do you mean exactly?
It would be better if all these complications introduced by lockless
updates could be avoided, say by using A/D bits as Avi suggested.
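Roughly (a sketch only, assuming hardware A/D support; the mask names
follow the current code, but this is not actual KVM code):

	/*
	 * With hardware A/D bits the CPU sets Accessed/Dirty in the
	 * spte atomically by itself, so the present->present software
	 * updates above go away and the mmu only ever reads:
	 */
	u64 spte = ACCESS_ONCE(*sptep);
	bool accessed = spte & shadow_accessed_mask;
	bool dirty = spte & shadow_dirty_mask;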