Re: [PATCH 0/18][RFC] Nested Paging support for Nested SVM (aka NPT-Virtualization)

From: Avi Kivity
Date: Fri Mar 12 2010 - 02:36:57 EST


On 03/11/2010 10:58 PM, Marcelo Tosatti wrote:

>>> Can't you translate l2_gpa -> l1_gpa walking the current l1 nested
>>> pagetable, and pass that to the kvm tdp fault path (with the correct
>>> context setup)?
>> If I understand your suggestion correctly, I think that's exactly what's
>> done in the patches. Some words about the design:

>> For nested-nested we need to shadow the l1-nested-ptable on the host.
>> This is done using the vcpu->arch.mmu context, which holds the l1 paging
>> modes while the l2 is running. On an npt-fault from the l2 we just go
>> through the shadow-ptable code. This is the common case because it
>> happens all the time while the l2 is running.
> OK, makes sense now; I was missing the fact that the l1-nested-ptable
> needs to be shadowed and that l1 translations to it must be write-protected.

Shadow converts (gva -> gpa -> hpa) to (gva -> hpa) or (ngpa -> gpa -> hpa) to (ngpa -> hpa) equally well. In the second case npt still does (ngva -> ngpa).
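
For illustration, here is a rough, self-contained sketch of the composition described above: on an npt-fault from l2, walk the l1 nested page table for l2_gpa -> l1_gpa, translate l1_gpa -> hpa on the host, and install the composed l2_gpa -> hpa mapping in the shadow nested page table. Every type and helper below is a made-up stand-in for illustration, not real KVM code.

/*
 * Toy model of the nested-NPT fault path; all names here are hypothetical.
 */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

typedef uint64_t gpa_t;    /* guest-physical address (l1 or l2) */
typedef uint64_t hpa_t;    /* host-physical address */

/* Stand-in for walking the l1 hypervisor's nested page table, i.e. the
 * l1 paging mode held in vcpu->arch.mmu while l2 runs. */
static bool l1_npt_walk(gpa_t l2_gpa, gpa_t *l1_gpa)
{
    *l1_gpa = l2_gpa + 0x100000;    /* pretend l1 maps l2 at a fixed offset */
    return true;
}

/* Stand-in for the host side (memslot lookup etc.): l1_gpa -> hpa. */
static bool host_translate(gpa_t l1_gpa, hpa_t *hpa)
{
    *hpa = l1_gpa + 0x40000000;     /* equally made up */
    return true;
}

/* Compose the two translations into one shadow entry.  The l1 nested page
 * table pages walked here must be write-protected (or unsynced) so the
 * shadow entry can be dropped when l1 changes them. */
static int shadow_npt_fault(gpa_t l2_gpa)
{
    gpa_t l1_gpa;
    hpa_t hpa;

    if (!l1_npt_walk(l2_gpa, &l1_gpa))
        return -1;                  /* inject a nested page fault into l1 */
    if (!host_translate(l1_gpa, &hpa))
        return -1;
    printf("shadow NPT: l2_gpa %#llx -> l1_gpa %#llx -> hpa %#llx\n",
           (unsigned long long)l2_gpa, (unsigned long long)l1_gpa,
           (unsigned long long)hpa);
    return 0;                       /* entry installed, resume l2 */
}

int main(void)
{
    return shadow_npt_fault(0x2000);
}

The first stage here plays the role of the guest-controlled table (gva -> gpa, or ngpa -> gpa for nested NPT); the shadow entry simply caches the composition so the hardware does a single lookup.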

> You should disable out-of-sync shadow pages so that l1 guest writes to
> l1-nested-ptables always trap.

Why? The guest is under obligation to flush the tlb if it writes to a page table, and we will resync on that tlb flush.

Unsync makes just as much sense for nnpt. Think of khugepaged in the guest eating a page table and spitting out a PDE.
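
A minimal sketch of that unsync argument, with invented structures (real KVM keeps this state in its shadow page bookkeeping): while a shadowed page table is unsynced, guest writes to it do not trap, and the shadow is brought back in sync when the guest performs the architecturally required TLB flush (CR3 write / INVLPG intercept).

/* Toy model of unsync + resync-on-TLB-flush; purely illustrative. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NPTES 4

struct toy_shadow_page {
    uint64_t guest_pte[NPTES];  /* the guest's (here: l1 nested) page table */
    uint64_t spte[NPTES];       /* our shadow of it */
    bool unsync;                /* guest may write without trapping */
};

/* An unsynced page takes guest writes directly; the shadow goes stale. */
static void guest_write_pte(struct toy_shadow_page *sp, int i, uint64_t val)
{
    sp->guest_pte[i] = val;
}

/* On the guest's TLB flush, rebuild stale shadow entries from the guest
 * entries (real code re-derives the host mapping as well). */
static void resync_on_tlb_flush(struct toy_shadow_page *sp)
{
    if (!sp->unsync)
        return;
    for (int i = 0; i < NPTES; i++)
        sp->spte[i] = sp->guest_pte[i];
    sp->unsync = false;
}

int main(void)
{
    struct toy_shadow_page sp = { .unsync = true };

    guest_write_pte(&sp, 1, 0xabc);  /* e.g. khugepaged installing a PDE */
    resync_on_tlb_flush(&sp);        /* guest must flush before relying on it */
    printf("spte[1] = %#llx\n", (unsigned long long)sp.spte[1]);
    return 0;
}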

> And in the trap case, you'd have to invalidate l2 shadow pagetable
> entries that used the (now obsolete) l1-nested-ptable entry. Does that
> happen automatically?

What do you mean by 'l2 shadow ptable entries'? There are the guest's page tables (ordinary direct-mapped, unless the guest's guest is also running an npt-enabled hypervisor), and the host page tables. When the guest writes to either kind of page table, we invalidate the shadows.
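
A rough sketch of that invalidation path, again with hypothetical names rather than real KVM symbols: shadowed page tables are write-protected, a guest write to one of them faults, and the handler drops (or updates) the affected shadow entry so it is rebuilt from the new guest entry on the next access.

/* Toy model of write-protect faults on shadowed page tables. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NPTES 4

struct toy_shadow_page {
    uint64_t spte[NPTES];
    bool write_protected;   /* guest writes to the backing page trap */
};

/* Called from the write-protection fault: invalidate the touched entry so
 * the next use re-walks the guest table and rebuilds the shadow. */
static void handle_protected_write(struct toy_shadow_page *sp, int idx)
{
    sp->spte[idx] = 0;
}

int main(void)
{
    struct toy_shadow_page sp = {
        .spte = { 0x1, 0x2, 0x3, 0x4 },
        .write_protected = true,
    };

    if (sp.write_protected)
        handle_protected_write(&sp, 2);   /* guest rewrote entry 2 */
    printf("spte[2] = %#llx\n", (unsigned long long)sp.spte[2]);
    return 0;
}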

--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
