The goal of this RFC is to try and align KVM, mm, and anyone else with skin in the
game, on an acceptable direction for supporting guest private memory, e.g. for
Intel's TDX. The TDX architecture effectively allows KVM guests to crash the
host if guest private memory is accessible to host userspace, and thus does not
play nice with KVM's existing approach of pulling the pfn and mapping level from
the host page tables.
This is by no means a complete patch; it's a rough sketch of the KVM changes that
would be needed. The kernel side of things is completely omitted from the patch;
the design concept is below.
There's also a fair bit of hand waving on implementation details that shouldn't
fundamentally change the overall ABI, e.g. how the backing store will ensure
there are no mappings when "converting" to guest private.
Background
==========
This is a loose continuation of Kirill's RFC[*] to support TDX guest private
memory by tracking guest memory at the 'struct page' level. This proposal is the
result of several offline discussions that were prompted by Andy Lutomirski's
concerns with tracking via 'struct page':
1. The kernel wouldn't easily be able to enforce a 1:1 page:guest association,
let alone a 1:1 pfn:gfn mapping.
2. Does not work for memory that isn't backed by 'struct page', e.g. if devices
gain support for exposing encrypted memory regions to guests.
3. Does not help march toward page migration or swap support (though it doesn't
hurt either).
[*] https://lkml.kernel.org/r/20210416154106.23721-1-kirill.shutemov@xxxxxxxxxxxxxxx
Concept
=======
Guest private memory must be backed by an "enlightened" file descriptor, where
"enlightened" means the implementing subsystem supports a one-way "conversion" to
guest private memory and provides bi-directional hooks to communicate directly
with KVM. Creating a private fd doesn't necessarily have to be a conversion, e.g. it
could also be a flag provided at file creation, a property of the file system itself,
etc...
Before a private fd can be mapped into a KVM guest, it must be paired 1:1 with a
KVM guest, i.e. multiple guests cannot share a fd. At pairing, KVM and the fd's
subsystem exchange a set of function pointers to allow KVM to call into the subsystem,
e.g. to translate gfn->pfn, and vice versa to allow the subsystem to call into KVM,
e.g. to invalidate/move/swap a gfn range.
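To make the pairing concrete, here's a rough userspace-compilable sketch of the
two sets of function pointers. All struct and function names below
(kvm_gpm_callbacks, guest_private_fd_ops, gpm_pair, etc.) are made up for
illustration, they are not proposed kernel APIs:

```c
/* Illustrative sketch only; every name here is hypothetical. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t gfn_t;
typedef uint64_t kvm_pfn_t;

struct kvm;		/* opaque KVM instance */
struct gpm_file;	/* opaque private-memory file */

/* KVM -> backing store: e.g. translate gfn->pfn for a private mapping. */
struct guest_private_fd_ops {
	int (*gfn_to_pfn)(struct gpm_file *f, gfn_t gfn, kvm_pfn_t *pfn);
};

/* Backing store -> KVM: e.g. invalidate a gfn range before freeing pages,
 * using mmu_notifier-like techniques on the KVM side. */
struct kvm_gpm_callbacks {
	void (*invalidate_range)(struct kvm *kvm, gfn_t start, gfn_t end);
};

struct gpm_file {
	struct kvm *paired_kvm;	/* non-NULL once paired */
	const struct kvm_gpm_callbacks *kvm_cbs;
};

/* Enforce the 1:1 fd:guest pairing: a second pairing attempt fails. */
static int gpm_pair(struct gpm_file *f, struct kvm *kvm,
		    const struct kvm_gpm_callbacks *cbs)
{
	if (f->paired_kvm)
		return -1;	/* -EBUSY in kernel terms */
	f->paired_kvm = kvm;
	f->kvm_cbs = cbs;
	return 0;
}
```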
Mapping a private fd in host userspace is disallowed, i.e. there is never a host
virtual address associated with the fd and thus no userspace page tables pointing
at the private memory.
Pinning _from KVM_ is not required. If the backing store supports page migration
and/or swap, it can query the KVM-provided function pointers to see if KVM supports
the operation. If the operation is not supported (this will be the case initially
in KVM), the backing store is responsible for ensuring correct functionality.
Unmapping guest memory, e.g. to prevent use-after-free, is handled via a callback
from the backing store to KVM. KVM will employ techniques similar to those it uses
for mmu_notifiers to ensure the guest cannot access freed memory.
A key point is that, unlike similar failed proposals of the past, e.g. /dev/mktme,
existing backing stores can be enlightened; a from-scratch implementation is not
required (though one would obviously be possible as well).
One idea for extending existing backing stores, e.g. HugeTLBFS and tmpfs, is
to add F_SEAL_GUEST, which would convert the entire file to guest private memory
and either fail if the current size is non-zero or truncate the size to zero.
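As a toy model of the intended semantics (F_SEAL_GUEST doesn't exist today; the
seal value and the "fail if the current size is non-zero" variant below are
assumptions taken straight from the paragraph above, not a real fcntl() ABI):

```c
/* Toy model of the proposed F_SEAL_GUEST behavior; the seal bit and the
 * failure mode are hypothetical.  The alternative mentioned above would
 * be to truncate the file to zero instead of failing. */
#include <assert.h>
#include <stddef.h>

#define F_SEAL_GUEST	0x0020	/* hypothetical seal bit */

struct toy_file {
	size_t size;
	unsigned int seals;
};

static int toy_add_seals(struct toy_file *f, unsigned int seals)
{
	if ((seals & F_SEAL_GUEST) && f->size != 0)
		return -1;	/* would be -EINVAL (or truncate instead) */
	f->seals |= seals;
	return 0;
}
```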
KVM
===
Guest private memory is managed as a new address space, i.e. as a different set of
memslots, similar to how KVM has a separate memory view for when a guest vCPU is
executing in virtual SMM. SMM is mutually exclusive with guest private memory.
The fd (the actual integer) is provided to KVM when a private memslot is added
via KVM_SET_USER_MEMORY_REGION. This is when the aforementioned pairing occurs.
By default, KVM memslot lookups will be "shared"; only specific touchpoints will
be modified to work with private memslots, e.g. guest page faults. All host
accesses to guest memory, e.g. for emulation, will thus look for shared memory
and naturally fail without attempting copy_to/from_user() if the guest attempts
to coerce KVM into accessing private memory. Note, avoiding copy_to/from_user() and
friends isn't strictly necessary, it's more of a happy side effect.
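A toy model of the per-address-space lookup (struct names and sizes are
illustrative, only the "host paths search shared only" behavior comes from the
text):

```c
/* Illustrative model: memslots live in per-address-space sets, and host
 * access paths (emulation, etc.) only ever search the shared set, so a
 * private gfn naturally fails the lookup. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t gfn_t;

enum { AS_SHARED = 0, AS_PRIVATE = 1, NR_AS = 2 };

struct memslot {
	gfn_t base_gfn;
	uint64_t npages;
};

struct toy_kvm {
	struct memslot slots[NR_AS][8];
	int nr_slots[NR_AS];
};

static const struct memslot *find_memslot(const struct toy_kvm *kvm,
					  int as, gfn_t gfn)
{
	for (int i = 0; i < kvm->nr_slots[as]; i++) {
		const struct memslot *s = &kvm->slots[as][i];
		if (gfn >= s->base_gfn && gfn < s->base_gfn + s->npages)
			return s;
	}
	return NULL;
}

/* Host accesses look up only the shared address space. */
static bool host_can_access(const struct toy_kvm *kvm, gfn_t gfn)
{
	return find_memslot(kvm, AS_SHARED, gfn) != NULL;
}
```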
A new KVM exit reason, e.g. KVM_EXIT_MEMORY_ERROR, and data struct in vcpu->run
is added to propagate illegal accesses (see above) and implicit conversions
to userspace (see below). Note, the new exit reason + struct can also be used to
support several other feature requests in KVM[1][2].
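One possible shape for the new exit, purely for illustration; the exit reason
number, flag value, and field names below are all made up, nothing here is a
proposed ABI:

```c
/* Hypothetical layout for the data struct in vcpu->run; every constant
 * and field name here is an assumption, not upstream KVM ABI. */
#include <assert.h>
#include <stdint.h>

#define KVM_EXIT_MEMORY_ERROR		34	/* hypothetical number */

#define KVM_MEMORY_EXIT_FLAG_PRIVATE	(1ULL << 0) /* access was private */

struct kvm_memory_exit {
	uint64_t flags;	/* e.g. KVM_MEMORY_EXIT_FLAG_PRIVATE */
	uint64_t gpa;	/* faulting guest physical address */
	uint64_t size;	/* size of the faulting access/range */
};
```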
The guest may explicitly or implicitly request KVM to map a shared/private variant
of a GFN. An explicit map request is done via hypercall (out of scope for this
proposal as both TDX and SNP ABIs define such a hypercall). An implicit map request
is triggered simply by the guest accessing the shared/private variant, which KVM
sees as a guest page fault (EPT violation or #NPF). Ideally only explicit requests
would be supported, but neither TDX nor SNP require this in their guest<->host ABIs.
For implicit or explicit mappings, if a memslot is found that fully covers the
requested range (which is a single gfn for implicit mappings), KVM's normal guest
page fault handling works with minimal modification.
If a memslot is not found, for explicit mappings, KVM will exit to userspace with
the aforementioned dedicated exit reason. For implicit _private_ mappings, KVM will
also immediately exit with the same dedicated reason. For implicit shared mappings,
an additional check is required to differentiate between emulated MMIO and an
implicit private->shared conversion[*]. If there is an existing private memslot
for the gfn, KVM will exit to userspace, otherwise KVM will treat the access as an
emulated MMIO access and handle the page fault accordingly.
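The routing described in the last three paragraphs boils down to a small
decision function; this is a toy model where only the decision order comes from
the text, the enum and parameter names are illustrative:

```c
/* Toy model of the fault routing for explicit/implicit map requests. */
#include <assert.h>
#include <stdbool.h>

enum fault_action {
	FIXUP_FAULT,		/* memslot covers the range: normal handling */
	EXIT_TO_USERSPACE,	/* propagate via the new dedicated exit */
	EMULATE_MMIO,		/* implicit shared access, no private slot */
};

static enum fault_action route_fault(bool explicit_req, bool private_access,
				     bool slot_found, bool private_slot_exists)
{
	if (slot_found)
		return FIXUP_FAULT;
	if (explicit_req || private_access)
		return EXIT_TO_USERSPACE;
	/* Implicit shared access: differentiate emulated MMIO from an
	 * implicit private->shared conversion. */
	return private_slot_exists ? EXIT_TO_USERSPACE : EMULATE_MMIO;
}
```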
Punching Holes
==============
The expected userspace memory model is that mapping requests will be handled as
conversions, e.g. on a shared mapping request, first unmap the private gfn range,
then map the shared gfn range. A new KVM ioctl() will likely be needed to allow
userspace to punch a hole in a memslot, as expressing such an operation isn't
possible with KVM_SET_USER_MEMORY_REGION. While userspace could delete the
memslot, then recreate three new memslots, doing so would be destructive to guest
data as unmapping guest private memory (from the EPT/NPT tables) is destructive
to the data for both TDX and SEV-SNP guests.
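The arithmetic for the hole punch itself is simple; the point is that the two
remnant ranges stay mapped the whole time. A toy sketch (names are
illustrative, this says nothing about the actual ioctl() shape):

```c
/* Splitting an existing slot [base, base+npages) around a hole yields
 * the below/above remnants without touching the still-mapped ranges. */
#include <assert.h>
#include <stdint.h>

struct gfn_range {
	uint64_t base;
	uint64_t npages;
};

/* Returns the number of remnant slots (0, 1, or 2) written to out[].
 * Assumes the hole lies entirely within the slot. */
static int punch_hole(struct gfn_range slot, struct gfn_range hole,
		      struct gfn_range out[2])
{
	uint64_t slot_end = slot.base + slot.npages;
	uint64_t hole_end = hole.base + hole.npages;
	int n = 0;

	if (hole.base > slot.base) {
		out[n].base = slot.base;
		out[n].npages = hole.base - slot.base;
		n++;
	}
	if (hole_end < slot_end) {
		out[n].base = hole_end;
		out[n].npages = slot_end - hole_end;
		n++;
	}
	return n;
}
```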
Pros (vs. struct page)
======================
Easy to enforce 1:1 fd:guest pairing, as well as 1:1 gfn:pfn mapping.
Userspace page tables are not populated, e.g. reduced memory footprint, lower
probability of making private memory accessible to userspace.
Provides line of sight to supporting page migration and swap.
Provides line of sight to mapping MMIO pages into guest private memory.
Cons (vs. struct page)
======================
Significantly more churn in KVM, e.g. to plumb 'private' through where needed,
support memslot hole punching, etc...
KVM's MMU gets another method of retrieving host pfn and page size.
Requires enabling in every backing store that someone wants to support.
Because the NUMA APIs work on virtual addresses, new syscalls fmove_pages(),
fbind(), etc... would be required to provide equivalents to existing NUMA
functionality (though those syscalls would likely be useful irrespective of guest
private memory).
Washes (vs. struct page)
========================
A misbehaving guest that triggers a large number of shared memory mappings will
consume a large number of memslots. But, this is likely a wash as a similar effect
would happen with VMAs in the struct page approach.