Re: [RFC PATCH v2 00/51] 1G page support for guest_memfd
From: Edgecombe, Rick P
Date: Wed Jul 09 2025 - 11:19:14 EST
On Wed, 2025-07-09 at 07:28 -0700, Vishal Annapurve wrote:
> I think we can simplify the role of guest_memfd in line with discussion [1]:
> 1) guest_memfd is a memory provider for userspace, KVM, IOMMU.
> - It allows fallocate to populate/deallocate memory
> 2) guest_memfd supports the notion of private/shared faults.
> 3) guest_memfd supports memory access control:
> - It allows shared faults from userspace, KVM, IOMMU
> - It allows private faults from KVM, IOMMU
> 4) guest_memfd supports changing access control on its ranges between
> shared/private.
> - It notifies the users to invalidate their mappings for the
> ranges getting converted/truncated.
KVM needs to know whether a GFN is private or shared. I think guest_memfd is
now also intended to be the repository for this information, right? Besides
sending invalidations, it needs to be queryable.
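
To make the shape of that concrete, here is roughly what I picture. Every
name below is made up purely for illustration (nothing from the actual
series): gmem holds the per-range shareability state, KVM/IOMMU query it
when resolving faults, and conversions/truncations go out as invalidation
callbacks.

/*
 * Hypothetical sketch only, none of these names exist today.  gmem is
 * the repository for per-offset shareability; users (KVM, IOMMU) query
 * it on faults and receive invalidation callbacks on conversion or
 * truncation.
 */
enum gmem_access {
	GMEM_ACCESS_PRIVATE,
	GMEM_ACCESS_SHARED,
};

struct gmem_invalidate_range {
	pgoff_t start;
	pgoff_t end;		/* exclusive, in file page offsets */
};

struct gmem_notifier_ops {
	/* Called before a range is converted or truncated. */
	void (*invalidate)(void *priv, const struct gmem_invalidate_range *r);
};

/*
 * What KVM would call when resolving a fault, instead of tracking the
 * private/shared state itself.
 */
enum gmem_access gmem_get_access(struct file *gmem_file, pgoff_t index);

The point being that the state lives in one place and everything else is
either a query or an invalidation.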
>
> Responsibilities that ideally should not be taken up by guest_memfd:
> 1) guest_memfd cannot initiate pre-faulting on behalf of its users.
> 2) guest_memfd should not be directly communicating with the
> underlying architecture layers.
> - All communication should go via KVM/IOMMU.
Maybe stronger: there should only be generic gmem behaviors, not any special
if (vm_type == tdx) style logic.
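
In other words (again, all hypothetical names): whatever per-VM-type
differences exist should be baked into per-instance capabilities at creation
time by the KVM ioctl, so gmem itself only ever tests its own flags/ops:

/*
 * Hypothetical: capabilities are fixed when the KVM ioctl creates the
 * file, so gmem never looks at the VM type directly.
 */
#define GMEM_F_ALLOW_SHARED_FAULTS	BIT(0)

struct gmem_instance {
	unsigned long flags;
	const struct gmem_notifier_ops *ops;	/* from the sketch above */
	void *owner_priv;			/* KVM's / IOMMU's context */
};

static bool gmem_allows_shared_fault(struct gmem_instance *gi)
{
	return gi->flags & GMEM_F_ALLOW_SHARED_FAULTS;
}

/* ...rather than anything like:
 *
 *	if (vm_type == KVM_X86_TDX_VM)
 *		...
 *
 * inside gmem itself.
 */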
> 3) KVM should ideally associate the lifetime of backing
> pagetables/protection tables/RMP tables with the lifetime of the
> binding of memslots with guest_memfd.
> - Today KVM SNP logic ties RMP table entry lifetimes with how
> long the folios are mapped in guest_memfd, which I think should be
> revisited.
I don't understand the problem. KVM needs to respond to user-accessible
invalidations, but how long it keeps other resources around is up to KVM and
could be useful for various optimizations, like deferring work to a workqueue
or something.
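
Purely as an illustration of what I mean (not a proposal, and the names are
invented): the invalidation callback could just record the range and punt the
expensive arch teardown, e.g. the RMP/protection-table cleanup, to a
workqueue:

/*
 * Hypothetical sketch of deferring teardown off the invalidation path.
 * The notifier only records the range and queues a work item; the
 * arch-specific cleanup runs later from the workqueue.
 */
struct gmem_deferred_unmap {
	struct work_struct work;
	pgoff_t start, end;
	struct kvm *kvm;
};

static void gmem_deferred_unmap_fn(struct work_struct *work)
{
	struct gmem_deferred_unmap *d =
		container_of(work, struct gmem_deferred_unmap, work);

	/* e.g. tear down RMP / protection-table state for d->start..d->end */
	kfree(d);
}

static void gmem_invalidate_cb(void *priv, const struct gmem_invalidate_range *r)
{
	struct gmem_deferred_unmap *d = kzalloc(sizeof(*d), GFP_KERNEL);

	if (!d)
		return;		/* sketch only; real code needs a fallback */
	d->kvm = priv;
	d->start = r->start;
	d->end = r->end;
	INIT_WORK(&d->work, gmem_deferred_unmap_fn);
	schedule_work(&d->work);
}

Obviously the part userspace can observe has to happen before the
invalidation returns; the sketch is only about when the remaining arch state
gets torn down.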
I think it would help to just target the goals of Ackerley's series. We should
get that code into shape, and this kind of stuff will fall out of it.
>
> Some very early thoughts on how guest_memfd could be laid out for the long term:
> 1) guest_memfd code ideally should be built-in to the kernel.
> 2) guest_memfd instances should still be created using KVM IOCTLs that
> carry specific capabilities/restrictions for its users based on the
> backing VM/arch.
> 3) Any outgoing communication from guest_memfd to its users like
> userspace/KVM/IOMMU should be via notifiers to invalidate similar to
> how MMU notifiers work.
> 4) KVM and IOMMU can implement intermediate layers to handle
> interaction with guest_memfd.
> - e.g. there could be a layer within kvm that handles:
> - creating guest_memfd files and associating a
> kvm_gmem_context with those files.
> - memslot binding
> - kvm_gmem_context will be used to bind kvm
> memslots with the context ranges.
> - invalidate notifier handling
> - kvm_gmem_context will be used to intercept
> guest_memfd callbacks and
> translate them to the right GPA ranges.
> - linking
> - kvm_gmem_context can be linked to different
> KVM instances.
We can probably look at the code to decide these.
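
For the sake of discussion, the intermediate KVM layer described above could
look something like this; every name is invented and nothing here is from the
series:

/*
 * Hypothetical sketch of the KVM-side intermediate layer: it owns the
 * gmem file, binds memslots to file-offset ranges, and translates gmem
 * invalidation callbacks into GPA-range invalidations.
 */
struct kvm_gmem_context {
	struct file *gmem_file;
	struct kvm *kvm;		/* could be re-linked to a new VM */
	struct list_head bindings;	/* memslot <-> file-offset ranges */
};

struct kvm_gmem_binding {
	struct list_head node;
	struct kvm_memory_slot *slot;
	pgoff_t file_start;		/* file offset backing slot->base_gfn */
	unsigned long nr_pages;
};

/* gmem invalidation callback: translate file offsets to GFNs and zap. */
static void kvm_gmem_ctx_invalidate(void *priv,
				    const struct gmem_invalidate_range *r)
{
	struct kvm_gmem_context *ctx = priv;
	struct kvm_gmem_binding *b;

	list_for_each_entry(b, &ctx->bindings, node) {
		/*
		 * Intersect [r->start, r->end) with the binding, convert
		 * to a GFN range and invalidate it in the KVM MMU.
		 */
	}
}

That would keep the memslot <-> file-offset translation and the re-linking to
a new VM instance entirely on the KVM side, with gmem only speaking in file
offsets.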
>
> This line of thinking can allow cleaner separation between
> guest_memfd/KVM/IOMMU [2].
>
> [1] https://lore.kernel.org/lkml/CAGtprH-+gPN8J_RaEit=M_ErHWTmFHeCipC6viT6PHhG3ELg6A@xxxxxxxxxxxxxx/#t
> [2] https://lore.kernel.org/lkml/31beeed3-b1be-439b-8a5b-db8c06dadc30@xxxxxxx/