Re: [RFC PATCH v3 00/10] Add support for shared PTEs across processes

From: David Hildenbrand
Date: Mon Oct 07 2024 - 04:44:43 EST


On 02.10.24 19:35, Dave Hansen wrote:
> We were just chatting about this on David Rientjes's MM alignment call.

Unfortunately I was not able to attend this time, my body decided it's a good idea to stay in bed for a couple of days.

> I thought I'd try to give a little brain dump.

> Let's start by thinking about KVM and secondary MMUs. KVM has a primary
> mm: the QEMU (or whatever) process mm. The virtualization (EPT/NPT)
> tables get entries that effectively mirror the primary mm page tables
> and constitute a secondary MMU. If the primary page tables change,
> mmu_notifiers ensure that the changes get reflected into the
> virtualization tables and also that the virtualization paging structure
> caches are flushed.

> msharefs is doing something very similar. But, in the msharefs case,
> the secondary MMUs are actually normal CPU MMUs. The page tables are
> normal old page tables and the caches are the normal old TLB. That's
> what makes it so confusing: we have lots of infrastructure for dealing
> with that "stuff" (CPU page tables and TLB), but msharefs has
> short-circuited the infrastructure and it doesn't work any more.

It's quite different IMHO, to a degree that I believe they are different beasts:

Secondary MMUs:
* "Belongs" to the same MM context as the primary MMU (process page tables)
* Maintains separate tables/PTEs, in a completely separate page-table
hierarchy
* Notifiers make sure the secondary structure stays in sync (update
PTEs, flush TLB)

mshare:
* Possibly mapped by many different MMs. Likely nothing stops us from
having one MM map multiple different mshare fds.
* Updating the PTEs directly affects all other MM page table structures
(and possibly any secondary MMUs! scary)


I'd better not think about the complexity of secondary MMUs + mshare (e.g., KVM with mshare in guest memory): MMU notifiers for all MMs must be called ...



> Basically, I think it makes a lot of sense to check what KVM (or another
> mmu_notifier user) is doing and make sure that msharefs is following its
> lead. For instance, KVM _should_ have the exact same "page free"
> flushing issue where it gets the MMU notifier call but the page may
> still be in the secondary MMU. I _think_ KVM fixes it with an extra
> page refcount that it takes when it first walks the primary page tables.

> But the short of it is that the msharefs host mm represents a "secondary
> MMU". I don't think it is really that special of an MMU other than the
> fact that it has an mm_struct.

Not sure I agree ... IMHO these are two orthogonal things. Unless we want MMU notifiers to "update" MM primary MMUs (there is not really anything to update ...), but I'm not sure that is what we are looking for.

What you note about TLB flushing in the other mail makes sense, not sure how this interacts with any secondary MMUs ....

--
Cheers,

David / dhildenb