Re: [PATCH resend v2 2/5] mm/madvise: introduce MADV_POPULATE_(READ|WRITE) to prefault page tables
From: David Hildenbrand
Date: Fri May 21 2021 - 04:49:07 EST
[...]
Anyhow, please suggest a way to handle it via a single flag in the kernel --
which would be some kind of heuristic as we know from MAP_POPULATE. Having
an alternative at hand would make it easier to discuss this topic further. I
certainly *don't* want MAP_POPULATE semantics for MADV_POPULATE,
especially when it comes to shared mappings. It's not useful for QEMU
now, nor will it be in the future.
OK, this point is still not entirely clear to me. Elsewhere you are
saying that QEMU cannot use MAP_POPULATE because it ignores errors,
and also that it doesn't support sparse mappings because it applies to
the whole mmap. Those points are clear, but it is less clear to me why
the same semantic is not applicable for QEMU when exposed through the
madvise interface, which can handle both of those.
It's a combination of things:
a) MAP_POPULATE never was an option, simply because of the deferred
"prealloc=on" handling in QEMU, which happens way after we created the
memmap. Further, it doesn't report whether there was an error, which is
another reason why it's basically useless for QEMU use cases.
b) QEMU uses manual read-write prefaulting for "preallocation" (see the
sketch below), for example, to avoid SIGBUS on hugetlbfs or shmem at
runtime. There are cases where we absolutely want to avoid crashing the
VM later just because of a user error. MAP_POPULATE does *not* do what
we want for shared mappings, because it triggers a read fault.
c) QEMU uses the same mechanism for prefaulting in RT environments,
where we want to avoid any kind of page fault, together with mlock() etc.
d) MAP_POPULATE does not apply to the sparse memory mappings that I'll
be using more heavily in QEMU, also for the purpose of preallocation
with virtio-mem.
See the current QEMU code along with a comment in
https://github.com/qemu/qemu/blob/972e848b53970d12cb2ca64687ef8ff797fb6236/util/oslib-posix.c#L496
It's especially bad for PMEM ("wear on the storage backing"), which is
why we have to trust users not to trigger preallocation/prefaulting on
PMEM; otherwise (as already expressed via bug reports) we waste a lot
of time when backing VMs on PMEM or forwarding NVDIMMs, unnecessarily
reading/writing (slow) DAX.
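
For reference, a rough sketch of that manual prefaulting (modeled on
the linked oslib-posix.c; the function name and parameters here are
simplified placeholders, not the exact QEMU code):

#include <stddef.h>

/*
 * Touch every page by reading its first byte and writing the same value
 * back. The write fault forces actual backing memory to be allocated
 * (no shared zeropage, no leftover CoW), without corrupting existing
 * content. Inherently racy if another thread writes the page between
 * the read and the write-back.
 */
static void touch_pages(char *area, size_t npages, size_t page_size)
{
    size_t i;

    for (i = 0; i < npages; i++) {
        volatile char *p = area + i * page_size;

        /* Read and write back the same byte: triggers a write fault. */
        *p = *p;
    }
}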
Do I get it right that you really want to emulate the full-fledged write
fault to a) avoid another write fault when the content is actually
modified and b) catch potential errors during the write fault
(e.g. mkwrite failing on the fs data)?
Yes, for the use case of "preallocation" in QEMU. See the QEMU link.
But again, the thing that makes it more complicated is that I can come
up with some use cases that want to handle "shared mappings of ordinary
files" a little better. Or the usefaultfd-wp example I gave, where
prefaulting via MADV_POPULATE_READ can roughly half the population time.
We could make MADV_POPULATE act depending on the readability/writability of
a mapping. Use MADV_POPULATE_WRITE on writable mappings, use
MADV_POPULATE_READ on readable mappings. Certainly not perfect for use cases
where you have writable mappings that are mostly read-only (as in the
example with fake-NVDIMMs I gave ...), but if it makes people happy, fine
with me. I mostly care about MADV_POPULATE_WRITE.
Yes, this is where my thinking was going as well. Essentially, define
MADV_POPULATE as "populate the mapping with memory based on the
mapping's access permissions". This looks like a straightforward
semantic to me, and it doesn't really require any deep knowledge of
internals.
Now, I was trying to compare which of those would be more tricky to
understand and use, and TBH I am not really convinced either of the two
is much better. Separate READ/WRITE modes are explicit, which can be
good, but they require quite advanced knowledge of #PF behavior.
On the other hand, a single MADV_POPULATE would require some tricks,
like mmap, madvise and mprotect (to change to writable) when the data
is actually written to. I am not sure how big of a deal this would be
for QEMU, for example.
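
Roughly, the flow would be something like this sketch (MADV_POPULATE
and its value are hypothetical here, just for illustration):

#include <stddef.h>
#include <sys/mman.h>

/* Hypothetical single-flag variant; the value is made up for this sketch. */
#ifndef MADV_POPULATE
#define MADV_POPULATE 0x100
#endif

/*
 * With access-based population, getting read population on a writable
 * mapping means temporarily dropping the write permission.
 */
static int populate_readable(char *area, size_t len)
{
    /* Temporarily read-only, so the heuristic populates readable. */
    if (mprotect(area, len, PROT_READ))
        return -1;
    if (madvise(area, len, MADV_POPULATE))
        return -1;
    /* Restore write permission before the data is actually written. */
    return mprotect(area, len, PROT_READ | PROT_WRITE);
}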
IIRC, at the time we enable background snapshotting, the VM is running
and we cannot temporarily mprotect(PROT_READ) without making the guest
crash. But again, uffd-wp handling is somewhat a special case because
the implementation in the kernel is really suboptimal.
The reason I chose MADV_POPULATE_READ + MADV_POPULATE_WRITE is that
they really mimic what user space currently does to get the job done.
I guess the important part to document is that "be careful when using
MADV_POPULATE_READ because it might just populate the shared zeropage"
and "be careful with MADV_POPULATE_WRITE because it will do the same as
when writing to every page: dirty the pages such that they will have to
be written back when backed by actual files".
The current man page entry for MADV_POPULATE_READ reads:
"
Populate (prefault) page tables readable for the whole range without
actually reading. Depending on the underlying mapping, map the shared
zeropage, preallocate memory or read the underlying file. Do not
generate SIGBUS when populating fails, return an error instead.
If MADV_POPULATE_READ succeeds, all page tables have been populated
(prefaulted) readable once. If MADV_POPULATE_READ fails, some page
tables might have been populated.
MADV_POPULATE_READ cannot be applied to mappings without read
permissions, or to special mappings marked with the kernel-internal
VM_PFNMAP and VM_IO.
Note that with MADV_POPULATE_READ, the process can still be killed at
any moment when the system runs out of memory.
"
So, all that being said, I am not really sure. I am not really happy
about the READ/WRITE split, but if a simpler interface is going to be a
bad fit for existing use cases, then I believe the proper way to go is
to document the more complex interface thoroughly.
I think with the split we are better off long-term, without requiring
workarounds (mprotect()) to make some use cases work. But again, if
there is a good justification why a single MADV_POPULATE makes sense,
I'm happy to change it. Again, for me, the most important thing
long-term is MADV_POPULATE_WRITE, because that's really what QEMU
mainly uses right now for preallocation. But I can see use cases for
MADV_POPULATE_READ as well.
Thanks for your input!
--
Thanks,
David / dhildenb