David,
Basically I see it as a potential way of moving memory efficiently, especially
with THP.
It's an interesting use case indeed. The questions would be (a) whether this
is a use case we want to support, and (b) why we need to make that decision
now and add that feature.
I would like to support it if nothing stops it from happening, but that's
exactly what we're discussing here..
For (b), I wanted to avoid adding a UFFD_FEATURE_MOVE_CROSS_MM feature flag
just for this when the two modes are already so close, not to mention that the
current code already contains cross-mm support.
To support that live upgrade use case, I'd probably need to rework the TLB
flushing too to do batching (actually, a TLB flush is not even needed in the
upgrade scenario..). I'm not sure whether Lokesh's use case would move large
chunks; it would be perfect if Suren did it all together. But that part is
much easier if it is transparent to user apps. Cross-mm is not transparent and
needs another feature knob, which I want to avoid if possible.
One question is whether this kind of "moving memory between processes" really
should be done, because intuitively SHMEM smells like the right thing to use
here (two processes wanting to access the same memory).
That's the whole point, IMHO: the case where shmem cannot be used. As you
said, it's for when someone cannot use file-backed memory for some reason,
like KSM.
The downsides of shmem are lack of the shared zeropage and KSM. The shared
zeropage is usually less of a concern for VMs, but KSM is. However, KSM will
also disallow moving pages here. But all non-deduplicated ones could be
moved.
[I wondered whether moving KSM pages (rmap items) could be done; probably in
some limited form with some more added complexity]
Yeah, we can leave that complexity for later when it's really needed. Cross-mm
support here, OTOH, doesn't make things that complicated, IMHO.
Btw, we don't even necessarily need to be able to migrate KSM pages for a
VM live upgrade use case: we can unmerge the pages, upgrade, and wait for
KSM to scan & merge again on the new binary / mmap. Userspace can have
that control easily, afaiu, via existing madvise().
single-mm should at least not cause harm, but the semantics are
questionable. cross-mm could, especially with malicious user space that
wants to find ways of harming the kernel.
For the kernel, I think we're discussing exactly whether it's safe to do so
from the kernel's PoV; e.g., whether to exclude pinned pages is part of that.
For the user app, the dest process has either provided the uffd descriptor
willingly, or is a child of the UFFDIO_MOVE issuer when used with EVENT_FORK.
I assume that's already some form of safety check, because the target cannot
be just any process, only ones that proactively cooperate closely with the
issuer process.
I'll note that mremap with pinned pages works.
But that's not "by design", am I right? IOW, do we have any real pin user
that relies on mremap() allowing pages to be moved?
I don't see any guarantee, at least in the man page, that mremap() will keep
the PFN unchanged after the move.. even though that seems to be what happens
now.
Nor do I think that, when designing MMF_HAS_PINNED, we kept in mind that it
shouldn't be affected by someone mremap()ing pinned pages, or that we wanted
to keep that working..

All of it just seems to be an accident..
Taking one step back: we're free to define UFFDIO_MOVE however we want, and
we don't necessarily need to always follow mremap(). E.g., mremap() also
supports KSM pages, but IIUC we already decided not to support those for now
in UFFDIO_MOVE. UFFDIO_MOVE seems perfectly fine to explicitly fail on pinned
pages from day one, if that satisfies our goals too.