Re: [RFC PATCH v2 1/2] mm/userfaultfd: fix memory corruption due to writeprotect
From: Andrea Arcangeli
Date: Tue Jan 05 2021 - 16:10:47 EST
On Tue, Jan 05, 2021 at 08:06:22PM +0000, Nadav Amit wrote:
> I just thought that there might be some insinuation, as you mentioned VMware
> by name. My response was half-joking and should have had a smiley to
> prevent you from wasting your time on the explanation.
No problem. Actually I appreciate that you pointed it out and gave me
the extra opportunity to clarify that I wasn't implying anything like
that; sorry again for any confusion I may have generated.
I mentioned VMware because I'd be shocked if, for the whole duration
of the wrprotect pass on the guest physical memory, it had to halt
all minor faults and all memory freeing, as would happen to
rust-vmm/qemu if we took the mmap_write_lock; that's all. Or am I
wrong about this?
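For concreteness, here's a userspace analogy of the contention I
mean (not the kernel code; the lock and thread names are just
illustrative): the fault path takes the lock for reading, so a
write-protect pass that holds it for writing stalls every fault
until it's done.

	/*
	 * Userspace analogy of the mmap_lock contention: the
	 * "wrprotect pass" holds the rwlock for writing for its
	 * whole duration, so every "minor fault" (a reader) is
	 * stuck until it finishes.
	 */
	#include <pthread.h>
	#include <stdio.h>
	#include <unistd.h>

	static pthread_rwlock_t mmap_lock = PTHREAD_RWLOCK_INITIALIZER;

	static void *minor_fault(void *arg)
	{
		(void)arg;
		/* fault-handling path: needs the lock for reading */
		pthread_rwlock_rdlock(&mmap_lock);
		puts("fault serviced");
		pthread_rwlock_unlock(&mmap_lock);
		return NULL;
	}

	int main(void)
	{
		pthread_t t;

		/* long wrprotect pass: lock held for writing throughout */
		pthread_rwlock_wrlock(&mmap_lock);
		pthread_create(&t, NULL, minor_fault, NULL);
		sleep(1);		/* the fault thread is stuck here */
		pthread_rwlock_unlock(&mmap_lock);
		pthread_join(t, NULL);	/* only now can the fault complete */
		return 0;
	}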
For uffd-wp, avoiding the mmap_write_lock isn't an immediate concern
(obviously so in the rust-vmm case, which won't even do postcopy live
migration), but the above concern applies to qemu in the long term,
and maybe the mid term.
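For reference, a minimal sketch of the uffd-wp sequence whose latency
is at stake (error handling trimmed; assumes a kernel with
UFFDIO_WRITEPROTECT support, i.e. v5.7 or later):

	#include <fcntl.h>
	#include <linux/userfaultfd.h>
	#include <sys/ioctl.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static int uffd_wp_range(void *addr, unsigned long len)
	{
		int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
		struct uffdio_api api = {
			.api = UFFD_API,
			/* ask for WP fault reporting as well */
			.features = UFFD_FEATURE_PAGEFAULT_FLAG_WP,
		};
		struct uffdio_register reg = {
			.range = { .start = (unsigned long)addr, .len = len },
			.mode = UFFDIO_REGISTER_MODE_WP,
		};
		struct uffdio_writeprotect wp = {
			.range = { .start = (unsigned long)addr, .len = len },
			.mode = UFFDIO_WRITEPROTECT_MODE_WP,
		};

		if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api) ||
		    ioctl(uffd, UFFDIO_REGISTER, &reg))
			return -1;
		/* this is the pass that must not stall every minor fault */
		return ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);
	}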
The postcopy live snapshotting was the #1 use case, so it's hard not
to mention it, but there are still other interesting userland use
cases of uffd-wp, with various users already testing it in their
apps, that may ultimately become more prevalent, who knows.
The point is that those who experiment with uffd-wp will run a
benchmark and post a blog, others will see the blog, test it in
their own apps and post their blogs too. It needs to deliver the
full acceleration immediately, otherwise the evaluation may show it
as a failure or not worth it.
In theory we could just say we'll optimize it later if a significant
userbase emerges, but in my view it's a bit of a chicken-and-egg
problem, and I'm afraid that such a theory may not work well in
practice.
Still, for the initial fix, avoiding the mmap_write_lock actually
seems more important for clear_refs than for uffd-wp. uffd-wp is
somewhat lucky and will simply share whatever solution keeps
clear_refs scalable, since the issue is identical.
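(For anyone unfamiliar with the clear_refs side: the pass at issue is
the soft-dirty clearing one, driven from userland by writing "4" to
/proc/pid/clear_refs; it's that whole-address-space walk that needs
to stay scalable. A trivial example of triggering it:)

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		int fd = open("/proc/self/clear_refs", O_WRONLY);

		if (fd < 0) {
			perror("open");
			return 1;
		}
		/* "4" clears the soft-dirty bits on all the task's PTEs */
		if (write(fd, "4", 1) != 1)
			perror("write");
		close(fd);
		return 0;
	}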
Thanks,
Andrea