Yeah, and that needs time and you have to motivate me :)
Beer? ;)
Well the motivator for the initial investigation was rppt playing with
R[WO]X (this came from an #mm irc conversation), however in his case he
will be mapping pages between the two.
And that's the scenario I think we care about in practice (actually
accessing memory).
In real-use scenarios, yes fuzzers are a thing, but what comes to mind more
immediately is a process that maps a big chunk of virtual memory PROT_NONE
and uses that as part of an internal allocator.
If the process then allocates memory from this chunk (mprotect() ->
PROT_READ | PROT_WRITE), which then gets freed without being used
(mprotect() -> PROT_NONE) we hit the issue. For OVERCOMMIT_NEVER this could
become quite an issue, more so than the VMA fragmentation.
Using mprotect() when allocating/freeing memory in an allocator is already
horribly harmful for performance (well, and the #VMAs), so I don't think
that scenario is relevant in practice.
Chrome for instance maintains vast memory ranges as PROT_NONE. I've not dug
into what they're doing, but surely to make use of them they'd need to
mprotect() or mmap()/mremap() (which may well be what the intent is).
But fair point. However, I can't imagine m[re]map'ing like this would be
cheap either, as you're doing the same kind of expensive operations, so the
general _approach_ does seem to be used in some way in practice.
What some allocators (iirc even glibc) do is reserve a bigger area with
PROT_NONE and grow the accessible part slowly on demand, discarding freed
memory using MADV_DONTNEED. So you essentially end up with two VMAs -- one
completely accessible, one completely inaccessible.
They don't use mprotect() because:
(a) It's bad for performance
(b) It might increase the #VMAs
There is efence, but I remember it simply does mmap()+munmap() and runs into
VMA limits easily just by relying on a lot of mappings.
In addition, I think a user simply doing the artificial test above would
find the split remaining quite confusing, and somebody debugging code like
this would equally wonder why it happened, so there is benefit in clarity
too (they would of course observe the VMA fragmentation from the
perspective of /proc/$pid/[s]maps).
My answer would have been "memory gets committed the first time we allow
write access, and that wasn't the case for all memory in that range".
Now, take your example above and touch the memory.
char *ptr = mmap(NULL, page_size * 3, PROT_READ, MAP_ANON | MAP_PRIVATE, -1, 0);
mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
*(ptr + page_size) = 1;
mprotect(ptr + page_size, page_size, PROT_READ);
And we'll not merge the VMAs.
Which, at least to me, makes existing handling more consistent.
Indeed, but I don't think it's currently consistent at all.
The 'correct' solution would be to:-
1. account for the block when it becomes writable
2. unaccount for any pages not used when it becomes unwritable
However since we can't go from vma -> folios for anon pages without some
extreme effort this is not feasible.
Therefore the existing code hacks around this and simply keeps things
accounted. The patch reduces the hackery, getting us halfway to the correct
approach.
So before: "if you ever make this read/write, we account it forever"
After: "if you ever make this read/write and USE IT, we account it forever"
To me it is more consistent. Of course this is subjective...
You made the conditional more complicated to make it consistent, won't
argue with that :)
And users could rightfully wonder "why isn't it getting merged". And the
answer would be the same: "memory gets committed the first time we allow
write access, and that wasn't the case for all memory in that range".
Yes indeed; the bigger answer is that we don't have fine-grained accounting
of pages for anonymous VMAs.
So in practice programs will likely do the PROT_WRITE in order to actually
populate the area, so this won't trigger as I commented above. But it can
still help in some cases and is cheap to do, so:
IMHO we should much rather look into getting hugetlb ranges merged. My
recollection is that we'll never end up merging hugetlb VMAs once split.
I'm not sure how that's relevant to fragmented non-hugetlb VMAs though?
It's a VMA merging issue that can be hit in practice, so I raised it.
No strong opinion from my side, just my 2 cents reading the patch
description and wondering "why do we even invest time thinking about this
case" -- and eventually make handling less consistent IMHO (see above).
Hmm, it seems like you have quite a strong opinion :P but this is why I
cc'd you, as you are a great scrutiniser.
Yeah, the time investment was just by accident, the patch was originally a
throwaway thing to prove the point :]
I very much appreciate your time though! And I owe you at least one beer now.
I would ask, though: while you might question the value, do you think it so
harmful that it shouldn't go in? That way Andrew can know whether this
debate means "don't take it".
An Ack-with-meh would be fine. But also if you want to nak, it's also
fine. I will buy you the beer either way ;)