On 4/23/19 4:16 AM, Laurent Dufour wrote:
> My only concern is the error path.
> Calling arch_unmap() before handling any error case means that it will
> have to be undone, and there is no way to do so.

Is there a practical scenario where munmap() of the VDSO can split a
VMA? If the VDSO is guaranteed to be a single page, it would have to be
a scenario where munmap() was called on a range that included the VDSO
*and* another VMA that we failed to split.
But, the scenario would have to be that someone tried to munmap() the
VDSO and something adjacent, the munmap() failed, and they kept on using
the VDSO and expected the special signal and perf behavior to be maintained.
BTW, what keeps the VDSO from merging with an adjacent VMA? Is it just
the vm_ops->close that comes from special_mapping_vmops?
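
For reference, a sketch of the pieces I have in mind, from roughly a
v5.1 mm/mmap.c (quoted from memory, so treat it as illustration, not
the exact code):

/* In is_mergeable_vma(): a VMA with a ->close op is never merged,
 * presumably because merging could lose the close notification for
 * part of the range. */
	if (vma->vm_ops && vma->vm_ops->close)
		return 0;

/* And the vm_ops that _install_special_mapping() hands out include
 * exactly such a ->close (the legacy install_special_mapping() path
 * does not, which is part of why I'm asking): */
static const struct vm_operations_struct special_mapping_vmops = {
	.close = special_mapping_close,
	.fault = special_mapping_fault,
	.mremap = special_mapping_mremap,
};
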
> I don't know what the rationale is for moving arch_unmap() to the
> beginning of __do_munmap(), but the error paths must be managed.

It's in the changelog:
https://patchwork.kernel.org/patch/10909727/

But, the tl;dr version is: x86 is recursively calling __do_munmap() (via
arch_unmap()) in a spot where the internal rbtree data is inconsistent,
which causes all kinds of fun. If we call arch_unmap() before
__do_munmap() does any data structure manipulation, the recursive call
doesn't get confused any more.
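
In other words, the shape of the fix is something like this (my
paraphrase of the patch above as a sketch, not the literal diff --
see the patchwork link for the real thing):

int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
		struct list_head *uf, bool downgrade)
{
	unsigned long end;

	if ((offset_in_page(start)) || start > TASK_SIZE ||
	    len > TASK_SIZE - start)
		return -EINVAL;

	len = PAGE_ALIGN(len);
	if (len == 0)
		return -EINVAL;
	end = start + len;

	/*
	 * arch_unmap() might do unmaps itself, so call it while the
	 * rbtree and VMA list are still consistent, before we split
	 * or detach anything.
	 */
	arch_unmap(mm, start, end);

	/* ...then find_vma(), __split_vma(), detach, unmap_region(),
	 * etc., exactly as before... */
	return 0;
}
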
> There are 2 assumptions here:
> 1. 'start' and 'end' are page aligned (this is guaranteed by
>    __do_munmap()).
> 2. the VDSO is 1 page (this is guaranteed by the union vdso_data_store
>    on powerpc).

Are you sure about #2? The 'vdso64_pages' variable seems rather
unnecessary if the VDSO is only 1 page. ;)
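
For anyone following along, the arch_unmap() implementation the
assumptions refer to is (roughly -- quoting powerpc's
arch/powerpc/include/asm/mmu_context.h from memory, comment mine):

static inline void arch_unmap(struct mm_struct *mm,
			      unsigned long start, unsigned long end)
{
	/*
	 * Forget the VDSO base if it falls inside the unmapped range.
	 * Both assumptions are baked in here: 'start' and 'end' are
	 * page-aligned, and only the single page at vdso_base is
	 * checked -- a multi-page VDSO could have its tail unmapped
	 * without this test noticing.
	 */
	if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
		mm->context.vdso_base = 0;
}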