Re: [PATCHv2 3/5] x86/mm: fix native mmap() in compat bins and vice-versa
From: Andy Lutomirski
Date: Tue Jan 17 2017 - 15:29:58 EST
On Mon, Jan 16, 2017 at 4:33 AM, Dmitry Safonov <dsafonov@xxxxxxxxxxxxx> wrote:
> Fix 32-bit compat_sys_mmap() mapping a VMA above 4 GB in 64-bit binaries,
> and 64-bit sys_mmap() mapping a VMA only below 4 GB in 32-bit binaries.
> Change arch_get_unmapped_area{,_topdown}() to recompute mmap_base for
> those cases and use the corresponding high/low limits for vm_unmapped_area().
> Recomputing mmap_base may make compat sys_mmap() in 64-bit binaries a
> little slower than native mmap(), which reuses the mmap_base already
> known from exec time - but since that case previously returned a buggy
> address, it was apparently unused, so no already-used ABI sees a
> performance regression.
This looks plausibly correct but rather weird -- why does this code
need to distinguish between all four cases (pure 32-bit, pure 64-bit,
64-bit mmap layout doing 32-bit call, 32-bit layout doing 64-bit
call)?
> This could be optimized in the future by introducing
> mmap_compat_{,legacy}_base fields in mm_struct.
Hmm. Would it make sense to do it this way from the beginning?
If adding an in_32bit_syscall() helper would help, then by all means
please do so.
--Andy