Re: [PATCHv2 3/5] x86/mm: fix native mmap() in compat bins and vice-versa

From: Dmitry Safonov
Date: Wed Jan 18 2017 - 06:38:41 EST


On 01/17/2017 11:29 PM, Andy Lutomirski wrote:
> On Mon, Jan 16, 2017 at 4:33 AM, Dmitry Safonov <dsafonov@xxxxxxxxxxxxx> wrote:
>> Fix 32-bit compat_sys_mmap() mapping a VMA above 4 GiB in 64-bit binaries
>> and 64-bit sys_mmap() mapping a VMA only below 4 GiB in 32-bit binaries.
>> Changed arch_get_unmapped_area{,_topdown}() to recompute mmap_base
>> for those cases and to use the corresponding high/low limits for
>> vm_unmapped_area().
>> Recomputing mmap_base may make compat sys_mmap() in 64-bit binaries
>> a little slower than native mmap(), which uses the mmap_base already
>> known from exec time - but since that case returned a buggy address
>> before, it appears to have been unused, so no ABI actually in use sees
>> a performance degradation.

> This looks plausibly correct but rather weird -- why does this code
> need to distinguish between all four cases (pure 32-bit, pure 64-bit,
> 64-bit mmap layout doing 32-bit call, 32-bit layout doing 64-bit
> call)?

Only because it needs to know whether mm->mmap_base was initially
computed for 32-bit or for 64-bit.


>> Can be optimized in the future by introducing mmap_compat_{,legacy}_base
>> in mm_struct.

> Hmm. Would it make sense to do it this way from the beginning?

That would, but mm_struct is in generic code; if adding those new bases
there is fine, then I'll do that in v3.

It will look something like:

    if (in_compat_syscall())
        return current->mm->mmap_compat_base;
    else
        return current->mm->mmap_base;


> If adding an in_32bit_syscall() helper would help, then by all means
> please do so.

> --Andy



--
Dmitry