Re: Can we drop upstream Linux x32 support?
From: Catalin Marinas
Date: Tue Dec 11 2018 - 06:32:39 EST
On Tue, Dec 11, 2018 at 10:02:45AM +0100, Arnd Bergmann wrote:
> On Tue, Dec 11, 2018 at 6:35 AM Andy Lutomirski <luto@xxxxxxxxxx> wrote:
> > I tried to understand what's going on. As far as I can tell, most of
> > the magic is the fact that __kernel_long_t and __kernel_ulong_t are
> > 64-bit as seen by x32 user code. This means that a decent number of
> > uapi structures are the same on x32 and x86_64. Syscalls that only
> > use structures like this should route to the x86_64 entry points. But
> > the implementation is still highly dubious -- in_compat_syscall() will
> > be *true* in such system calls,
>
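If I read this right, the failure mode looks something like the sketch
below (hypothetical code, made-up foo_* names, just to illustrate): the
structure layout matches x86_64 exactly, so the plain 64-bit copy would
be correct, yet any in_compat_syscall() check in the handler still
fires for an x32 caller:

struct foo_info {
	__kernel_long_t	start;	/* 64-bit for both x32 and x86_64 */
	__kernel_long_t	len;
};

static long foo_get_info(struct foo_info __user *uinfo,
			 const struct foo_info *info)
{
	if (in_compat_syscall())
		/* x32 lands here, although the 64-bit copy_to_user()
		 * below is what it actually needs */
		return foo_compat_get_info(uinfo, info);

	return copy_to_user(uinfo, info, sizeof(*info)) ? -EFAULT : 0;
}
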
> I think the fundamental issue was that the intention had always been
> to use only the 64-bit entry points for system calls, but the most
> complex one we have -- ioctl() -- has to use the compat entry point
> because device drivers define their own data structures using 'long'
> and pointer members and they need translation, as well as
> matching in_compat_syscall() checks. This in turn breaks down
> again whenever a driver defines an ioctl command that takes
> a __kernel_long_t or a derived type like timespec as its argument.
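Indeed, and to make the dilemma concrete (an entirely made-up driver
ABI, not from any real driver):

/* Case 1: plain 'long' and a pointer -- 32-bit in x32 userspace, so
 * this ioctl needs the compat entry point and translation into the
 * kernel's 64-bit view of the structure. */
struct foo_map {
	unsigned long	flags;
	void __user	*buf;
};
#define FOO_IOC_MAP	_IOW('f', 1, struct foo_map)

/* Case 2: __kernel_long_t-based (a timespec-like layout) -- already
 * 64-bit in x32 userspace, so the compat translation that case 1
 * forced us into is now exactly the wrong thing to do. */
struct foo_wait {
	__kernel_long_t	tv_sec;
	__kernel_long_t	tv_nsec;
};
#define FOO_IOC_WAIT	_IOW('f', 2, struct foo_wait)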
With arm64 ILP32 we tried to avoid the ioctl() problem by keeping
__kernel_long_t 32-bit, IOW mimicking the arm32 (compat) ABI. The
biggest pain point is signals, where the state is completely different
from arm32 (more and wider registers) and cannot be handled by the
compat layer.
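For reference, the difference boils down to the uapi typedefs (quoting
from memory, the exact lines may differ slightly): x32 overrides
__kernel_long_t to be 64-bit, while the arm64 ILP32 patches keep the
asm-generic definitions, where 'long' is 32-bit under ILP32:

/* arch/x86/include/uapi/asm/posix_types_x32.h */
typedef long long		__kernel_long_t;
typedef unsigned long long	__kernel_ulong_t;

/* include/uapi/asm-generic/posix_types.h -- what arm64 ILP32 picks up */
typedef long			__kernel_long_t;	/* 32-bit for ILP32 */
typedef unsigned long		__kernel_ulong_t;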
Fortunately, we haven't merged it yet, as we face the same dilemma
about real users and about who will regularly test the ABI in the long
run. In the meantime, I'm watching this thread with interest ;).
--
Catalin