Re: [regression] x86/signal/64: Fix SS handling for signals delivered to 64-bit programs breaks dosemu

From: Stas Sergeev
Date: Thu Aug 13 2015 - 12:04:08 EST


13.08.2015 18:38, Andy Lutomirski wrote:
On Thu, Aug 13, 2015 at 8:22 AM, Stas Sergeev <stsp@xxxxxxx> wrote:
13.08.2015 17:58, Andy Lutomirski wrote:

On Thu, Aug 13, 2015 at 5:44 AM, Stas Sergeev <stsp@xxxxxxx> wrote:
13.08.2015 11:39, Ingo Molnar wrote:
* Andy Lutomirski <luto@xxxxxxxxxxxxxx> wrote:


OK.
I'll try to test the patch tomorrow, but I think the sigreturn()
capability detection is still needed to easily replace the iret
trampoline in userspace (without generating a signal and testing by
hand). It can of course be done with a run-time kernel version check...
That feature is so specialized that I think you should just probe it.

void foo(...) {
        sigcontext->ss = 7;     /* selector 7 = LDT entry 0, RPL 3 */
}

modify_ldt(initialize descriptor 0);
sigaction(SIGUSR1, foo, SA_SIGINFO);
raise(SIGUSR1);                 /* deliver the probe signal */
if (ss == 7)
        yay;

Fortunately, all kernels that restore ss also have espfix64, so you
don't need to worry about esp[31:16] corruption on those kernels
either.
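
For concreteness, a fleshed-out version of that probe could look like
the sketch below. It assumes glibc's x86-64 ucontext layout, where
gregs[REG_CSGSFS] packs cs, gs, fs and the ss/__pad0 slot as four
16-bit fields; everything else is ordinary modify_ldt()/sigaction()
usage, and the output strings are illustrative.

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <ucontext.h>
#include <unistd.h>
#include <asm/ldt.h>                    /* struct user_desc */

static void foo(int sig, siginfo_t *si, void *ctx)
{
        ucontext_t *uc = ctx;
        (void)sig; (void)si;            /* unused */
        /* Write selector 7 (LDT entry 0, RPL 3) into the SS slot,
           i.e. the high 16 bits of the packed cs/gs/fs/ss word. */
        uc->uc_mcontext.gregs[REG_CSGSFS] =
                (uc->uc_mcontext.gregs[REG_CSGSFS] & 0xffffffffffffULL) |
                (7ULL << 48);
}

int main(void)
{
        /* LDT entry 0: flat writable 32-bit data segment, so running
           with SS = 7 is harmless if the kernel actually loads it. */
        struct user_desc desc;
        memset(&desc, 0, sizeof(desc));
        desc.limit = 0xfffff;
        desc.seg_32bit = 1;
        desc.limit_in_pages = 1;
        syscall(SYS_modify_ldt, 1, &desc, sizeof(desc));

        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = foo;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGUSR1, &sa, NULL);
        raise(SIGUSR1);

        unsigned short ss;
        __asm__ volatile ("mov %%ss, %0" : "=r" (ss));
        puts(ss == 7 ? "yay: sigreturn() restored SS from the sigcontext"
                     : "old behavior: SS forced back to __USER_DS");
        return 0;
}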

I suppose we could add a new uc_flag to indicate that ss is saved and
restored, though. Ingo, hpa: any thoughts on that? There will always
be some kernel versions that save and restore ss but don't set the
flag, though.

So this new flag would essentially be a 'the ss save/restore bug is
fixed for sure' flag, not covering old kernels that happen to have the
correct behavior, right?

Could you please map out the range of kernel versions involved - which
ones:

- 'never do the right thing'
- 'do the right thing sometimes'
- 'do the right thing always, but by accident'
- 'do the right thing always and intentionally'

?

I'd hate to complicate a legacy ABI any more. My gut feeling is to let
apps either assume that the kernel works right, or probe the actual
behavior. Adding the flag just makes it easy to screw certain kernel
versions that would still work fine if the app used actual probing. So
I don't see the flag as an improvement.

If your patch fixes the regression, that would be a good first step.
I've tested the patch.
It doesn't fix the problem.
It allows dosemu to save the ss the old way, but, because dosemu
doesn't save it to the place sigreturn() expects (sigcontext.__pad0),
it crashes on sigreturn().

So the problem can't be fixed this way --> NACK to the patch.
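
For reference, the slot sigreturn() reads SS from is the old padding
word in the 64-bit sigcontext; roughly (abbreviated from
arch/x86/include/uapi/asm/sigcontext.h):

struct sigcontext {
        /* ... r8-r15, di, si, bp, bx, dx, ax, cx, sp, ip, flags ... */
        unsigned short cs;
        unsigned short gs;
        unsigned short fs;
        unsigned short ss;      /* was "__pad0" before v4.1 */
        /* ... err, trapno, oldmask, cr2, fpstate ... */
};

A program that stores its SS anywhere else leaves stale bits in that
word for sigreturn() to load.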

I may be unavailable for further testing till next week.
I'm still fighting to get DOSEMU to run at all in my VM.

I must be missing something. What ends up in ss/__pad0? Wouldn't it
contain whatever signal delivery put there (i.e. some valid ss value)?
The crash happens when the DOS program terminates.
At that point dosemu subverts the execution flow by replacing the
segregs and cs/ip, ss/sp in sigcontext with its own. But __pad0 still
holds the DOS SS, which crashes because (presumably) the DOS LDT has
just been removed.
That's unfortunate.

I don't really know what to do about this. My stupid heuristic for
signal delivery seems unlikely to cause problems, but I'm not coming
up with a great heuristic for detecting when a program that *modifies*
sigcontext hasn't set all the fields. Even adding a flag won't really
help here, since DOSEMU won't know to manipulate the flag.

Ingo, here's the situation, assuming I remember the versions right:

v4.0 and before: If we try to deliver a signal while SS is bad, we
fail and the process dies. If SS is good but nonstandard, we end up
in the signal handler with whatever SS value was loaded when the
signal was sent. We do *not* put SS anywhere in the sigcontext, so
the only way for a program to figure out what SS was is to look at the
HW state before making any syscalls. We also don't even try to
restore SS, so SS is unconditionally set to __USER_DS, necessitating
nasty workarounds (and breaking all kinds of test cases).

v4.1 and current -linus: We always set SS to __USER_DS when delivering
a signal. We save the old SS in the sigcontext and restore it, just
like 32-bit signals.

My patch: We leave SS alone when delivering a signal, unless it's
invalid, in which case we replace it with __USER_DS. We still save
the old SS in the sigcontext and restore it on return.
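
In rough pseudocode (not the literal patch; ss_is_usable() stands in
for the real descriptor validity check), the delivery-time logic is:

        sc->ss = regs->ss;              /* save the old SS for sigreturn() */
        if (!ss_is_usable(regs->ss))    /* hypothetical validity check */
                regs->ss = __USER_DS;   /* only replace an unusable SS */
        /* the handler now runs on a usable SS; sigreturn() reloads sc->ss */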

Apparently the remaining regression is that DOSEMU doesn't realize
that SS is saved, so, when it tries to return to full 64-bit mode
after a signal that hit in 16-bit mode, it fails because it has
invalidated the old SS descriptor in the meantime.


So... what do we do about it? We could revert the whole mess. We
could tell everyone to fix their DOSEMU, which violates policy and is
especially annoying given how much effort we've put into keeping
16-bit mode fully functional lately. We could add yet more heuristics
and teach sigreturn to ignore the saved SS value in sigcontext if the
saved CS is 64-bit and the saved SS is unusable.
Andy, why do you constantly ignore the proposal to make the new
behaviour explicitly controllable? You don't have to agree with it,
but you could at least comment on that possibility and/or mention it
along with the ones you listed above.