On Wed, Jan 6, 2016 at 10:42 AM, Stas Sergeev <stsp@xxxxxxx> wrote:
06.01.2016 21:05, Andy Lutomirski wrote:
On Wed, Jan 6, 2016 at 7:45 AM, Stas Sergeev <stsp@xxxxxxx> wrote:

Hello.

swapcontext() can be used with signal handlers, since
it swaps the signal masks together with the other
parts of the context.
Unfortunately, Linux implements sigaltstack()
in a way that makes it impossible to use together with
swapcontext().
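
For reference, a minimal sketch of the pattern being discussed: an
SA_ONSTACK handler that leaves via swapcontext(). The signal number,
stack size and names are arbitrary; this is an illustration, not the
reporter's actual code.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <ucontext.h>

static ucontext_t main_ctx, handler_ctx;
static char altstack[64 * 1024];        /* plays the role of SIGSTKSZ */
static volatile sig_atomic_t signalled;

static void handler(int sig)
{
    signalled = 1;
    /* Leave the handler without returning: save this context (it lives
     * on the alternate stack) and resume main.  swapcontext() also
     * restores main's signal mask.  The frame parked on the alt stack
     * is what a later SA_ONSTACK signal would overwrite. */
    swapcontext(&handler_ctx, &main_ctx);
}

int main(void)
{
    stack_t ss;
    struct sigaction sa;

    memset(&ss, 0, sizeof(ss));
    ss.ss_sp = altstack;
    ss.ss_size = sizeof(altstack);
    sigaltstack(&ss, NULL);

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handler;
    sa.sa_flags = SA_ONSTACK;
    sigaction(SIGUSR1, &sa, NULL);

    getcontext(&main_ctx);
    if (!signalled)
        raise(SIGUSR1);   /* handler runs on the alt stack, swaps back here */

    printf("back on the main stack; handler context parked on the alt stack\n");
    return 0;
}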

Per the man page, sigaltstack() is allowed to return
EPERM if the process is altering its sigaltstack while
running on the sigaltstack. This is likely needed to
consistently return oss->ss_flags, which indicates
whether the process is currently on the sigaltstack or not.
Unfortunately, Linux takes that permission to return
EPERM too literally: it returns EPERM even if you
don't want to switch to another sigaltstack, but
only want to disable the sigaltstack with SS_DISABLE.
To my reading of the man page, this is not the desired
behaviour. Moreover, you can't use swapcontext()
without disabling the sigaltstack first, or the stack will
be re-used and overwritten by a subsequent signal.
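
The complaint is easy to reproduce with a small test along these lines
(again illustrative, not the original code): register an alt stack, take
an SA_ONSTACK signal, and try SS_DISABLE from inside the handler. On a
kernel that behaves as described above, the sigaltstack() call in the
handler fails with EPERM.

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

static char altstack[64 * 1024];

static void handler(int sig)
{
    stack_t ss;

    memset(&ss, 0, sizeof(ss));
    ss.ss_flags = SS_DISABLE;
    if (sigaltstack(&ss, NULL) == -1)
        /* expected here: EPERM, because we are running on the alt stack */
        printf("SS_DISABLE from the handler: %s\n", strerror(errno));
    else
        printf("SS_DISABLE from the handler succeeded\n");
}

int main(void)
{
    stack_t ss;
    struct sigaction sa;

    memset(&ss, 0, sizeof(ss));
    ss.ss_sp = altstack;
    ss.ss_size = sizeof(altstack);
    sigaltstack(&ss, NULL);

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handler;
    sa.sa_flags = SA_ONSTACK;
    sigaction(SIGUSR1, &sa, NULL);

    raise(SIGUSR1);
    return 0;
}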

The EPERM thing is probably also there to preserve the behavior that
nested SA_ONSTACK signals are supposed to work.

Could you please clarify?
If I set up another stack inside the sighandler, the
nested SA_ONSTACK signal can just use that new stack,
which seems safe and sane. So I don't think the EPERM helps
the nested signals - or could you explain the possible breakage
scenario?

It's probably safe in most cases, but the current behavior explicitly
checks whether you're on the alt stack during signal delivery, so as
not to re-use it accidentally.  Who knows?
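
That delivery-time check can be observed directly; in the sketch below
(signal numbers and sizes are arbitrary), a second SA_ONSTACK signal
raised from inside the first handler keeps running on the same,
already-active alternate stack instead of being placed at its top again.

#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static char altstack[64 * 1024];

static int on_altstack(const void *p)
{
    uintptr_t a = (uintptr_t)p, lo = (uintptr_t)altstack;

    return a >= lo && a < lo + sizeof(altstack);
}

static void inner(int sig)
{
    int marker;
    printf("nested handler on alt stack: %d\n", on_altstack(&marker));
}

static void outer(int sig)
{
    int marker;
    printf("outer handler on alt stack:  %d\n", on_altstack(&marker));
    raise(SIGUSR2);     /* delivered while we are still on the alt stack */
}

int main(void)
{
    stack_t ss;
    struct sigaction sa;

    memset(&ss, 0, sizeof(ss));
    ss.ss_sp = altstack;
    ss.ss_size = sizeof(altstack);
    sigaltstack(&ss, NULL);

    memset(&sa, 0, sizeof(sa));
    sa.sa_flags = SA_ONSTACK;
    sa.sa_handler = outer;
    sigaction(SIGUSR1, &sa, NULL);
    sa.sa_handler = inner;
    sigaction(SIGUSR2, &sa, NULL);

    raise(SIGUSR1);     /* both handlers should report 1 */
    return 0;
}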

The work-around for this is not even trivial: I have
to use shm tricks to duplicate the sigaltstack in
the VA space and move the stack pointer to the other
mirror before calling sigaltstack(). Then I use longjmp()
to restore the stack pointer, and then I can finally use
swapcontext(). This is an unpleasant work-around.
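
A rough sketch of the mirroring half of that work-around, assuming POSIX
shm: the same pages are mapped at two addresses, so code running on the
alt stack through one mapping can, in principle, move its stack pointer
to the equivalent address in the other mapping before calling
sigaltstack(). Only the aliasing is shown here; the actual stack-pointer
switch and the longjmp() back are application-specific and omitted.
(Link with -lrt on older glibc.)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define STACK_SIZE (64 * 1024)

int main(void)
{
    int fd = shm_open("/altstack-mirror-demo", O_RDWR | O_CREAT, 0600);
    char *a, *b;

    if (fd == -1 || ftruncate(fd, STACK_SIZE) == -1) {
        perror("shm setup");
        return 1;
    }
    shm_unlink("/altstack-mirror-demo");

    /* Two views of the same physical pages. */
    a = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    b = mmap(NULL, STACK_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (a == MAP_FAILED || b == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    strcpy(a, "written through mapping A");
    printf("mapping B sees: \"%s\"\n", b);  /* same bytes, other address */

    /* In the real work-around, mapping A would be registered with
     * sigaltstack(); the handler would move SP into mapping B, call
     * sigaltstack(SS_DISABLE) without tripping the on-stack check,
     * and longjmp() back. */
    return 0;
}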

The fix on the kernel side looks simple: the kernel should
just use ss_flags to determine whether the sigaltstack
is active. I can make a patch for that, but the problem
is that the arch-specific code does not use any helper
function to check for the sigaltstack; instead it just does
"if (ss_size)" checks.

Huh?  I'm not sure I understand what you're talking about.  It seems
reasonable to have the invariant that ss_size != 0 if and only if an
alt stack is enabled, and do_sigaltstack seems to enforce that
invariant.
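
To make the invariant concrete, the logic in question can be modelled in
user space roughly as follows. This is a paraphrase for illustration,
with a made-up helper name, not verbatim kernel code.

#include <signal.h>
#include <stdio.h>

/* "Enabled" is tracked as ss_size != 0; the flags reported for a given
 * stack pointer follow from the registered range, and sigaltstack()
 * refuses any change whenever this would report SS_ONSTACK. */
static int model_ss_flags(unsigned long sp, unsigned long ss_sp,
                          unsigned long ss_size)
{
    if (!ss_size)
        return SS_DISABLE;              /* nothing registered */
    if (sp > ss_sp && sp - ss_sp <= ss_size)
        return SS_ONSTACK;              /* currently running on it */
    return 0;                           /* registered, not in use */
}

int main(void)
{
    unsigned long base = 0x1000, size = 0x2000;

    /* SS_ONSTACK is 1 and SS_DISABLE is 2 on Linux */
    printf("off-stack -> %d\n", model_ss_flags(0x9000, base, size));
    printf("on-stack  -> %d (the EPERM case)\n",
           model_ss_flags(0x1800, base, size));
    printf("disabled  -> %d\n", model_ss_flags(0x1800, 0, 0));
    return 0;
}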

But we have that (IMO quite silly) requirement that the
returned oss->ss_flags is consistent.
So if inside the signal handler I use SS_DISABLE and
the kernel translates this into "ss_size = 0", the next
call to sigaltstack() will return 0 in oss->ss_flags.
It should return SS_DISABLE, right?
And it won't set SS_ONSTACK, because you're not on the alt
stack, because there is no alt stack.
Of course, there *was* an alt stack when the signal was delivered, and
you're on that stack.

Exactly.
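
A small probe for that consistency requirement: query the current state
with sigaltstack(NULL, &oss) from the main stack and from inside an
SA_ONSTACK handler, and look at what comes back in oss->ss_flags.
(Illustrative only.)

#include <signal.h>
#include <stdio.h>
#include <string.h>

static char altstack[64 * 1024];

static void report(const char *where)
{
    stack_t oss;

    memset(&oss, 0, sizeof(oss));
    sigaltstack(NULL, &oss);
    printf("%s: ss_flags=%d (SS_ONSTACK=%d, SS_DISABLE=%d)\n",
           where, oss.ss_flags, SS_ONSTACK, SS_DISABLE);
}

static void handler(int sig)
{
    report("in handler ");
}

int main(void)
{
    stack_t ss;
    struct sigaction sa;

    memset(&ss, 0, sizeof(ss));
    ss.ss_sp = altstack;
    ss.ss_size = sizeof(altstack);
    sigaltstack(&ss, NULL);

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handler;
    sa.sa_flags = SA_ONSTACK;
    sigaction(SIGUSR1, &sa, NULL);

    report("main stack ");
    raise(SIGUSR1);
    return 0;
}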

So the patch will need to update
all arches... I wonder if maybe someone can fix that
problem and update the arch-specific code. If not,
I'll probably need to update only the x86-specific code
and add an arch-specific define, which is a bit nasty.

Just change do_sigaltstack?

Hmm, OK. But this can potentially contradict the man page.
And if it's that easy, and we do not even need a consistent
oss->ss_flags, why not remove the EPERM check entirely,
rather than only for SS_DISABLE? Note that if it is removed
only for SS_DISABLE, and SS_DISABLE is still translated to
"ss_size = 0", then with the next sigaltstack() call you can do
whatever you want: the EPERM check will be entirely bypassed.
So if you are fine with even this, I can send the patch to
completely remove the check. Much easier for me. :)

I think the semantics of oss->ss_size are quite bad, but they are
already documented, so I am not sure.

I would send a patch to remove the check, or a patch to add a new
SS_FORCE flag that disables the check.  It should be just a couple of
lines of code.  A selftests patch along with it would help.  Cc
linux-api on all of it.
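
To make the proposal concrete, here is a hypothetical sketch of what
such a flag could look like from user space. SS_FORCE does not exist at
this point, so the value and semantics below are assumptions; on a
kernel without the flag both calls in the handler simply fail, while
with the proposed flag the second one would succeed. A selftest would
essentially assert exactly that.

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>

#ifndef SS_FORCE
#define SS_FORCE 0x100  /* placeholder value, NOT a real ABI constant here */
#endif

static char altstack[64 * 1024];

static void handler(int sig)
{
    stack_t ss;

    memset(&ss, 0, sizeof(ss));
    ss.ss_flags = SS_DISABLE;
    printf("SS_DISABLE            -> %s\n",
           sigaltstack(&ss, NULL) ? strerror(errno) : "ok");

    ss.ss_flags = SS_DISABLE | SS_FORCE;
    printf("SS_DISABLE | SS_FORCE -> %s\n",
           sigaltstack(&ss, NULL) ? strerror(errno) : "ok");
}

int main(void)
{
    stack_t ss;
    struct sigaction sa;

    memset(&ss, 0, sizeof(ss));
    ss.ss_sp = altstack;
    ss.ss_size = sizeof(altstack);
    sigaltstack(&ss, NULL);

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = handler;
    sa.sa_flags = SA_ONSTACK;
    sigaction(SIGUSR1, &sa, NULL);

    raise(SIGUSR1);
    return 0;
}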

BTW, the sigcontext SS stuff is queued for -next.  I doubt it'll make
4.5, since I think all the relevant maintainers are just recovering
from vacations, and I already have a decent backlog of stuff that
hasn't landed in -tip yet.

Thanks for taking care of that!