Re: [RFC] coredump: Do not interrupt dump for TIF_NOTIFY_SIGNAL
From: Pavel Begunkov
Date: Tue Dec 28 2021 - 06:26:36 EST
On 12/24/21 19:52, Eric W. Biederman wrote:
> Pavel Begunkov <asml.silence@xxxxxxxxx> writes:
>> [...]
>> FWIW, I worked around it in io_uring back then by breaking the
>> dependency.
>
> I am in the middle of untangling the dependencies between ptrace,
> coredump, signal handling, and maybe a few related things.

Sounds great
> Do folks have a reproducer I can look at? Pavel, especially if you
> have something that reproduces on current kernels.

A syz reproducer was triggering it reliably; I'd try reverting the
commit below and testing:
https://syzkaller.appspot.com/text?tag=ReproC&x=15d3600cb00000
It should hang a task. The syzbot report, for reference:
https://syzkaller.appspot.com/bug?extid=27d62ee6f256b186883e
commit 1d5f5ea7cb7d15b9fb1cc82673ebb054f02cd7d2
Author: Pavel Begunkov <asml.silence@xxxxxxxxx>
Date: Fri Oct 29 13:11:33 2021 +0100
io-wq: remove worker to owner tw dependency
INFO: task iou-wrk-6609:6612 blocked for more than 143 seconds.
Not tainted 5.15.0-rc5-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:iou-wrk-6609 state:D stack:27944 pid: 6612 ppid: 6526 flags:0x00004006
Call Trace:
context_switch kernel/sched/core.c:4940 [inline]
__schedule+0xb44/0x5960 kernel/sched/core.c:6287
schedule+0xd3/0x270 kernel/sched/core.c:6366
schedule_timeout+0x1db/0x2a0 kernel/time/timer.c:1857
do_wait_for_common kernel/sched/completion.c:85 [inline]
__wait_for_common kernel/sched/completion.c:106 [inline]
wait_for_common kernel/sched/completion.c:117 [inline]
wait_for_completion+0x176/0x280 kernel/sched/completion.c:138
io_worker_exit fs/io-wq.c:183 [inline]
io_wqe_worker+0x66d/0xc40 fs/io-wq.c:597
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
...
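
To spell out the cycle in that trace: the worker's final reference was
dropped via task_work that only the submitter task could run, while the
submitter sat blocked in the coredump waiting for the worker to exit.
Below is a rough userspace model of the fixed scheme; pthreads stand in
for the kernel pieces, names like put_ref() are made up, and none of
this is the actual io-wq code:

/*
 * Model: the last reference holder fires the completion directly,
 * with no detour through task_work on the owner.  In the broken
 * scheme, the worker's put was queued to the owner, which never ran
 * it while blocked -- the hang shown in the trace above.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int done;            /* stands in for struct completion */
static atomic_int refs = 2; /* one ref each: owner and worker */

static void put_ref(void)
{
	/* Whoever drops the last ref completes directly. */
	if (atomic_fetch_sub(&refs, 1) == 1) {
		pthread_mutex_lock(&lock);
		done = 1;
		pthread_cond_signal(&cond);
		pthread_mutex_unlock(&lock);
	}
}

static void *worker(void *arg) /* plays io_worker_exit() */
{
	(void)arg;
	put_ref();                 /* drop our own ref directly ... */
	pthread_mutex_lock(&lock); /* ... then wait for the rest */
	while (!done)
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, worker, NULL);
	/*
	 * The owner drops its ref before blocking (join here stands
	 * in for the coredump wait), so the worker can always make
	 * progress regardless of what the owner does afterwards.
	 */
	put_ref();
	pthread_join(t, NULL);
	puts("worker exited cleanly");
	return 0;
}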
> As part of that I am in the process of guaranteeing that all of the
> coredump work happens in get_signal, so that neither io_uring cleanup
> nor any cleanup anywhere else runs until the coredump completes.
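
If I read that right, the intended ordering is roughly the stub sketch
below; get_signal(), sig_kernel_coredump(), do_coredump(),
io_uring_task_cancel() and do_exit() are real kernel names, but the
bodies here are made-up placeholders, not the actual patch:

#include <stdbool.h>
#include <stdio.h>

/* Stubs: only the function names are borrowed from the kernel. */
static bool sig_kernel_coredump(int sig) { return sig == 11; /* SIGSEGV */ }
static void do_coredump(int sig) { printf("coredump for %d finished\n", sig); }
static void io_uring_task_cancel(void) { puts("io_uring teardown"); }

static void do_exit(int code)
{
	io_uring_task_cancel(); /* exit-time cleanup, incl. io_uring */
	printf("exit %d\n", code);
}

/*
 * The ordering being described: every bit of coredump work finishes
 * inside get_signal(), before do_exit() lets any cleanup run, so
 * nothing can tear state out from under the dumper.
 */
static void get_signal(int signr)
{
	if (sig_kernel_coredump(signr))
		do_coredump(signr); /* all coredump work happens here */
	do_exit(signr);             /* only then may cleanup proceed */
}

int main(void)
{
	get_signal(11);
	return 0;
}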
> I haven't posted the code for review yet because of the holidays, but
> I am aiming at v5.17 or possibly v5.18, as the code is just about
> ready.
--
Pavel Begunkov