Re: [syzbot] [kernel?] INFO: task hung in restrict_one_thread_callback
From: Günther Noack
Date: Tue Feb 24 2026 - 05:01:05 EST
On Tue, Feb 24, 2026 at 08:10:30AM +0800, Hillf Danton wrote:
> On Fri, Feb 20, 2026 at 03:11:21AM -0800, syzbot wrote:
> > Hello,
> >
> > syzbot found the following issue on:
> >
> > HEAD commit: 635c467cc14e Add linux-next specific files for 20260213
> > git tree: linux-next
> > console output: https://syzkaller.appspot.com/x/log.txt?x=1452f6e6580000
> > kernel config: https://syzkaller.appspot.com/x/.config?x=61690c38d1398936
> > dashboard link: https://syzkaller.appspot.com/bug?extid=7ea2f5e9dfd468201817
> > compiler: Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
> > syz repro: https://syzkaller.appspot.com/x/repro.syz?x=16e41c02580000
> > C reproducer: https://syzkaller.appspot.com/x/repro.c?x=15813652580000
>
> #syz test
>
> --- x/security/landlock/tsync.c
> +++ y/security/landlock/tsync.c
> @@ -540,11 +540,8 @@ int landlock_restrict_sibling_threads(co
> * of for_each_thread(). We can reset it on each loop iteration because
> * all previous loop iterations are done with it already.
> *
> - * num_preparing is initialized to 1 so that the counter can not go to 0
> - * and mark the completion as done before all task works are registered.
> - * We decrement it at the end of the loop body.
> */
> - atomic_set(&shared_ctx.num_preparing, 1);
> + atomic_set(&shared_ctx.num_preparing, 0);
> reinit_completion(&shared_ctx.all_prepared);
>
> /*
> @@ -553,11 +550,7 @@ int landlock_restrict_sibling_threads(co
> */
> found_more_threads = schedule_task_work(&works, &shared_ctx);
>
> - /*
> - * Decrement num_preparing for current, to undo that we initialized it
> - * to 1 a few lines above.
> - */
> - if (atomic_dec_return(&shared_ctx.num_preparing) > 0) {
> + if (atomic_read(&shared_ctx.num_preparing) > 0) {
> if (wait_for_completion_interruptible(
> &shared_ctx.all_prepared)) {
> /* In case of interruption, we need to retry the system call. */
> --
Hello Hillf!
Thanks for your contribution!
We have already analyzed this bug in an adjacent mail thread and have a
tentative patch which we know will fix the issue:
https://lore.kernel.org/all/20260224062729.2908692-1-dingyihan@xxxxxxxxxxxxx/
–Günther