Re: [BUG] 4.4.262: infinite loop in futex_unlock_pi (EAGAIN loop)

From: Greg Kroah-Hartman
Date: Thu Jul 22 2021 - 10:11:46 EST


On Mon, Jul 19, 2021 at 12:24:18PM -0400, Joe Korty wrote:
> [BUG] 4.4.262: infinite loop in futex_unlock_pi (EAGAIN loop)
>
> [ replicator, attached ]
> [ workaround patch that crudely clears the loop, attached ]
> [ 4.4.256 does _not_ have this problem, 4.4.262 is known to have it ]
>
> When a certain secure-site application is run on 4.4.262, it locks up and
> is unkillable. Backtraces from crash(8) and sysrq show that the application
> is looping in the kernel in futex_unlock_pi.
>
> Between 4.4.256 and 4.4.257, this 4.12 patch was backported into 4.4:
>
> 73d786b ("futex: Rework inconsistent rt_mutex/futex_q state")
>
> This patch has the following comment:
>
> The only problem is that this breaks RT timeliness guarantees. That
> is, consider the following scenario:
>
>     T1 and T2 are both pinned to CPU0. prio(T2) > prio(T1)
>
>     CPU0
>
>     T1
>       lock_pi()
>       queue_me() <- Waiter is visible
>
>     preemption
>
>     T2
>       unlock_pi()
>         loops with -EAGAIN forever
>
> Which is undesirable for PI primitives. Future patches will rectify
> this.
>
> This describes the situation exactly. To prove this, we developed a
> small kernel patch that, on loop detection, puts a message into the
> kernel log for just the first occurrence, keeps a count of the number
> of occurrences seen since boot, and tries to break out of the loop via
> usleep_range(1000,1000). Note that the patch is not really needed for
> replication; it merely shows, by 'fixing' the problem, that it really
> is the EAGAIN loop that triggers the lockup.
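>
> As an illustration only (the attached patch is the authoritative
> version; the names and exact placement below are illustrative), the
> workaround described above amounts to a hunk of roughly this shape in
> the futex_unlock_pi() retry path:
>
>	if (ret == -EAGAIN) {
>		static bool reported;
>		static unsigned long eagain_count;
>
>		/* Log only the first occurrence, but count them all. */
>		if (!reported) {
>			pr_info("futex_unlock_pi: -EAGAIN loop detected\n");
>			reported = true;
>		}
>		eagain_count++;
>
>		/*
>		 * Crudely break the live-lock: sleep so the preempted,
>		 * lower-priority waiter can run and finish queueing
>		 * itself on the rt_mutex.
>		 */
>		usleep_range(1000, 1000);
>		goto retry;
>	}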
>
> Along with this patch, we submit a replicator. Running the replicator
> with the patch applied shows that 4.4.256 does not have the problem,
> while 4.4.267 and the latest 4.4, 4.4.275, do. In addition, 4.9.274
> (tested without the patch) does not have the problem.
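>
> For reference, a minimal replicator of this kind might look roughly as
> follows (an illustrative sketch, not the actual attachment; the
> priorities, iteration count, and usleep() window are arbitrary, and
> SCHED_FIFO requires root):
>
>	#define _GNU_SOURCE
>	#include <linux/futex.h>
>	#include <pthread.h>
>	#include <sched.h>
>	#include <stdatomic.h>
>	#include <stdio.h>
>	#include <sys/syscall.h>
>	#include <unistd.h>
>
>	/* PI futex word: 0 = unlocked, otherwise the owner's TID. */
>	static _Atomic unsigned int futex_word;
>
>	static long futex(unsigned int *uaddr, int op)
>	{
>		return syscall(SYS_futex, uaddr, op, 0, NULL, NULL, 0);
>	}
>
>	/* Pin the calling thread to CPU0 and make it SCHED_FIFO at @prio. */
>	static void pin_cpu0_fifo(int prio)
>	{
>		struct sched_param sp = { .sched_priority = prio };
>		cpu_set_t cpus;
>
>		CPU_ZERO(&cpus);
>		CPU_SET(0, &cpus);
>		sched_setaffinity(0, sizeof(cpus), &cpus);
>		sched_setscheduler(0, SCHED_FIFO, &sp);
>	}
>
>	/* T1: the lower-priority thread, repeatedly blocks on the PI futex. */
>	static void *t1_waiter(void *arg)
>	{
>		pin_cpu0_fifo(10);
>		for (;;) {
>			futex((unsigned int *)&futex_word, FUTEX_LOCK_PI);
>			futex((unsigned int *)&futex_word, FUTEX_UNLOCK_PI);
>		}
>		return arg;
>	}
>
>	int main(void)
>	{
>		unsigned int tid = syscall(SYS_gettid);
>		unsigned int expected;
>		pthread_t t1;
>
>		pin_cpu0_fifo(20);	/* T2: higher priority, same CPU */
>		pthread_create(&t1, NULL, t1_waiter, NULL);
>
>		for (int i = 0; i < 1000000; i++) {
>			/* Take the lock on the uncontended fast path. */
>			expected = 0;
>			if (!atomic_compare_exchange_strong(&futex_word,
>							    &expected, tid))
>				futex((unsigned int *)&futex_word, FUTEX_LOCK_PI);
>			/*
>			 * Let T1 reach queue_me(), then preempt it and
>			 * unlock.  On a bad kernel this unlock can spin
>			 * on -EAGAIN forever, since T1 never runs again.
>			 */
>			usleep(100);
>			futex((unsigned int *)&futex_word, FUTEX_UNLOCK_PI);
>		}
>		puts("completed without hanging");
>		return 0;
>	}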
>
> From this version pattern there may be some futex fixup patch that was
> backported into 4.9 but failed to make it into 4.4.

Odd, I can't seem to find anything that we missed. Can you dig to see
if there is something that we need to do here so we can resolve this?

thanks,

greg k-h