Re: [tip: sched/urgent] sched/core: Avoid spurious lock dependencies
From: Peter Zijlstra
Date: Fri Nov 22 2019 - 15:21:14 EST
On Fri, Nov 22, 2019 at 09:01:22PM +0100, Sebastian Andrzej Siewior wrote:
> On 2019-11-13 10:06:28 [-0000], tip-bot2 for Peter Zijlstra wrote:
> > sched/core: Avoid spurious lock dependencies
> >
> > While seemingly harmless, __sched_fork() does hrtimer_init(), which,
> > when DEBUG_OBJECTS, can end up doing allocations.
> >
> > This then results in the following lock order:
> >
> > rq->lock
> > zone->lock.rlock
> > batched_entropy_u64.lock
> >
> > Which in turn causes deadlocks when we do wakeups while holding that
> > batched_entropy lock -- as the random code does.
>
> Peter, can it _really_ cause deadlocks? My understanding was that the
> batched_entropy_u64.lock is a per-CPU lock and can _not_ cause a
> deadlock because it can always be acquired on multiple CPUs
> simultaneously (and it is never acquired cross-CPU).
> Lockdep is simply not smart enough to see that and complains about it
> like it would complain about a regular lock in this case.

That part yes. That is, even holding a per-cpu lock you can do a wakeup
to the local cpu and recurse back onto rq->lock.
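
To spell out the cycle lockdep is complaining about, roughly (sketched
from the chain in the changelog, not the full splat):

  init_idle() path:  rq->lock
                       -> zone->lock.rlock           (DEBUG_OBJECTS allocation)
                         -> batched_entropy_u64.lock (get_random_u64())

  random code path:  batched_entropy_u64.lock
                       -> rq->lock                   (wakeup on the local CPU)

That is, rq->lock and batched_entropy_u64.lock each show up on both
sides of the other, which lockdep treats as a potential deadlock.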

However, I don't think it can actually happen, because this is
init_idle(), and we only ever do that on hotplug, so actually creating
the concurrency required for the deadlock might be tricky.

Still, moving that thing out from under the lock was simple and correct.
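
FWIW, the fix is just to do the __sched_fork() -- and thus the
hrtimer_init() -- before taking the locks in init_idle(); roughly (a
sketch of the idea, not the exact diff):

	void init_idle(struct task_struct *idle, int cpu)
	{
		struct rq *rq = cpu_rq(cpu);
		unsigned long flags;

		/*
		 * hrtimer_init() and any DEBUG_OBJECTS allocation it
		 * does now happen outside the scheduler locks.
		 */
		__sched_fork(0, idle);

		raw_spin_lock_irqsave(&idle->pi_lock, flags);
		raw_spin_lock(&rq->lock);

		/* ... the rest of the idle task setup, under the locks ... */

		raw_spin_unlock(&rq->lock);
		raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
	}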