Re: __local_bh_enable_ip() vs lockdep
From: Heiko Carstens
Date: Wed Dec 16 2020 - 13:38:33 EST
On Wed, Dec 16, 2020 at 06:52:59PM +0100, Peter Zijlstra wrote:
> On Tue, Dec 15, 2020 at 02:47:24PM -0500, Steven Rostedt wrote:
> > On Tue, 15 Dec 2020 20:01:52 +0100
> > Heiko Carstens <hca@xxxxxxxxxxxxx> wrote:
> >
> > > Hello,
> > >
> > > the ftrace stack tracer kernel selftest is able to trigger the warning
> > > below from time to time. This looks like an ordering problem
> > > in __local_bh_enable_ip():
> > > first lockdep_softirqs_on() is called, and afterwards the
> > > preempt_count_sub() is traced by ftrace before it has been able to
> > > modify preempt_count:
> >
> > Don't run ftrace stack tracer when debugging lockdep. ;-)
> >
> > /me runs!
>
> Ha!, seriously though; that seems like something we've encountered
> before, but my google-fu is failing me.
>
> Do you remember what, if anything, was the problem with this?
Actually, this looks like:
1a63dcd8765b ("softirq: Reorder trace_softirqs_on to prevent lockdep splat")
I can give it a test, but it seems quite obvious that your patch will
make the problem go away.
> diff --git a/kernel/softirq.c b/kernel/softirq.c
> index d5bfd5e661fc..9d71046ea247 100644
> --- a/kernel/softirq.c
> +++ b/kernel/softirq.c
> @@ -186,7 +186,7 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
> * Keep preemption disabled until we are done with
> * softirq processing:
> */
> - preempt_count_sub(cnt - 1);
> + __preempt_count_sub(cnt - 1);
>
> if (unlikely(!in_interrupt() && local_softirq_pending())) {
> /*
>