Re: linux-next: stall warnings and deadlock on Arm64 (was: [PATCH] kfence: Avoid stalling...)
From: Paul E. McKenney
Date: Fri Nov 20 2020 - 12:38:27 EST
On Fri, Nov 20, 2020 at 03:22:00PM +0000, Mark Rutland wrote:
> On Fri, Nov 20, 2020 at 06:39:28AM -0800, Paul E. McKenney wrote:
> > On Fri, Nov 20, 2020 at 03:19:28PM +0100, Marco Elver wrote:
> > > I found that disabling ftrace for some of kernel/rcu (see below) solved
> > > the stalls (and, I assume, the deadlock reports as a side-effect),
> > > resulting in a successful boot.
> > >
> > > Does that provide any additional clues? I tried to narrow it down to 1-2
> > > files, but that doesn't seem to work.
> >
> > There were similar issues during the x86/entry work. Are the ARM guys
> > doing arm64/entry work now?
>
> I'm currently looking at it. I had been trying to shift things to C for
> a while, and right now I'm trying to fix the lockdep state tracking,
> which is requiring untangling lockdep/rcu/tracing.
>
> The main issue I see remaining atm is that we don't save/restore the
> lockdep state over exceptions taken from kernel to kernel. That could
> result in lockdep thinking IRQs are disabled when they're actually
> enabled (because code in the nested context might do a save/restore
> while IRQs are disabled, then return to a context where IRQs are
> enabled), but AFAICT shouldn't result in the inverse in most cases since
> the non-NMI handlers all call lockdep_hardirqs_disabled().
>
> I'm at a loss to explain the rcu vs ftrace bits, so if you have any
> pointers to the issues seen with the x86 rework that'd be quite handy.
There were several over a number of months. I especially recall issues
with the direct-from-idle execution of smp_call_function*() handlers,
and also with some of the special cases in the entry code, for example,
reentering the kernel from the kernel. The latter could cause RCU not to
be watching when it should have been, or vice versa.
I would of course be most aware of the issues that impinged on RCU
and that were located by rcutorture. This is actually not hard to run,
especially if the ARM bits in the scripting have managed to avoid bitrot.
The "modprobe rcutorture" approach has fewer dependencies. Either way:
https://paulmck.livejournal.com/57769.html and later posts.
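For concreteness, here is roughly what each approach looks like. This is
only a sketch: the sleep time, CPU count, duration, and scenario name below
are illustrative, not recommendations.

    # Module approach: load, let it run a while, unload, read the summary.
    modprobe rcutorture
    sleep 600
    rmmod rcutorture
    dmesg | grep -i torture

    # Scripted approach: builds and boots the test kernels under qemu/KVM.
    tools/testing/selftests/rcutorture/bin/kvm.sh \
        --cpus 4 --duration 10 --configs TREE01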
Thanx, Paul
> Thanks,
> Mark.
>
> >
> > Thanx, Paul
> >
> > > Thanks,
> > > -- Marco
> > >
> > > ------ >8 ------
> > >
> > > diff --git a/kernel/rcu/Makefile b/kernel/rcu/Makefile
> > > index 0cfb009a99b9..678b4b094f94 100644
> > > --- a/kernel/rcu/Makefile
> > > +++ b/kernel/rcu/Makefile
> > > @@ -3,6 +3,13 @@
> > > # and is generally not a function of system call inputs.
> > > KCOV_INSTRUMENT := n
> > >
> > > +ifdef CONFIG_FUNCTION_TRACER
> > > +CFLAGS_REMOVE_update.o = $(CC_FLAGS_FTRACE)
> > > +CFLAGS_REMOVE_sync.o = $(CC_FLAGS_FTRACE)
> > > +CFLAGS_REMOVE_srcutree.o = $(CC_FLAGS_FTRACE)
> > > +CFLAGS_REMOVE_tree.o = $(CC_FLAGS_FTRACE)
> > > +endif
> > > +
> > > ifeq ($(CONFIG_KCSAN),y)
> > > KBUILD_CFLAGS += -g -fno-omit-frame-pointer
> > > endif
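[ Not part of Marco's patch, just an illustrative way to check that the
  CFLAGS_REMOVE lines above took effect: rebuild the directory and confirm
  the objects no longer carry compiler-generated ftrace entry points. What
  to grep for depends on the architecture and toolchain
  (patchable-function-entry sections on recent arm64 toolchains, __fentry__
  calls on x86). ]

    make kernel/rcu/
    # arm64 / -fpatchable-function-entry: section should now be absent
    ${CROSS_COMPILE}objdump -h kernel/rcu/tree.o | grep patchable_function_entries
    # x86 / -mfentry: count should now be 0
    objdump -dr kernel/rcu/tree.o | grep -c __fentry__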