Re: perf: fuzzer BUG: KASAN: stack-out-of-bounds in __unwind_start
From: Peter Zijlstra
Date: Wed Nov 30 2016 - 06:06:43 EST
On Wed, Nov 30, 2016 at 11:01:29AM +0100, Petr Mladek wrote:
> On Tue 2016-11-29 18:10:38, Peter Zijlstra wrote:
> > In any case, as long as printk has a globally serialized 'log', it, per
> > design, will be worse than the console drivers it's built upon. And them
> > being shit precludes the entire stack from being useful.
>
> I probably still do not understand all the problems with console
> drivers. My understanding is that the problem is that they have
> their own locking and are slow. It means that they are prone to
> deadlock and might block for a long time.
Slow isn't a problem; just limit the crap you want to push down them.
Them taking locks, them using the scheduler and them depending on entire
subsystem state to be 'sane' are the problems.
Take for instance the usb-serial console driver (everybody agrees its
crap, and gregkh did it as a lark, but still it exists), which takes
locks, relies on the scheduler, and depends on the USB subsystem, which in
turn depends on the PCI subsystem.
Now imagine trying to use that for something halfway sensible.
Even the DRM based consoles suffer much the same problems.
Heck, even the 'normal' UART drivers do this :-(
> By comparison, the serialized log buffer has one lock and writing
> is fast. It means that it suffers "only" from deadlocks.
> And we try to address the deadlocks by using the temporary
> per-CPU buffers in critical situations (NMI, locked sections).
The temporary buffers are crap when you never get around to flushing
them. You need a fully lockless and wait-free buffer or you're screwed.
> Of course, it is useless if you have the messages in a buffer
> and can't reach them. But we make a best effort to push them
> to the consoles and the crash dump. It might also be very useful
> to keep the log buffer in persistent memory.
Nothing will crash if you do while (1); in NMI context.
Been there, done that (of course it wasn't _that_ blatant, but the
effect was much the same).
I also have WARN()s in scheduler code; now, most of those will not
indicate fatal conditions, but given the above state of the console
drivers they have a very real chance of deadlocking the system.
And no, we're not going to do WARN_DEFERRED and similar crap. That just
proliferates the utter fail of printk() down the stack and creates more
mess.
> > It mostly works, most of the time, and that seems to be what Linus
> > wants, since it's really the best we can have given the constraints. But
> > for debugging, when you have a UART, it totally blows.
>
> I believe that the early console is the last resort for debugging
> some types of bugs. But many other bugs can be debugged with the
> classic printk(). And there are (production) systems where you
> cannot (easily) or do not want to use early printk all the time.
>
> Another question is the complexity of the printk() code. Especially,
> the big effort to get "perfect" (non-mixed) output is questionable.
I'm saying it's an entirely wasted effort, because no matter how much
complexity you pile on, you can never get it into a useful state, because
it's all built on shit.
So yes, I much prefer to not add more and more complexity into printk.
We should strip it down, not pile more junk on.