Re: [PATCH 2/3] panic: Add option to dump all CPUs backtraces in panic_print

From: Petr Mladek
Date: Thu Jan 13 2022 - 04:31:45 EST


On Tue 2021-11-09 17:28:47, Guilherme G. Piccoli wrote:
> Currently the "panic_print" parameter/sysctl allows some interesting debug
> information to be printed during a panic event. This is useful, for example,
> in cases where the user cannot kdump due to resource limits, or if the user
> collects panic logs on a serial console (or pstore) and prefers a fast
> reboot instead of a kdump.

Yes, I have missed this possibility many times.
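
For context, "panic_print" is just a bit mask handled in kernel/panic.c.
From memory (so take the exact values as an approximation), the existing
bits look like the list below, and the patch presumably appends a new
one for the all-CPU backtrace:

#define PANIC_PRINT_TASK_INFO		0x00000001
#define PANIC_PRINT_MEM_INFO		0x00000002
#define PANIC_PRINT_TIMER_INFO		0x00000004
#define PANIC_PRINT_LOCK_INFO		0x00000008
#define PANIC_PRINT_FTRACE_INFO		0x00000010
#define PANIC_PRINT_ALL_PRINTK_MSG	0x00000020
/* New bit proposed by this patch; name and value assumed here. */
#define PANIC_PRINT_ALL_CPU_BT		0x00000040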

> It happens that currently there is no way to see all CPUs' backtraces in
> a panic using "panic_print" on architectures that support that. We do
> have the "oops_all_cpu_backtrace" sysctl, but although they partially
> overlap in functionality, they are orthogonal in nature: "panic_print"
> is a panic tuning (and we have panics without oopses, like direct calls
> to panic() or other paths that don't go through the oops_enter()
> function), while the original purpose of "oops_all_cpu_backtrace" is to
> provide more information on oopses for cases in which the users desire
> to continue running the kernel even after an oops, i.e., it is used in
> non-panic scenarios.

panic() already prevents a double backtrace of the CPU that oopsed, see:

#ifdef CONFIG_DEBUG_BUGVERBOSE
	/*
	 * Avoid nested stack-dumping if a panic occurs during oops processing
	 */
	if (!test_taint(TAINT_DIE) && oops_in_progress <= 1)
		dump_stack();
#endif

It should be possible to do something similar also for backtraces
on all CPUs.
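
A minimal sketch of what I mean, assuming the new bit is called
PANIC_PRINT_ALL_CPU_BT and reusing trigger_all_cpu_backtrace(); the
"already printed" condition is only a placeholder, the real check would
need a flag set by whichever path already dumped all CPUs:

	/*
	 * Hypothetical flag, set by the oops/softlockup code once it
	 * has already triggered an all-CPU backtrace.
	 */
	static bool all_cpu_bt_done;

	if ((panic_print & PANIC_PRINT_ALL_CPU_BT) && !all_cpu_bt_done)
		trigger_all_cpu_backtrace();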

There are more situations where the backtraces are printed and panic()
is called, for example: softlockup_panic combined with
softlockup_all_cpu_backtrace.
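
Roughly, paraphrased from memory of kernel/watchdog.c (so not the exact
code), that path looks like:

	/* Paraphrased sketch, not a verbatim quote of the watchdog code. */
	if (softlockup_all_cpu_backtrace)
		trigger_allbutself_cpu_backtrace();

	if (softlockup_panic)
		panic("softlockup: hung tasks");

so the panic_print handler would print the very same backtraces again.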

Well, it is just a nice-to-have. People probably will not use these
options together. And it is better to have the backtraces twice
than not to have them at all.

> So, we hereby introduce an additional bit for "panic_print" to allow
> dumping the CPUs backtraces during a panic event.
>
> Signed-off-by: Guilherme G. Piccoli <gpiccoli@xxxxxxxxxx>

Feel free to use:

Reviewed-by: Petr Mladek <pmladek@xxxxxxxx>

Best Regards,
Petr