Re: [PATCHv7 3/8] printk: introduce per-cpu safe_print seq buffer
From: Steven Rostedt
Date: Wed Feb 01 2017 - 10:52:29 EST
On Tue, 27 Dec 2016 23:16:06 +0900
Sergey Senozhatsky <sergey.senozhatsky@xxxxxxxxx> wrote:
> This patch extends the idea of NMI per-cpu buffers to regions
> that may cause recursive printk() calls and possible deadlocks.
> Namely, printk() can't handle printk() calls from scheduler code
> or from lock debugging code (spin_dump(), for instance), because
> those may run with `sem->lock' or other `critical' locks
> (p->pi_lock, etc.) already taken. One example of such a deadlock:
>
> vprintk_emit()
>  console_unlock()
>   up()                        << raw_spin_lock_irqsave(&sem->lock, flags);
>    wake_up_process()
>     try_to_wake_up()
>      ttwu_queue()
>       ttwu_activate()
>        activate_task()
>         enqueue_task()
>          enqueue_task_fair()
>           cfs_rq_of()
>            task_of()
>             WARN_ON_ONCE(!entity_is_task(se))
>              vprintk_emit()
>               console_trylock()
>                down_trylock()
>                 raw_spin_lock_irqsave(&sem->lock, flags)
>                 ^^^^ deadlock
>
> and some other cases.
>
> Just like in the NMI implementation, the solution uses a per-cpu
> `printk_func' pointer to 'redirect' printk() calls to a 'safe'
> callback that stores messages in a per-cpu buffer and flushes
> them back to the logbuf later.
>
> Usage example:
>
> printk()
>  printk_safe_enter_irqsave(flags)
>  //
>  // any printk() call from here will end up in vprintk_safe(),
>  // which stores messages in a special per-CPU buffer.
>  //
>  printk_safe_exit_irqrestore(flags)
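For anyone reading along without the diff handy: the 'safe' callback
side, as I understand it, is a per-CPU seq buffer that vprintk_safe()
appends to locklessly, plus an irq_work that later pushes the bytes
into the regular logbuf. A simplified from-memory sketch, not the
patch itself (the buffer size, the non-atomic append, and the omitted
irq_work init are my shortcuts; the real code has to handle NMIs
racing the write):

	#include <linux/atomic.h>
	#include <linux/irq_work.h>
	#include <linux/kernel.h>
	#include <linux/percpu.h>

	#define SAFE_LOG_BUF_LEN 2048	/* assumed size, see the patch */

	struct printk_safe_seq_buf {
		atomic_t	len;	/* bytes written so far */
		struct irq_work	work;	/* deferred flush into logbuf */
		unsigned char	buffer[SAFE_LOG_BUF_LEN];
	};
	static DEFINE_PER_CPU(struct printk_safe_seq_buf, safe_print_seq);

	/* runs instead of vprintk_default() inside a safe section */
	static int vprintk_safe(const char *fmt, va_list args)
	{
		struct printk_safe_seq_buf *s = this_cpu_ptr(&safe_print_seq);
		int len = atomic_read(&s->len);
		int add;

		if (len >= sizeof(s->buffer) - 1)
			return 0;	/* buffer full, message is dropped */

		/* append without taking any lock: no sem->lock recursion */
		add = vscnprintf(s->buffer + len, sizeof(s->buffer) - len,
				 fmt, args);
		atomic_add(add, &s->len);

		/* flush to the real logbuf later, from irq_work context */
		irq_work_queue(&s->work);
		return add;
	}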
>
> The 'redirection' mechanism, though, has been reworked, as suggested
> by Petr Mladek. Instead of using a per-cpu @printk_func callback we
> now keep a per-cpu printk-context variable and call the default, nmi,
> or safe vprintk function depending on its value. printk_nmi_enter/exit
> and printk_safe_enter/exit, thus, just set/clear the corresponding
> bits in the printk-context variable.
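The reworked dispatch then reduces to a per-CPU flag check at the top
of printk(). Again only an illustrative sketch; the mask layout and
the exact nesting handling below are my guesses and should be checked
against the actual diff:

	#include <linux/percpu.h>

	/* assumed bit layout: one bit for NMI, low bits count
	 * printk_safe nesting depth */
	#define PRINTK_NMI_CONTEXT_MASK		0x80000000
	#define PRINTK_SAFE_CONTEXT_MASK	0x7fffffff

	static DEFINE_PER_CPU(int, printk_context);

	/* the three backends; vprintk_safe() as sketched above */
	int vprintk_nmi(const char *fmt, va_list args);
	int vprintk_safe(const char *fmt, va_list args);
	int vprintk_default(const char *fmt, va_list args);

	/* every printk() funnels through here instead of going
	 * through a per-cpu @printk_func pointer */
	static int vprintk_func(const char *fmt, va_list args)
	{
		if (this_cpu_read(printk_context) & PRINTK_NMI_CONTEXT_MASK)
			return vprintk_nmi(fmt, args);	/* NMI buffer */

		if (this_cpu_read(printk_context) & PRINTK_SAFE_CONTEXT_MASK)
			return vprintk_safe(fmt, args);	/* safe buffer */

		return vprintk_default(fmt, args);	/* normal logbuf */
	}

	/* printk_safe_enter/exit just flip the per-CPU state; using a
	 * counter in the low bits lets safe sections nest */
	static void __printk_safe_enter(void)
	{
		this_cpu_inc(printk_context);
	}

	static void __printk_safe_exit(void)
	{
		this_cpu_dec(printk_context);
	}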
>
> The patch only adds printk_safe support; it is not used anywhere yet.
>
> Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@xxxxxxxxx>
/me does the actual reviewed-by now :-p
Yes, I like this approach. I probably would have done it pretty much
the same way.
Reviewed-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
-- Steve