Re: [PATCH v2 3/3] printk/nmi: Prevent deadlock when accessing the main log buffer in NMI

From: Petr Mladek
Date: Thu Jun 28 2018 - 15:04:02 EST


On Thu 2018-06-28 11:25:07, Sergey Senozhatsky wrote:
> On (06/27/18 16:20), Petr Mladek wrote:
> > +/*
> > + * Marks a code that might produce many messages in NMI context
> > + * and the risk of losing them is more critical than eventual
> > + * reordering.
> > + *
> > + * It has effect only when called in NMI context. Then printk()
> > + * will try to store the messages into the main logbuf directly
> > + * and use the per-CPU buffers only as a fallback when the lock
> > + * is not available.
> > + */
> > +void printk_nmi_direct_enter(void)
> > +{
> > + if (this_cpu_read(printk_context) & PRINTK_NMI_CONTEXT_MASK)
> > + this_cpu_or(printk_context, PRINTK_NMI_DIRECT_CONTEXT_MASK);
> > +}
>
> A side note: This nesting also handles recursive printk-s for us.
>
> NMI:
> printk_nmi_enter
> ftrace_dump
> printk_nmi_direct_enter
> vprintk_func
> spin_lock(logbuf_lock)
> vprintk_store
> vsprintf
> WARN_ON
> vprintk_func
> vprintk_nmi

Uff, it seems that the current design is "good" at least from some
points of view.

> > __printf(1, 0) int vprintk_func(const char *fmt, va_list args)
> > {
> > + /*
> > + * Try to use the main logbuf even in NMI. But avoid calling console
> > + * drivers that might have their own locks.
> > + */
> > + if ((this_cpu_read(printk_context) & PRINTK_NMI_DIRECT_CONTEXT_MASK) &&
> > + raw_spin_trylock(&logbuf_lock)) {
> > + int len;
> > +
> > + len = vprintk_store(0, LOGLEVEL_DEFAULT, NULL, 0, fmt, args);
> > + raw_spin_unlock(&logbuf_lock);
> > + defer_console();
> > + return len;
> > + }
>
> So, maybe, something a bit better than defer_console().

I am not super happy with the name either. But wakeup_console(),
schedule_console(), or queue_console() looked confusing.

I also thought about poke_console(), but I believe I already suggested
that name in the past and people did not like it.

Feel free to suggest anything.


> Otherwise,
> Acked-by: Sergey Senozhatsky <sergey.senozhatsky@xxxxxxxxx>

Anyway, thanks a lot for the review.

Best Regards,
Petr