Re: [PATCH v2 0/6] atmel_serial: Cleanups, irq handler splitup & DMA

From: Remy Bohmer
Date: Wed Dec 19 2007 - 11:41:03 EST


Hello Haavard,

> Could you try the patch below? It's a bit strange that you got an oops
> though...

It is not really strange... spinlocks are mutexes on preempt-rt, and
recursive mutex locking is not allowed; this is one of the differences
from the mainline spinlock.
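
For illustration, a minimal sketch of what goes wrong (demo_lock and
demo_recursion are made-up names, not from the driver):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(demo_lock);

static void demo_recursion(void)
{
	spin_lock(&demo_lock);

	/*
	 * On a mainline UP kernel this second spin_lock() is little more
	 * than preempt_disable(), so the recursion goes unnoticed.  On
	 * preempt-rt the spinlock is backed by an rt_mutex, and taking a
	 * lock the current task already owns trips the deadlock detector,
	 * hence the oops.
	 */
	spin_lock(&demo_lock);

	spin_unlock(&demo_lock);
	spin_unlock(&demo_lock);
}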

But... I tried that patch, and it works a lot better: no oopses
anymore. However, I noticed that I sometimes get an input overrun
("ttyS0: 1 input overrun(s)") under stress conditions.
This is something I did not notice before. Maybe it was already there,
or has something changed in this area that makes it more sensitive to
this?

Kind Regards,

Remy

2007/12/19, Haavard Skinnemoen <hskinnemoen@xxxxxxxxx>:
> On Wed, 19 Dec 2007 16:57:04 +0100
> "Remy Bohmer" <linux@xxxxxxxxxx> wrote:
>
> > Hello Haavard,
> >
> > Sorry.. But I get an Oops on Preempt-RT with the latest set of
> > patches. I did not see it earlier today with the other set of patches.
>
> Hmm...from the backtrace, it looks like lock recursion -- port->lock is
> held for the whole duration of the tasklet, but we somehow end up in
> uart_start(), which grabs the lock again.
>
> Could you try the patch below? It's a bit strange that you got an oops
> though...
>
> Haavard
>
> diff --git a/drivers/serial/atmel_serial.c b/drivers/serial/atmel_serial.c
> index 7967054..948c643 100644
> --- a/drivers/serial/atmel_serial.c
> +++ b/drivers/serial/atmel_serial.c
> @@ -666,7 +666,13 @@ static void atmel_rx_from_ring(struct uart_port *port)
>  		uart_insert_char(port, status, ATMEL_US_OVRE, c.ch, flg);
>  	}
>
> +	/*
> +	 * Drop the lock here since it might end up calling
> +	 * uart_start(), which takes the lock.
> +	 */
> +	spin_unlock(&port->lock);
>  	tty_flip_buffer_push(port->info->tty);
> +	spin_lock(&port->lock);
>  }
>
>  static void atmel_rx_from_dma(struct uart_port *port)
>
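
For reference, the pattern the patch above applies, as I read it:
tty_flip_buffer_push() can wander back into serial_core (the backtrace
pointed at uart_start(), which grabs port->lock again), so the call has
to be made with port->lock dropped. A sketch of that pattern, pulled
out into a helper (push_chars_unlocked() is a made-up name, not
something in the driver; the caller is assumed to hold port->lock):

#include <linux/serial_core.h>
#include <linux/tty_flip.h>

static void push_chars_unlocked(struct uart_port *port)
{
	/*
	 * tty_flip_buffer_push() may end up in uart_start(), which takes
	 * port->lock again.  On preempt-rt that second acquisition is
	 * recursive mutex locking and oopses, so release the lock around
	 * the call and re-take it afterwards.
	 */
	spin_unlock(&port->lock);
	tty_flip_buffer_push(port->info->tty);
	spin_lock(&port->lock);
}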