Re: How important is it that tty_write_room doesn't lie?

From: Theodore Tso
Date: Thu Feb 24 2011 - 07:39:11 EST

On Feb 23, 2011, at 5:57 PM, Ted Ts'o wrote:

>> The FIFO can vary, but it's probably at least 2KB in size. At
>> least, we hope to be able to set it to that size in the field.
>> Currently, we set it to 4KB.
> Wow, the FIFO has gotten a lot larger than I ever remember them
> getting even when people were doing 460kbps. I'm guessing this is
> because you're trying to defer interrupts for power saving reasons,
> yes? I was used to seeing FIFO sizes more in the 32-128 bytes, tops. :-)

One more thought: if your FIFO is that large, you might want to consider
simply having the interrupt handler wake up a kernel thread, say when the
FIFO is half full (to give the scheduler time to wake up and let the kernel
thread run), and dequeue the FIFO in process context.

Or as an intermediate step, do the FIFO dequeue in a bottom-half
handler, where you'll at least be able to do memory allocations.
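The scheme above (a minimal interrupt handler that only signals a worker once the FIFO is half full, with the actual drain done in a context that may sleep and allocate) can be sketched in userspace, with pthreads standing in for the IRQ/kthread split. All names and sizes here are made up for illustration; this is not code from any real driver:

```c
#include <pthread.h>
#include <string.h>

#define FIFO_SIZE 4096

static unsigned char fifo[FIFO_SIZE];
static size_t fifo_fill;                  /* bytes currently in the FIFO */
static size_t total_drained;              /* bytes handed to the "tty layer" */
static int done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wake = PTHREAD_COND_INITIALIZER;

/* Stand-in for the hard-IRQ handler: note how little work it does. */
static void fake_irq(const unsigned char *data, size_t len)
{
    pthread_mutex_lock(&lock);
    if (len > FIFO_SIZE - fifo_fill)
        len = FIFO_SIZE - fifo_fill;      /* overflow: drop, as hardware would */
    memcpy(fifo + fifo_fill, data, len);
    fifo_fill += len;
    if (fifo_fill >= FIFO_SIZE / 2)       /* half full: kick the worker early */
        pthread_cond_signal(&wake);
    pthread_mutex_unlock(&lock);
}

/* Stand-in for the kernel thread: drains the FIFO in "process context",
 * where it could sleep, allocate memory, push into the tty layer, etc. */
static void *drain_thread(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    for (;;) {
        while (fifo_fill == 0 && !done)
            pthread_cond_wait(&wake, &lock);
        if (fifo_fill == 0 && done)
            break;
        total_drained += fifo_fill;       /* "deliver" everything queued */
        fifo_fill = 0;
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}
```

Because the worker only sleeps when the FIFO is empty, a wakeup that races with new data is never lost; the half-full threshold just gives the scheduler some headroom before the FIFO overflows.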

Don't assume that you have to empty the entire FIFO into a circular
buffer in an interrupt handler, just because that's what the 8250/16550
driver does. It was designed that way because it had very small FIFOs,
but there are other ways a serial driver can work.

One device driver I worked on, for the Comtrol Rocketport, was designed
for a very large number of ports (32-128 serial ports). It had a single
huge buffer shared between all of the ports, and once an interrupt
kicked things off, it was more efficient to keep pulling data out of that
buffer and dispatching it to the various tty ports. In practice it was
faster and more efficient never to pull the characters out in interrupt
context at all, but to use a polling-type scheme instead.

Think of it as a very early form of NAPI interrupt mitigation, but for
serial ports instead of high speed ethernet. :-)
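The shared-buffer design described above might be sketched roughly as follows: one large ring buffer holds received bytes tagged with their port number, and a polling pass drains whatever has accumulated and dispatches it per port. This is a simplified illustration, not the actual Rocketport driver; all names are invented:

```c
#include <stddef.h>

#define NPORTS 32
#define SHARED_SIZE 8192

struct rx_event {
    int port;                             /* which serial port this byte is for */
    unsigned char byte;
};

static struct rx_event shared[SHARED_SIZE];
static size_t head, tail;                 /* ring buffer indices */
static unsigned long delivered[NPORTS];   /* per-port byte counts ("tty layer") */

/* Hardware side: the board appends each received byte, tagged with its port. */
static int push_event(int port, unsigned char byte)
{
    size_t next = (tail + 1) % SHARED_SIZE;
    if (next == head)
        return -1;                        /* shared buffer full */
    shared[tail].port = port;
    shared[tail].byte = byte;
    tail = next;
    return 0;
}

/* One polling pass: drain everything currently queued, dispatching each
 * byte to its port. Returns the number of events handled, so the caller
 * can keep polling while the buffer is busy and stop when it goes idle. */
static size_t poll_once(void)
{
    size_t n = 0;
    while (head != tail) {
        delivered[shared[head].port]++;
        head = (head + 1) % SHARED_SIZE;
        n++;
    }
    return n;
}
```

The interrupt's only job in this scheme is to start the polling loop; as long as data keeps arriving across any of the ports, the loop keeps draining the shared buffer without taking further interrupts, which is exactly the NAPI-like mitigation effect.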

-- Ted
