On Sat, 01 Feb 2014 10:09:03 -0500
Peter Hurley <peter@xxxxxxxxxxxxxxxxxx> wrote:
On 01/14/2014 11:24 AM, Pavel Roskin wrote:
Hi Alan,
Quoting One Thousand Gnomes <gnomes@xxxxxxxxxxxxxxxxxxx>:
Maybe we should unset the low_latency flag as soon as DMA fails? There
are two flags: one lives in state->uart_port->flags and the other is
port->low_latency. I guess we need to unset both.
Well low latency and DMA are pretty much exclusive in the real world so
probably DMA ports shouldn't allow low_latency to be set at all in DMA
mode.
That's a useful insight. I assumed exactly the opposite.
The meaning of low_latency has migrated since 2.6.28
Not really. The meaning of low latency was always "get the turn around
time for command/response protocols down as low as possible". DMA-driven
serial usually reports transfer completion on a watermark or a timeout,
so it tends to work very badly within the Linux definition of 'low
latency' for tty.
What it does has certainly changed, but that's implementation detail.
Perhaps we should unconditionally unset low_latency (or remove it entirely).
Real low latency can be addressed by using the -RT kernel.
Just saying "use -RT" would be a regression and would actually hurt quite
a few tools that use annoying "simple protocol" designs in all sorts of
control systems. We are talking about milliseconds, not microseconds, here.
The expected behaviour in low_latency is probably best described as
data arrives
processed
wakeup
and to avoid the case of
data arrives
queued for back end
[up to 10 ms delay, but typically 1-2 ms]
processed
wakeup
which, multiplied over a 50,000 S-record download, is a lot of time
(at 1-2 ms per record turn-around that is an extra 50-100 seconds)
Everything else is not user visible so can be changed freely to get that
assumption to work (including ending up not needing it in the first
place).
Getting tty to the point where everything but N_TTY canonical mode is a
fast path would probably eliminate the need nicely - I don't know of any
use cases that expect ICANON, ECHO, or I*/O* processing together with low
latency.