Re: PROBLEM: Data corruption when pasting large data to terminal

From: Egmont Koblinger
Date: Sun Feb 19 2012 - 15:56:14 EST


Hi Bruno,

Unfortunately the lost tail is a different thing: the terminal is in
cooked (canonical) mode by default, so the kernel intentionally holds
data in its buffer until it sees a complete line. A quick-and-dirty way
of switching to byte-based (raw) transmission (I'm too lazy to look up
the actual system calls, apologies for the terribly ugly way of doing
this) is:
pty = open(ptsdname, O_RDWR);
if (pty == -1) { ... }
+ char cmd[100];
+ sprintf(cmd, "stty raw <>%s", ptsdname);
+ system(cmd);
ptmx_slave_test(pty, line, rsz);

Anyway, thanks very much for your test program, I'll try to modify it
to trigger the data corruption bug.


egmont

On Fri, Feb 17, 2012 at 22:57, Bruno Prémont <bonbons@xxxxxxxxxxxxxxxxx> wrote:
> Hi,
>
> On Fri, 17 February 2012 Pavel Machek <pavel@xxxxxx> wrote:
>> > > Sorry, I didn't emphasize the point that makes me suspect it's a kernel issue:
>> > >
>> > > - strace reveals that the terminal emulator writes the correct data
>> > > into /dev/ptmx, and the kernel reports no short writes(!), all the
>> > > write(..., ..., 68) calls actually return 68 (the length of the
>> > > example file's lines incl. newline; I'm naively assuming I can trust
>> > > strace here.)
>> > > - strace reveals that the receiving application (bash) doesn't receive
>> > > all the data from /dev/pts/N.
>> > > - so: the data gets lost after writing to /dev/ptmx, but before
>> > > reading it out from /dev/pts/N.
>> >
>> > Which it will, if the reader doesn't read fast enough, right? Is the
>> > data somewhere guaranteed to never "overrun" the buffer? If so, how do
>> > we handle not just running out of memory?
>>
>> Start blocking the writer?
>
> I quickly wrote a small test program (attached). It forks a reader child
> and sends data over to it; at the end each writes its copy of the buffer
> to /tmp/ptmx_{in,out}.txt for manual comparison of the results (in addition
> to a basic report of the line where the mismatch starts).
>
> From the time it took the writer to write larger buffers (as seen using strace)
> it seems there *is* some kind of blocking, but it does not block long enough,
> or unblocks too early, when the reader does not keep up.
>
>
> For quick and dirty testing of the effects of buffer sizes, tune "rsz", "wsz"
> and "line" in main(), as well as the total size via the BUFF_SZ define.
>
>
> The effect for me is that the writer writes all the data but the reader never
> sees the tail of it (how much is seen varies, probably a matter of scheduling,
> frequency scaling and similar racing factors).
>
> My test system is a single-core Centrino laptop (32-bit x86) running a
> 3.2.5 kernel.
>
> Bruno
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/