Re: write() on pipe blocking due to in-page fragmentation?
From: Chris Friesen
Date: Fri Sep 23 2011 - 18:04:02 EST
On 09/23/2011 01:42 PM, Ricardo Nabinger Sanchez wrote:
> Hello,
> The simple program attached allocates a pipe, performs a number of
> writes to it in order to fill the pipe, and then reads that data to
> empty the pipe. The argument determines how much data to write per
> write iteration.
> Values that are powers of 2 up to PIPE_BUF work without any issues.
> Other values may cause the write() call to block.
> Intuitively, it seems that pages in the pipe are getting fragmented;
> eventually the pipe reaches its limit of 16 pages and, if the data is
> not consumed, writers block even though the data would fit nicely
> otherwise.
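The attachment isn't reproduced in this archive; a minimal sketch of a
test along these lines might look like the following (the chunk-size
handling is an assumption, and O_NONBLOCK is added here so the fill
loop reports how many bytes fit instead of blocking as the original
reportedly does):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	int fds[2];
	char buf[4096];
	size_t chunk = argc > 1 ? (size_t)atoi(argv[1]) : 3;
	size_t total = 0;
	ssize_t n;

	if (pipe(fds) < 0) {
		perror("pipe");
		return 1;
	}
	/* non-blocking writes: a "full" pipe yields EAGAIN, not a hang */
	fcntl(fds[1], F_SETFL, O_NONBLOCK);
	memset(buf, 'x', sizeof(buf));

	/* fill the pipe with chunk-sized writes */
	while ((n = write(fds[1], buf, chunk)) > 0)
		total += (size_t)n;
	printf("pipe accepted %zu bytes in %zu-byte writes\n", total, chunk);

	/* drain the pipe */
	while ((n = read(fds[0], buf, sizeof(buf))) > 0)
		;
	return 0;
}

If the analysis below is right, this should report 65536 for
power-of-two chunk sizes and 65520 for 3-byte writes.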
I suggest reading "man 7 pipe" carefully, looking at the "Pipe
capacity" and "PIPE_BUF" sections.
I suspect that what you're seeing is that due to the atomicity
requirements the kernel will not spread a single write over multiple
pages, so that when writing 3 bytes at a time each page in the queue has
a byte of free space.
Thus, you succeed in writing up to byte 65520 (out of 65536), but
anything after that blocks. Note that 65536 - 65520 = 16.
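Spelled out, assuming 4096-byte pages and the 16-page ring buffer: a
page holds floor(4096 / 3) * 3 = 4095 bytes of 3-byte writes, leaving
1 byte free per page, so the pipe accepts 16 * 4095 = 65520 bytes and
the remaining 16 bytes are the stranded per-page gaps.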
> Is this understanding correct? If so, is it something that should be
> fixed in the Linux kernel?
> Or should the application ensure that data is written to the pipe
> carefully so as not to block a writer?
From the man page:
"Applications should not rely on a particular capacity: an application
should be designed so that a reading process consumes data as soon as it
is available, so that a writing process does not remain blocked."
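As a sketch of that advice (the structure here is an assumption, not
the original test): hand the read end to a dedicated reader that
consumes data as soon as it arrives, and the writer never blocks on a
full pipe no matter what write size it uses.

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	int fds[2];
	char buf[4096];

	if (pipe(fds) < 0) {
		perror("pipe");
		return 1;
	}

	if (fork() == 0) {	/* child: the reader */
		close(fds[1]);
		while (read(fds[0], buf, sizeof(buf)) > 0)
			;	/* consume data as soon as it is available */
		_exit(0);
	}

	/* parent: the writer; small odd-sized writes cannot wedge it */
	close(fds[0]);
	memset(buf, 'x', 3);
	for (int i = 0; i < 100000; i++)
		write(fds[1], buf, 3);
	close(fds[1]);		/* signals EOF to the reader */
	wait(NULL);
	return 0;
}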
Chris
--
Chris Friesen
Software Developer
GENBAND
chris.friesen@xxxxxxxxxxx
www.genband.com