do_loop_readv_writev() not as described for drivers implementing only write()?

From: Mike McTernan
Date: Tue Mar 02 2010 - 08:10:37 EST


I'm using writev() with an old FPGA driver which only implements
write(), not aio_write(). I'm expecting the behaviour described in the
man page for writev():

"The data transfers performed by readv() and writev() are atomic: the
data written by writev() is written as a single block that is not
intermingled with output from writes in other processes (but see pipe(7)
for an exception); analogously, readv() is guaranteed to read a
contiguous block of data from the file, regardless of read operations
performed in other threads or processes that have file descriptors
referring to the same open file description (see open(2))."

I appear to be observing intermingling of individual iovec entries
written to the same fd from different threads, i.e. each call to
writev() isn't producing a contiguous block of output. This is at odds
with the man page description.
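
To illustrate, here's a minimal version of my test case (a sketch only;
/dev/fpga0 is a placeholder for the real device node; build with
-pthread). Each thread issues writev() calls with two iovec entries, so
if writev() were atomic, every 128-byte block arriving at the device
would contain only one thread's byte value:

#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

static int fd;

static void *writer(void *arg)
{
    char c = *(char *)arg;
    char a[64], b[64];
    struct iovec iov[2] = {
        { .iov_base = a, .iov_len = sizeof(a) },
        { .iov_base = b, .iov_len = sizeof(b) },
    };
    int i;

    memset(a, c, sizeof(a));
    memset(b, c, sizeof(b));

    /* Each writev() should emit 128 contiguous bytes of 'c'. */
    for (i = 0; i < 1000; i++)
        writev(fd, iov, 2);

    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    fd = open("/dev/fpga0", O_WRONLY);  /* placeholder device node */
    if (fd < 0)
        return 1;

    pthread_create(&t1, NULL, writer, "A");
    pthread_create(&t2, NULL, writer, "B");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    close(fd);
    return 0;
}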

Looking into the kernel sources (from around 2.6.28 to 2.6.33), the
driver doesn't implement aio_write(), so vfs_writev() gets handled by
do_loop_readv_writev() as a series of discrete calls to the driver's
write() function, one per iovec entry.
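
For reference, the loop looks roughly like this (paraphrased from
fs/read_write.c in 2.6.33, trimmed for clarity):

static ssize_t do_loop_readv_writev(struct file *filp, struct iovec *iov,
        unsigned long nr_segs, loff_t *ppos, io_fn_t fn)
{
    ssize_t ret = 0;

    while (nr_segs > 0) {
        ssize_t nr;

        /* One discrete driver call per iovec entry. */
        nr = fn(filp, iov->iov_base, iov->iov_len, ppos);
        if (nr < 0) {
            if (!ret)
                ret = nr;
            break;
        }
        ret += nr;
        if (nr != iov->iov_len)     /* short write: stop */
            break;
        iov++;
        nr_segs--;
    }
    return ret;
}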

I can't see where any locking is applied to ensure the whole iovec
array is handled serially without intermingling; such locking would
awkwardly have to live outside the driver in this case.

Hunting around, I found various good articles on writev() and the aio
stuff, e.g.

But nowhere can I find whether it is expected behaviour that
writev()/readv() for a driver which only implements write()/read() is
actually non-atomic. Lots of sources state the atomicity of these
calls, though.

Have I overlooked some good docs, or some locking hidden in the vfs
layer?

As an aside, I'm working to update the driver to implement aio_write()
so it can apply its own locking, making writev() atomic as seen from
userspace.
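
Something like this minimal sketch is the direction I'm taking
(hypothetical names throughout: fpga_dev, fpga_push_bytes(); the latter
stands in for the guts of the existing write() method):

#include <linux/fs.h>
#include <linux/mutex.h>
#include <linux/uio.h>

/* Hypothetical per-device state; only the lock matters here. */
struct fpga_dev {
    struct mutex write_lock;
    /* ... hardware state ... */
};

/* Hypothetical low-level transfer routine. */
static ssize_t fpga_push_bytes(struct fpga_dev *dev,
                               const void __user *buf, size_t len);

static ssize_t fpga_aio_write(struct kiocb *iocb, const struct iovec *iov,
                              unsigned long nr_segs, loff_t pos)
{
    struct fpga_dev *dev = iocb->ki_filp->private_data;
    ssize_t ret = 0;
    unsigned long seg;

    /* Hold the lock across the whole iovec array so one writev()
     * can't be intermingled with another writer's output. */
    mutex_lock(&dev->write_lock);
    for (seg = 0; seg < nr_segs; seg++) {
        ssize_t nr = fpga_push_bytes(dev, iov[seg].iov_base,
                                     iov[seg].iov_len);
        if (nr < 0) {
            if (!ret)
                ret = nr;
            break;
        }
        ret += nr;
        if (nr != iov[seg].iov_len)
            break;
    }
    mutex_unlock(&dev->write_lock);

    return ret;
}

Holding the mutex across the whole array, rather than per segment, is
what should make each writev() come out contiguous.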

Kind Regards,

Mike