Re: [RESEND] [PATCH] VFS: make file->f_pos access atomic on 32bit arch
From: Eric Dumazet
Date: Wed Oct 08 2008 - 04:36:18 EST
Peter Zijlstra wrote:
On Wed, 2008-10-08 at 13:48 +0900, Hisashi Hifumi wrote:
Simultaneous access by two or more writers can corrupt file content,
so this case needs some locking (flock or fcntl) to preserve synchronization
of the file content. That is the responsibility of the user-space application.
But the file->f_pos race can occur even if multiple threads just read
simultaneously. I think this is not the responsibility of the user-space application.
To avoid it currently, an application needs some locking to protect the file offset
even if it only reads the file. So I think the f_pos race should be fixed.
Just to add to all those who already said you're wrong :-)
You're wrong. If two threads want to read the same file, they should
either dup() the fd or open() the file twice. There is absolutely no
valid reason to have two threads read from the same fd without
synchronising their access to it - never.
About the dup() syscall: it won't help, since the old and new descriptors point to
the same "struct file", definitely sharing the file position, as they have since the first Unixes.
To quote the fine manual:
After successful return of dup or dup2, the old and new descriptors may
be used interchangeably. They share locks, file position pointers and
flags; for example, if the file position is modified by using lseek on
one of the descriptors, the position is also changed for the other.
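
A minimal user-space sketch of this behaviour (assuming a readable file at the
hypothetical path /etc/hostname): the dup()'ed descriptor shares the offset,
while a second open() of the same path gets its own.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd1 = open("/etc/hostname", O_RDONLY);
        int fd2 = dup(fd1);                        /* shares the same struct file */
        int fd3 = open("/etc/hostname", O_RDONLY); /* gets its own struct file    */

        lseek(fd1, 5, SEEK_SET);

        /* the lseek on fd1 moved fd2's position too, but not fd3's */
        printf("dup'ed fd offset     : %ld\n", (long)lseek(fd2, 0, SEEK_CUR)); /* prints 5 */
        printf("second open() offset : %ld\n", (long)lseek(fd3, 0, SEEK_CUR)); /* prints 0 */

        close(fd1);
        close(fd2);
        close(fd3);
        return 0;
}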
pread()/pwrite() are used by multi-threaded applications that want to share
a single "struct file". Otherwise they must use some form of synchronization around
regular read()/write()/lseek() calls.
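
For example, a small sketch (assuming a readable test file at the hypothetical
path /etc/services): each thread passes its own offset to pread(), so the shared
f_pos is never read or written.

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int fd;  /* one struct file shared by both threads */

static void *reader(void *arg)
{
        off_t off = (off_t)(long)arg;
        char buf[64];

        /* pread() takes an explicit offset and never touches f_pos */
        ssize_t n = pread(fd, buf, sizeof(buf), off);
        printf("read %zd bytes at offset %ld\n", n, (long)off);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        fd = open("/etc/services", O_RDONLY);
        pthread_create(&t1, NULL, reader, (void *)0L);
        pthread_create(&t2, NULL, reader, (void *)64L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        close(fd);
        return 0;
}

(Compile with -lpthread.)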
There is no generic f_pos race, only buggy applications.
A far more interesting problem is the "tail -f logfile" problem raised
recently on lkml, when the file is NFS mounted and the reader can get nul bytes...
(Subject: blocks of zeros (NULLs) in NFS files in kernels >= 2.6.20)