In 2.0, NFS writes are synchronous, so the issue doesn't arise. The
current situation with 2.1.x is this:
o NFSv2 writes are done partially asynchronously, and the write
  doesn't appear to wait for all the pages to be written before
  returning, as it did in 2.0.
  It should either wait synchronously for write completion or support
  EWFLUSH for NFSv2. Only NFSv3 has real asynchronous NFS writes,
  because it has a two-phase commit (see the first sketch after this
  list).
o Mount operations go via the hacked RPC calls. This means a single
  attempt to TCP-mount a non-responding host prevents you from
  mounting another file system for about 30 minutes. Ditto lockd. If
  it connects and the other end doesn't reply, you are doomed: hit
  the reset button and fsck all your file systems. Hence the
  rpc_execute() hack has to go (see the second sketch after this
  list).
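
To illustrate the two-phase commit point: under NFSv3 the client sends
WRITEs marked UNSTABLE, which the server may merely cache, and later
sends a COMMIT; the server returns a write verifier both times, and if
the verifier has changed the server lost the cached data and the writes
must be retransmitted. A rough sketch follows - nfs3_write_unstable()
and nfs3_commit() are made-up stand-ins, not the real RPC layer:

/*
 * Sketch only: the nfs3_* helpers are hypothetical stand-ins for the
 * real RPC calls.  One UNSTABLE write followed by a COMMIT of the same
 * range; a verifier mismatch means the server rebooted and dropped the
 * cached data, so the client must send the writes again.
 */
#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>

/* hypothetical: one UNSTABLE WRITE RPC, returns the server's verifier */
extern uint64_t nfs3_write_unstable(int fh, const void *buf,
                                    size_t len, off_t off);
/* hypothetical: COMMIT covering [off, off + len), returns the verifier */
extern uint64_t nfs3_commit(int fh, off_t off, size_t len);

int nfs3_flush_range(int fh, const void *buf, size_t len, off_t off)
{
        uint64_t verf = nfs3_write_unstable(fh, buf, len, off);

        if (nfs3_commit(fh, off, len) != verf)
                return -1;      /* retransmit the writes, commit again */
        return 0;
}

NFSv2 has no COMMIT/verifier step at all, which is why its writes can
only be made safe by completing them synchronously.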
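
On the mount side, the behaviour wanted is a bounded, per-call connect
rather than a sleep inside rpc_execute() that wedges every other mount
behind it. Purely as an illustration, in ordinary user-space socket
terms (the real fix has to live in the kernel RPC client, so this is a
sketch of the behaviour, not of the code that belongs there):

/*
 * Sketch: a TCP connect that gives up after 'secs' seconds instead of
 * blocking indefinitely on a dead or unreachable host.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/select.h>
#include <netinet/in.h>
#include <fcntl.h>
#include <errno.h>

int connect_with_timeout(int sock, struct sockaddr_in *addr, int secs)
{
        struct timeval tv = { secs, 0 };
        fd_set wfds;
        int err;
        socklen_t len = sizeof(err);

        fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

        if (connect(sock, (struct sockaddr *) addr, sizeof(*addr)) == 0)
                return 0;               /* connected immediately */
        if (errno != EINPROGRESS)
                return -1;

        FD_ZERO(&wfds);
        FD_SET(sock, &wfds);
        if (select(sock + 1, NULL, &wfds, NULL, &tv) <= 0)
                return -1;              /* timed out or select failed */

        /* the connect finished: find out whether it succeeded */
        if (getsockopt(sock, SOL_SOCKET, SO_ERROR, &err, &len) < 0 || err)
                return -1;
        return 0;
}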
Apart from that:
o TCP write callbacks appear to be broken (hunting that)
o Some people see data corruption and other errors
o There is a trivial uname race that's not terribly important
o We handle file credentials wrongly in the client. We pass the
  current fsuid of the writing process, not the fsuid that was in
  force when the handle was created - that's what causes all the
  horrible setuid mess (ditto 2.0). See the credential sketch below.
o We appear to dispatch page-sized writes. We have to dispatch 8K
  writes to an 8K-page-sized BSD NFS server, otherwise performance
  will be complete and utter shite. 2.0 gets this right, although the
  synchronous writes slow it down. See the coalescing sketch below.
I can't see how to fix the last two items cleanly with the page cache
right now. I'm sure it is possible to do cleanly.
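
On the credential item, what the client wants is roughly this: capture
the fsuid/fsgid in force when the handle is created and stamp every
later request with that, never with whatever the writing process is
currently running as. The structures and functions below are made up
for illustration - none of this is the real VFS or RPC interface:

/*
 * Sketch only: made-up types.  The credential is captured once, when
 * the file handle is set up, and reused for every later request on it,
 * so a setuid helper writing through an inherited descriptor still
 * uses the opener's identity.
 */
#include <sys/types.h>

struct nfs_cred {
        uid_t fsuid;            /* fsuid at handle-creation time */
        gid_t fsgid;
};

struct nfs_file {
        int handle;             /* stand-in for the NFS file handle */
        struct nfs_cred cred;   /* captured once, here */
};

void nfs_file_init(struct nfs_file *f, int handle, uid_t fsuid, gid_t fsgid)
{
        f->handle = handle;
        f->cred.fsuid = fsuid;
        f->cred.fsgid = fsgid;
}

/* hypothetical RPC stub: the write goes out with f->cred, never with
   the current process's fsuid */
extern int nfs_rpc_write(int handle, const struct nfs_cred *cred,
                         const void *buf, size_t len, off_t off);

int nfs_file_write(struct nfs_file *f, const void *buf, size_t len, off_t off)
{
        return nfs_rpc_write(f->handle, &f->cred, buf, len, off);
}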
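
For the write-size item, the gathering itself is easy to describe:
collect adjacent dirty data until there is a full wsize (8K against an
8K-page BSD server) and only then issue the WRITE. The hard part is
doing that through the page cache, as said above. A toy sketch with a
hypothetical nfs_send_write() stub, just to pin down the intended
behaviour:

/*
 * Sketch: coalesce contiguous PAGE_SIZE chunks into wsize-sized WRITE
 * calls instead of one RPC per page.  nfs_send_write() is hypothetical;
 * the page cache plumbing is the part that is actually hard.
 */
#include <stddef.h>
#include <sys/types.h>

#define PAGE_SIZE 4096
#define NFS_WSIZE 8192          /* negotiated write size */

/* hypothetical RPC stub: one WRITE of 'len' bytes at offset 'off' */
extern int nfs_send_write(int handle, const void *buf, size_t len, off_t off);

/* flush 'npages' contiguous dirty pages starting at file offset 'off' */
int nfs_flush_pages(int handle, const void *pages, int npages, off_t off)
{
        const char *p = pages;
        size_t left = (size_t) npages * PAGE_SIZE;

        while (left) {
                /* a full wsize chunk when we have one, the tail otherwise */
                size_t chunk = left > NFS_WSIZE ? NFS_WSIZE : left;

                if (nfs_send_write(handle, p, chunk, off) < 0)
                        return -1;
                p += chunk;
                off += chunk;
                left -= chunk;
        }
        return 0;
}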
As to the write() behaviour on disk full: NFS is not, and never has
been, even a vaguely POSIX-compliant file system. It's a strange hack
that happens to be surprisingly portable and did the job. It's one of
those dreadful "80% is good enough" solutions.
Alan