> So I would assume the author's claim means that this
> locking technique `works better over NFS on the systems I tested it on.'
Well, since (as far as I know) I'm the one who first invented that scheme,
I can indeed vouch for that. The difference between the actual scheme
used by exim (as described above) and the scheme used by me is that the
return code of the link() call is used to avoid relying on the
stat() call in the most common cases.
However...
>There is nothing in the NFS spec that requires the client to update its
>cached link count[*].
I think you're wrong here. NFS, when used from a UNIX host, is supposed to
map UNIX filesystem semantics onto the NFS filesystem and back.
If, after a link() call, the attributes still showed only one hardlink
due to caching, then your caching mechanism is clearly broken.
The very least one could expect is that the hardlink count be
incremented in the cached copy. This should, however, only be done
*if* the link() returned success. If the link() returns failure,
the attribute cache *must* be flushed (a consistent view of the
filesystem cannot be guaranteed otherwise).
>[*] The NFS spec is deliberately vague about how and to which extent
>the client caches information. Vague being an overstatement here.
>For those interested, the bare-bones protocol spec is available as
>RFC 1094. There's also a (fairly expensive) specification available
The actual NFS specs aren't even that important at this point. The mere
fact that you're trying to present UNIX filesystem semantics already
dictates when the cache needs to be updated or flushed.
--
Sincerely, srb@cuci.nl
Stephen R. van den Berg (AKA BuGless).
Time is nature's way of making sure everything doesn't happen at once.