Re: NFS locking bug -- limited mtime resolution means nfs_lock() does not provide coherency guarantee

From: Trond Myklebust (trond.myklebust@fys.uio.no)
Date: Fri Sep 15 2000 - 12:29:24 EST


>>>>> " " == James Yarbrough <jmy@engr.sgi.com> writes:

> What is done for bypassing the cache when the size of a file
> lock held by the reading/writing process is not a multiple of
> the caching granularity? Consider two different clients with
> processes sharing a file and locking 2k byte regions of the
> file and possibly updating these regions. Suppose that each
> system caches 4k byte blocks. If system A locks the first 2k
> of a block and system B locks the second 2k, the updates from
> one of the systems may be lost if these systems cache the
> writes. This is because each system will write back the 4k
> block it cached, not the 2k block that was locked.

Under Linux, writebacks have byte-sized granularity. If a page has been
partially dirtied, we record that information and write back only the
dirty areas. As long as each system restricts its updates to the 2k
region it has locked, there should be no conflict.

If, however, one system has been writing over the full 4k block, then
there will indeed be a race.

Cheers,
  Trond



This archive was generated by hypermail 2b29 : Fri Sep 15 2000 - 21:00:26 EST