I can't give you a reference to a paper about this (perhaps somebody else can?), but it is quite obvious that in a distributed environment where shared data is changed, you always have to trade performance against coherency. To avoid locking all over the place, with a dramatic performance decrease (asking the server for permission before every use of a cached page), one can do optimistic concurrency control with transaction logs that get ordered at the server. This inevitably produces data inconsistencies which the server then has to resolve. In typical Unix environments this isn't a big problem, since locking has to be done at a higher level anyway.
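To make the idea concrete, here is a minimal sketch in Python of version-based optimistic concurrency control; all names in it (Server, commit, ConflictError, and so on) are made up for illustration, not from any real system. The point is that the server validates at commit time instead of the client locking at read time:

    # Minimal sketch: optimistic concurrency control via version checks.
    # Hypothetical names throughout -- this is not a real protocol impl.

    class ConflictError(Exception):
        pass

    class Server:
        """Holds the authoritative copy of each page plus a version counter."""
        def __init__(self):
            self.pages = {}  # key -> (version, data)

        def read(self, key):
            # Clients cache the version number along with the data.
            return self.pages.get(key, (0, None))

        def commit(self, key, expected_version, new_data):
            # Validate at commit time instead of locking at read time.
            current_version, _ = self.pages.get(key, (0, None))
            if current_version != expected_version:
                # Someone else changed the page since we cached it;
                # the client must re-read and retry its transaction.
                raise ConflictError("page %r: expected v%d, server has v%d"
                                    % (key, expected_version, current_version))
            self.pages[key] = (current_version + 1, new_data)

    # Usage: two clients race on the same cached page.
    server = Server()
    server.pages["config"] = (1, "a=1")

    v, data = server.read("config")        # client A caches v1
    server.commit("config", v, "a=2")      # A commits first -> now v2

    try:
        server.commit("config", v, "a=3")  # client B still holds stale v1
    except ConflictError as e:
        print("retry needed:", e)          # B re-reads and tries again

The losing client's retry is exactly the inconsistency the server has to sort out; you pay for it only when there actually is a conflict, instead of paying for a lock round-trip on every access.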
Oh, by the way, I just found a very good paper about this:
Methods and Models for Management of Distributed and Persistent Data
Michael J. Feeley, January 20, 1995
http://www.cs.washington.edu/homes/feeley/generals/DataManagement/DataManagement.html
It covers the different methods for solving the various problems of distributed computing, including commit protocols, concurrency control, and maintaining the consistency of replicated data.
Greetings
Bernd
--
  (OO)    -- Bernd_Eckenfels@Wittumstrasse13.76646Bruchsal.de --
 ( .. )   ecki@lina.{inka.de,ka.sub.org}  http://home.pages.de/~eckes/
  o--o    *plush*  2048/93600EFD  eckes@irc  +4972573817  *plush*
(O____O)  If privacy is outlawed only Outlaws have privacy