Re: msync() behaviour broken for MS_ASYNC, revert patch?

From: Trond Myklebust
Date: Fri Feb 10 2006 - 18:01:18 EST

On Fri, 2006-02-10 at 14:46 -0800, Linus Torvalds wrote:
> On Fri, 10 Feb 2006, Trond Myklebust wrote:
> >
> > The Single Unix Spec appears to have a very different interpretation.
> Hmm. Very different wording, but same meaning, I think.
> > When MS_INVALIDATE is specified, msync() shall invalidate all
> > cached copies of mapped data that are inconsistent with the
> > permanent storage locations such that subsequent references
> > shall obtain data that was consistent with the permanent storage
> > locations sometime between the call to msync() and the first
> > subsequent memory reference to the data.
> Again, this says that the _mapping_ is invalidated, and should match
> persistent storage.
> Any dirty bits in the mapping (ie anything that hasn't been msync'ed)
> should be made persistent with permanent storage. Again, that is entirely
> consistent with just throwing the mmap'ed page away (dirty state and all)
> in a non-coherent environment.
> I don't think we really have any modern Unixes with non-coherent mmap's
> (although HP-UX used to be that way for a _loong_ time). But in the
> timeframe that was written, it was probably still an issue.
> Now, in a _coherent_ environment (like Linux) it should probably be a
> no-op, since the mapping is always consistent with storage (where
> "storage" doesn't actyally mean "disk", but the virtual file underneath
> the mapping).

Hmmm.... When talking about syncing to _permanent_ storage one usually
is talking about what is actually on the disk. In any case, we do have
non-coherent mmapped environments in Linux (need I mention NFS,
CIFS, ... ;-)?).

IIRC, msync(MS_INVALIDATE) on Solaris was often used by applications to
resync the client page cache with the server when using odd locking
schemes, so I believe this interpretation is a valid one.

