Re: [CHECKER] inconsistent NFS stat cache (NFS on ext3, 2.6.11)
From: Trond Myklebust
Date: Sun Mar 13 2005 - 19:52:09 EST
On Sunday 13.03.2005 at 19:35 (-0500), Daniel Jacobowitz wrote:
> On Sun, Mar 13, 2005 at 03:42:29PM -0500, Trond Myklebust wrote:
> > On Sunday 13.03.2005 at 15:04 (-0500), Daniel Jacobowitz wrote:
> > > I can't find any documentation about this, but it seems like the same
> > > problem that has been causing me headaches lately; when I replace glibc
> > > from the server side of an nfsroot, the client gets a few incorrect
> > > reads before it sees the new files. If it breaks NFS
> > > so badly, why is it the default for the Linux NFS server?
> > No, that's a very different issue: you are violating the NFS cache
> > consistency rules if you are changing a file that is being held open by
> > other machines.
> > The correct way to do the above is to use GNU install with the '-b'
> > option: that will rename the version of glibc that is in use, and then
> > install the new glibc in a different inode.
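
To illustrate that '-b' approach concretely, here is a rough userspace
sketch of the backup-then-replace pattern; the paths, the data buffer and
the replace_with_backup() helper name are made up for the example, not
anything GNU install actually exposes:

/*
 * Sketch of the backup-then-replace pattern that "install -b" performs:
 * the in-use file keeps its inode (and therefore its NFS filehandle)
 * under a backup name, and the replacement is created as a brand-new
 * inode.  Paths and the helper name are hypothetical.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int replace_with_backup(const char *target, const void *data, size_t len)
{
    char backup[4096];
    int fd;

    /* Step 1: rename the old file.  Clients that still have it open keep
     * a valid filehandle, because the inode itself is untouched. */
    snprintf(backup, sizeof(backup), "%s~", target);
    if (rename(target, backup) != 0 && errno != ENOENT)
        return -1;

    /* Step 2: create the new file under the original name.  O_EXCL makes
     * sure this really is a fresh inode, not a rewrite of the old one. */
    fd = open(target, O_WRONLY | O_CREAT | O_EXCL, 0755);
    if (fd < 0)
        return -1;
    if (write(fd, data, len) != (ssize_t)len) {
        close(fd);
        return -1;
    }
    return close(fd);
}

The point is simply that the old inode is never reused, so a client that is
still reading the old library never sees its data change underneath it.
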
> [closed and/or irrelevant lists removed from CC:]
> No, the copy of glibc in question is not in use at the time. The next
> attempt to open it on the client will sometimes generate a "stale NFS
> handle" message, or if the open succeeds a read will sometimes return
> EIO. But it sounds like this is a different problem than the original
> poster was testing for.
Sorry, but you should _never_ have gotten an ESTALE error if the file
was not in use when you deleted the old copy of glibc. A fresh call to
open() will always result in a new lookup of the filehandle.
What may have happened in the case of the EIO error is that you raced:
i.e. the client started reading the file while it was still being copied
on the server.
As for why subtree_check is still the default for knfsd, you will want to
ask Neil Brown; he is the NFS server maintainer.
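
For reference, subtree_check is a per-export option in /etc/exports; the
export path and client network below are made-up examples:

# /etc/exports -- hypothetical nfsroot export with subtree checking disabled
/export/nfsroot   192.168.1.0/24(rw,sync,no_subtree_check)

With subtree_check, knfsd verifies on every access that the file still lies
within the exported subtree, which makes filehandles more fragile across
renames; no_subtree_check disables that extra verification.
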
Trond Myklebust <trond.myklebust@xxxxxxxxxx>