Re: 2.6.18-rc3-git3 - XFS - BUG: unable to handle kernel NULL pointer dereference at virtual address 00000078

From: Nathan Scott
Date: Tue Aug 15 2006 - 05:01:37 EST

On Mon, Aug 14, 2006 at 09:49:10AM +0200, Jesper Juhl wrote:
> On 14/08/06, Nathan Scott <nathans@xxxxxxx> wrote:
> > > LEAFN node level is 1 inode 412035424 bno = 8388608
> >
> > Ooh. Can you describe this test case you're using?
> ...
> These filesystems vary in size from 50G to 3.5T
> The XFS filesystems contain rsync copies of filesystems on as many servers.
> The workload that triggers the problem is when all the servers start
> updating their copies via rsync - then within a few hours the problem
> triggers.
> So to recreate the same scenario you'll want 28 servers doing rsync of
> filesystems of various sizes between 50G & 3.5T to a central server
> running 2.6.18-rc4 with 28 XFS filesystems.
> ...
> There are millions of files. The data the server receives is copies of
> websites. Each server that sends data to the server with the 28 XFS
> filesystems hosts between 1800 and 2600 websites, so there are lots of
> files and every conceivable strange filename.

Wow, a special kind of hell for a filesystem developer...! ;-)

It's not clear to me where the rename operation happens in all of
this - does rsync create a local, temporary copy of the file and
then rename it over the target? And is it the central server that
is going down, or one of those 28 other machines? (I assume it's
the central one - I can't see an opportunity for renaming out
there...).
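
For reference, the pattern I have in mind is the usual
write-to-a-dot-temp-file-then-rename(2) sequence that rsync does by
default - roughly like the sketch below. This is just to illustrate
the syscall sequence, not rsync's actual code; the function name and
the assumption that the target is a bare filename in the current
directory are mine.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch of an rsync-style update: write the new contents to a
 * hidden temporary file next to the target, then rename() it over
 * the old file. */
static int update_file(const char *target, const char *data, size_t len)
{
	char tmp[4096];
	int fd;

	/* rsync-style hidden temporary name beside the target */
	snprintf(tmp, sizeof(tmp), ".%s.XXXXXX", target);

	fd = mkstemp(tmp);
	if (fd < 0)
		return -1;

	if (write(fd, data, len) != (ssize_t)len) {
		close(fd);
		unlink(tmp);
		return -1;
	}
	close(fd);

	/* atomically replace the old file - this rename() is what the
	 * XFS directory code ends up processing */
	if (rename(tmp, target) < 0) {
		unlink(tmp);
		return -1;
	}
	return 0;
}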

When you hit it again, could you grab the contents of the inode
with xfs_db (the inode number will be in the xfs_repair -n output,
e.g. 412035424 above; the last entry in the XFS FAQ describes how
to dump it), then mail that to me please? If you can get the source
and target names in the rename, that'll help a lot too... I can
explain how to use KDB to get those, but maybe you have another
debugger handy already?
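
Something along these lines should do it - the device name here is
just a placeholder for whichever device holds that filesystem, and
-r keeps xfs_db strictly read-only:

    # xfs_repair -n /dev/sdXN                       (no-modify check; reports the bad inode)
    # xfs_db -r -c "inode 412035424" -c "print" /dev/sdXN

That second command dumps the on-disk fields of the inode in question
without touching anything.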

