On 11/08/06, Nathan Scott <nathans@xxxxxxx> wrote:
> On Thu, Aug 10, 2006 at 01:31:35PM +0200, Jesper Juhl wrote:
> > On 08/08/06, Nathan Scott <nathans@xxxxxxx> wrote:
> > ...
> > Ok, I booted the server with 2.6.18-rc4 + your patch. Things went well
> > for ~3 hours and then blew up - not in the same way though.
> > The machine was under pretty heavy load receiving data via rsync when
> > the following happened :
> > Filesystem "dm-51": XFS internal error xfs_trans_cancel at line 1138
> > of file fs/xfs/xfs_trans.c. Caller 0xc0210e3f
> > [<c0103a3c>] show_trace_log_lvl+0x152/0x165
> > [<c0103a5e>] show_trace+0xf/0x13
> > [<c0103b59>] dump_stack+0x15/0x19
> > [<c0213474>] xfs_trans_cancel+0xcf/0xf8
> > [<c0210e3f>] xfs_rename+0x64d/0x936
> > [<c0226286>] xfs_vn_rename+0x48/0x9f
> > [<c016584e>] vfs_rename_other+0x99/0xcb
> > [<c0165a36>] vfs_rename+0x1b6/0x1eb
> > [<c0165bda>] do_rename+0x16f/0x193
> > [<c0165c45>] sys_renameat+0x47/0x73
> Thanks Jesper. Hmm, lessee - this is a cancelled dirty rename
> transaction ... could be on-disk dir2 corruption (any chance this
> filesystem was affected by 2.6.17's endian bug?)
No. The machine in question never ran any 2.6.17.* kernels. Its old
kernel was 18.104.22.168 (UP), then I tried 2.6.18-rc3-git3 (SMP) as
previously reported, then I tried 2.6.18-rc4 + your XFS patch.
>, or something
> else entirely. No I/O errors in the system log earlier or anything
> like that?
No I/O errors in the logs that I could find, no.
> > I was doing an lvmextend +xfs_resize of a different (XFS) filesystem
> > on the same server at roughly the same time. But I'm not sure if
> > that's related.
> That won't be related, no.
> > I'm currently running xfs_repair on the fs that blew up.
> OK, I'd be interested to see if that reported any directory (or
> other) issues.
It did not.
What happened was this (I didn't save the output, sorry, so the below
is from memory):
When I ran xfs_repair it first asked me to mount the filesystem to
replay the log, then unmount it again, then run xfs_repair again. I
did that. No errors during that mount or umount.
Then, when I ran xfs_repair again, it ran through phases 1..n
(spending approximately 1 hour on this) without any messages saying
that something was wrong. When it was done I mounted the fs again and
it reported mounting a clean filesystem.
Btw, I have left the machine running with the 2.6.18-rc4 kernel and
it's been running fine since.
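For reference, the recovery sequence described above can be sketched as
the script below. The device and mount point names are placeholders,
not the ones from the original report, and it defaults to only printing
the commands rather than running them:

```shell
#!/bin/sh
# Sketch of the log-replay + repair sequence described above.
# DEV and MNT are placeholder names, not taken from the original report.
DEV=${DEV:-/dev/mapper/vg0-dm51}
MNT=${MNT:-/mnt/recover}
DRY_RUN=${DRY_RUN:-1}   # default: print the commands instead of executing

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# 1. Mount once so the kernel replays the dirty XFS log, then unmount.
run mount -t xfs "$DEV" "$MNT"
run umount "$MNT"

# 2. With the log replayed, xfs_repair can check the filesystem.
run xfs_repair "$DEV"

# 3. Mount again; a clean mount means the fs is usable.
run mount -t xfs "$DEV" "$MNT"
```

Set DRY_RUN=0 (as root, with the filesystem otherwise unmounted) to
actually execute the steps.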