Re: reiser4 plugins

From: Jim Crilly
Date: Mon Jun 27 2005 - 23:27:32 EST


On 06/28/05 06:03:25AM +0200, Prakash Punnoor wrote:
> Jim Crilly wrote:
> > On 06/28/05 01:06:54AM +0200, Prakash Punnoor wrote:
> >
> >>So I gave ext3 a try. Very robust, but at the same time slooow. I couldn't
> >>bear it after some months. So I gave xfs another try. Yes, now it felt much
> >>better. Still not as fast as reiserfs, IIRC, but better than the first time
> >>I tried. I still have xfs on / and it works pretty well, and is rather
> >>robust against hard locks, with about the same amount of data loss as
> >>reiserfs. But what annoys me very much is that I have to run xfs_repair by
> >>hand and by booting from another partition. Even after a hard lock, the
> >>partition mounts w/o problems and everything seems OK, but it only seems
> >>that way. In fact, after some hours/days of use, you'll notice oddities,
> >>like files or directories which cannot be removed and things like that.
> >>After running xfs_repair, everything is back in order.
> >
> >
> > I don't know what was going on with your systems, but I've been using XFS
> > since the original 1.0 Linux release from SGI, and I'd guess I've had to run
> > xfs_repair fewer than 10 times; most of those were on Alpha and Sparc64
> > before issues with those arches got ironed out.
>
> Perhaps it is due to the fact that I use xfs on software RAID-0 and both HDs
> have their 8MB write-back caches enabled? So, all in all, 16MB needs to be
> committed at/before a lock-up, maybe too much for xfs? (This situation was no
> problem for ext3, though. Thinking again, I never used reiser V3 or V4 on the
> RAID-0, so my comparison might not have been fair.)

Maybe; I've never used XFS on software RAID-0. The Sparc64 I mentioned has its
XFS filesystem on software RAID-1 and that's introduced no problems, but I've
seen way too many drives die to risk running RAID-0.
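
If you want to rule the drive write cache in or out, something like the rough
Python sketch below might help. It's only a sketch: it assumes hdparm and
xfs_repair are in the PATH and that /dev/sda, /dev/sdb and /dev/md0 are the
right names for the member disks and the array, so adjust those to your setup.
It just reports the write-cache state of each disk and then runs xfs_repair in
no-modify mode, which reports problems without touching anything:

#!/usr/bin/env python3
# Sketch: report drive write-cache state, then dry-run xfs_repair.
# Assumes hdparm and xfs_repair are installed; the device names below
# are examples only. Run as root, against an unmounted filesystem.
import subprocess

MEMBER_DISKS = ["/dev/sda", "/dev/sdb"]  # the two RAID-0 member disks (example names)
MD_DEVICE = "/dev/md0"                   # the RAID-0 array holding the XFS filesystem

def write_cache_state(disk):
    # "hdparm -W <disk>" with no value just reports the current
    # write-caching setting, e.g. " write-caching =  1 (on)".
    out = subprocess.run(["hdparm", "-W", disk],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def check_xfs(device):
    # "xfs_repair -n" is no-modify mode: it scans and reports problems
    # but writes nothing, so it's safe to run before a real repair.
    return subprocess.run(["xfs_repair", "-n", device]).returncode

if __name__ == "__main__":
    for disk in MEMBER_DISKS:
        print(write_cache_state(disk))
    rc = check_xfs(MD_DEVICE)
    print("xfs_repair -n exit code: %d (0 means it found nothing)" % rc)

Running the no-modify pass right after a hard lock, before putting the
filesystem back into service, would at least tell you whether the oddities you
saw days later were already there.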

>
> Prakash

Jim.