Re: 2.6.6-mm5 oops mounting ext3 or reiserfs with -o barrier

From: Jens Axboe
Date: Sun May 23 2004 - 12:32:29 EST


On Sun, May 23 2004, Lorenzo Allegrucci wrote:
> On Sunday 23 May 2004 18:56, Jens Axboe wrote:
>
> > > Untar, read, copy and remove the OpenOffice tarball, each test
> > > run with cold cache (mount/umount cycle).
> >
> > I understand that, I just don't see how you can call it a regression.
> > It's a given that barrier will be slower.
>
> I'm sorry, I didn't know :)
>
> I read from www.kerneltrap.org:
>
> Request barriers, also known as write barriers, provide a mechanism for

[snip]

Ah ok, I see the confusion! ext3 does nothing clever with barriers, it
just uses them for data integrity. A journalled fs is normally not safe
to run on write-back cached drives unless they are battery backed. You
could try reiser instead, it has a more intelligent use of barriers.
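Concretely, the two safe configurations being compared boil down to
something like the following sketch (device names and mount point are
illustrative, not taken from this thread):

```
# Safe without barriers: turn the drive's write-back cache off
hdparm -W0 /dev/hda
mount -t ext3 -o barrier=0 /dev/hda1 /mnt

# Safe with the cache on: barriers order the journal commits
# past the drive's cache
hdparm -W1 /dev/hda
mount -t ext3 -o barrier=1 /dev/hda1 /mnt
```

The unsafe-but-fast combination is write-back caching on with
barrier=0, which is what the default numbers below effectively measure.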

> > > > but yes of course -o barrier=1 is going to
> > > > be slower than default + write back caching. What you should compare is
> > > > without barrier support and hdparm -W0 /dev/hdX, if -o barrier=1 with
> > > > caching on is slower then that's a regression :-)
> > >
> > > hdparm -W0 /dev/hda
> > >
> > > ext3 (-o barrier=0)
> > > untar read copy remove
> > > 1m55.190s 0m27.633s 2m19.072s 0m21.348s
> > > 0m7.081s 0m1.189s 0m0.724s 0m0.083s
> > > 0m6.502s 0m3.244s 0m9.715s 0m1.633s
> > >
> > > ext3 (-o barrier=1)
> > > untar read copy remove
> > > 1m55.358s 0m23.831s 2m16.674s 0m21.508s
> > > 0m7.153s 0m1.200s 0m0.748s 0m0.087s
> > > 0m6.775s 0m3.358s 0m9.985s 0m1.781s
> > >
> > >
> > > hdparm -W1 /dev/hda
> > >
> > > ext3 (-o barrier=0)
> > > untar read copy remove
> > > 0m55.405s 0m26.230s 1m28.765s 0m20.766s
> > > 0m7.195s 0m1.199s 0m0.773s 0m0.081s
> > > 0m6.502s 0m3.359s 0m9.672s 0m1.868s
> > >
> > > ext3 (-o barrier=1)
> > > untar read copy remove
> > > 0m52.117s 0m28.502s 1m51.153s 0m25.561s
> > > 0m7.231s 0m1.209s 0m0.738s 0m0.071s
> > > 0m6.117s 0m3.191s 0m9.347s 0m1.635s
> >
> > Your results look a bit all over the map, how many runs are you
> > averaging for each one?
>
> Just one run, no averaging.
> Yes, it's not a scientific approach, but I have not enough time
> and this is my production machine :)
> By experience I can say that numbers between each run are quite
> stable and reproducible.
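For what it's worth, averaging a few runs is cheap to script. A
hypothetical helper along these lines would do (the command, run count,
and `avg_ms` name are illustrative, not from the thread; assumes GNU
date for nanosecond timestamps):

```shell
#!/bin/sh
# avg_ms N CMD... : run CMD N times and print the mean wall-clock
# time in milliseconds.
avg_ms() {
    n=$1; shift
    total=0
    i=0
    while [ "$i" -lt "$n" ]; do
        start=$(date +%s%N)          # nanoseconds since epoch (GNU date)
        "$@" > /dev/null 2>&1
        end=$(date +%s%N)
        total=$(( total + (end - start) ))
        i=$(( i + 1 ))
    done
    # integer mean, converted from nanoseconds to milliseconds
    echo $(( total / n / 1000000 ))
}

# Example: average three runs of a trivial command
avg_ms 3 sleep 1
```

Each test step here (untar, read, copy, remove) could be wrapped the
same way, with a mount/umount cycle between runs to keep the cache cold.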

It just looks odd that eg reads vary as much as they do, and that -o
barrier=1 makes -W0 reads faster (faster than -W1, even). remove looks
reasonable for -W1, but -W0 is still faster there. That is _really_ odd.

--
Jens Axboe
