Re: ext2fs "performace"

Harald Koenig (koenig@tat.physik.uni-tuebingen.de)
Fri, 21 Jun 1996 07:59:59 +0200 (MET DST)


> A 1 GB file on a 1k block ext2 filesystem will have 4096 indirect
> blocks and a few dindirect blocks. Deleting the file will involve
> essentially doing a random-access seek and read of each of these
> blocks, so if it takes 100 seconds you are getting over 40 seeks/reads
> per second.

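The figures in the quoted paragraph can be sanity-checked with a little arithmetic. A quick Python sketch (assuming ext2's 4-byte block pointers, which is what gives 256 pointers per 1k block; the 12 direct blocks and the triply-indirect tail are ignored since they barely change the totals):

```python
# ext2 geometry for a 1k-block filesystem with 4-byte block pointers
BLOCK_SIZE = 1024
PTRS_PER_BLOCK = BLOCK_SIZE // 4        # 256 pointers per indirect block

FILE_SIZE = 1 << 30                     # 1 GB
data_blocks = FILE_SIZE // BLOCK_SIZE   # 1048576 data blocks

# each singly-indirect block maps 256 data blocks
indirect = data_blocks // PTRS_PER_BLOCK    # 4096
# each doubly-indirect block maps 256 singly-indirect blocks
dindirect = indirect // PTRS_PER_BLOCK      # 16

print(indirect, dindirect)              # 4096 16
```

which matches the "4096 indirect blocks and a few dindirect blocks" quoted above.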
sounds reasonable (sigh ;-) and matches the Bonnie result (my Bonnie version
is from 1991, if that matters):

# time /src/Bonnie/Bonnie
File './Bonnie.1200', size: 104857600
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1...Seeker 3...Seeker 2...start 'em...done...done...done...
-------Sequential Output-------- ---Sequential Input-- --Random--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
100 1494 93.0 4652 77.6 1988 70.7 1488 93.3 5193 77.5 55.0 8.7

5:21.76 real, 101.61 user, 102.42 sys, 63% cpu
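For comparison, the seek rate implied by the quoted 100-second delete lines up with the 55 random seeks/sec Bonnie measures above (sketch only; the block counts and the 100 s figure are taken from the quoted mail, not remeasured here):

```python
# seeks implied by freeing a 1 GB file's metadata blocks
# (per the quoted mail: ~4096 indirect + a handful of dindirect blocks)
metadata_blocks = 4096 + 16
delete_time = 100.0                   # seconds, the quoted estimate

rate = metadata_blocks / delete_time
print(round(rate, 1))                 # ~41 seeks/sec
```

so a delete doing one random read per metadata block would indeed run at roughly 40 seeks/sec, in the same ballpark as Bonnie's 55.0/sec random-seek result.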

at first I thought it couldn't be random seeks taking all the time, since
usually I can hear such seeks on this disk quite well, but removing
this big file was almost inaudible.
mounting this fs (which takes 11 sec, btw) or running the Bonnie seek test
can be "heard" with no problem at all.

Harald

-- 
All SCSI disks will from now on                     ___       _____
be required to send an email notice                0--,|    /OOOOOOO\
24 hours prior to complete hardware failure!      <_/  /  /OOOOOOOOOOO\
                                                    \  \/OOOOOOOOOOOOOOO\
                                                      \ OOOOOOOOOOOOOOOOO|//
Harald Koenig,                                         \/\/\/\/\/\/\/\/\/
Inst.f.Theoret.Astrophysik                              //  /     \\  \
koenig@tat.physik.uni-tuebingen.de                     ^^^^^       ^^^^^