First I made lots of files/directories with this (quite stupid)
script:
time zsh -c 'while ((a < 10000))
do
    x="$RANDOM/$RANDOM"      # pick a random two-level directory name
    mkdir -p $x
    : >! $x/$RANDOM          # drop an empty, randomly named file into it
    ((a = a + 1))
done'
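As a rough sanity check (not something I timed), one can count what the loop
actually left behind:

find /tmp/foodir -type d | wc -l    # directories created
find /tmp/foodir -type f | wc -l    # files created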
After that, I first timed 'ls -al':
{2.0.33} [/tmp/foodir]% time ls -al | wc -l
1.20user 0.96sys 0:02.19real 98%CPU (125major+513minor)pagefaults 0swaps
8618
{2.1.114} [/tmp/foodir]% time ls -al | wc -l
1.25user 6.30sys 0:07.56real 99%CPU (125major+511minor)pagefaults 0swaps
8578
Notice how 2.1.114 spends much more time in the kernel (about six times
more than 2.0.33). Overall it is roughly 3.5 times slower.
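A possible follow-up (just a sketch, I haven't run it) would be to compare a
plain listing with a long one, to see whether the extra system time comes
from the per-entry lstat() calls rather than from reading the directory
itself:

# plain ls only has to read the directory; ls -al adds one lstat() per entry
time ls > /dev/null
time ls -al > /dev/null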
Then I tested deleting all those files, and the results were similarly bad
for 2.1.x. This time 2.1 spent somewhat less time in the kernel, but
overall it was still almost five times slower:
{2.0.33} [/tmp/foodir]% time rm -rf *
1.50user 33.35sys 1:30.74real 38%CPU (81major+20minor)pagefaults 0swaps
{2.1.114} [/tmp/foodir]% time rm -rf *
1.34user 21.42sys 7:13.88real 5%CPU (84major+20minor)pagefaults 0swaps
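The CPU figures (38% vs. 5%) suggest most of that time is spent waiting on
the disk rather than executing kernel code. Something like this (untested
sketch) run alongside the rm should make that visible:

vmstat 1 > /tmp/vmstat.log &    # log block I/O and CPU idle once a second
time rm -rf *
kill %1                         # stop the vmstat logger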
If I'm right, this implies that 2.1.114 is very bad for proxy and/or news
servers, both of which operate on lots of files, creating and deleting them
at a fast rate.
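For what it's worth, the kind of churn I have in mind looks roughly like
this (a sketch, not something I benchmarked): objects get created, used
briefly and unlinked again at a high rate.

zsh -c 'mkdir -p spool
a=0
while ((a < 5000))
do
    f=spool/$RANDOM.$a           # unique spool file name
    print "object $a" > $f       # create and write it
    rm -f $f                     # and delete it right away
    ((a = a + 1))
done'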
Anybody care to shed some light on this?
Later,
--
Posted by Zlatko Calusic                 E-mail: <Zlatko.Calusic@CARNet.hr>
---------------------------------------------------------------------
        Modem sex begins with a handshake.