RE: kernel performance issues 2.4.7 -> 2.4.17-pre8

From: Needham, Douglas (douglas.needham@lmco.com)
Date: Fri Dec 14 2001 - 07:53:18 EST


Thanks for the feedback.
Here are my latest results from re-running the same tests with the
following enhancements. I also added .17-rc1.

I did two things. The first was:

        echo 70 64 64 256 30000 3000 80 0 0 > /proc/sys/vm/bdflush

The second was:

        hdparm -X66 -d1 -u1 -m16 -c3 /dev/hda

following the document at:
http://linux.oreillynet.com/lpt/a//linux/2000/06/29/hdparm.html
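For anyone repeating these steps, a quick sanity check (a sketch only; the
device name /dev/hda is an assumption from the commands above) is to read
the settings back before trusting any benchmark numbers:

```shell
# Sketch: verify that the tuning actually took effect before benchmarking.
# /dev/hda is assumed from the hdparm command above -- substitute your disk.
cat /proc/sys/vm/bdflush   # should echo back the nine values written above
hdparm /dev/hda            # shows multcount, IO_support, unmaskirq, using_dma
hdparm -t /dev/hda         # timed buffered read, for a throughput baseline
```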

I did see some performance gains, but some questions remain.

My new questions are:
        Do we (people running Linux) need to do more work on tuning the
hardware in the current kernels?

> Note: before running the hdparm test on hda1, you should mount a 4k
> blocksize filesystem onto hda1.
      Where could I find more info on how to do this? Wouldn't changing the
blocksize of my filesystem kill my existing data? Or do I just need to
create some filesystem on the device that has a 4k blocksize? I hate to ask
a dumb question, but I had not heard of this being done before.

Thanks,

Doug

-----Original Message-----
From: Andrew Morton [mailto:akpm@zip.com.au]
Sent: Thursday, December 13, 2001 2:50 PM
To: Needham, Douglas
Cc: linux-kernel@vger.kernel.org
Subject: Re: kernel performance issues 2.4.7 -> 2.4.17-pre8

"Needham, Douglas" wrote:
>
> ...
> Overall I discovered that the Red Hat modified kernel beat the stock
> kernel hands down in throughput. Both the base Red Hat 7.2 kernel and the
> 7.2 update kernel (2.4.7-9 and 2.4.9-13 respectively) had far better
> throughput than the .10, .15, .14, .16, and .17-pre8 kernels.
>

The 60% drop in bonnie throughput going from 2.4.9 to 2.4.10 indicates that
something strange has happened. This hasn't been observed by others.

My suspicion would be that something is wrong with the IDE tuning in your
builds of later kernels. Please check this with `hdparm -t /dev/hda1' -
make sure that these numbers are consistent across kernel versions before
you even start.

Note: before running the hdparm test on hda1, you should mount a 4k
blocksize filesystem onto hda1. This changes the soft blocksize for the
device from 1k to 4k and, for some devices, speeds up access to the block
device by a factor of thirty. This is some bizarro kooky brokenness which
the 2.4.10 patch exposed and I'm still investigating...
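To spell out the "mount a 4k blocksize filesystem" step, a minimal sketch,
assuming a scratch partition you can afford to reformat (the /dev/hda1 and
/mnt/test names are just the example device from this thread and an assumed
mount point), would be:

```shell
# Sketch only: mke2fs WIPES the partition -- use a scratch partition,
# not one holding data you need.
mke2fs -b 4096 /dev/hda1       # build ext2 with a 4k blocksize
mkdir -p /mnt/test
mount /dev/hda1 /mnt/test      # mounting switches the device's soft blocksize to 4k
hdparm -t /dev/hda1            # now run the timed read test
```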

For dbench, errr, just don't bother using it, unless you're using
a large number of clients - 64 or more. At lower client numbers,
throughput is enormously dependent upon tiny changes in kernel
behaviour. Try this:

        echo 70 64 64 256 30000 3000 80 0 0 > /proc/sys/vm/bdflush

and see the numbers go up greatly.
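Since these are runtime knobs that revert on reboot anyway, a cautious
sketch (the /tmp path is an assumption) is to save the current values first
so the change can be undone without rebooting:

```shell
# Sketch: snapshot the current bdflush parameters before tuning.
cat /proc/sys/vm/bdflush > /tmp/bdflush.orig
echo 70 64 64 256 30000 3000 80 0 0 > /proc/sys/vm/bdflush
# To revert without rebooting:
#   echo $(cat /tmp/bdflush.orig) > /proc/sys/vm/bdflush
```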

-
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/






This archive was generated by hypermail 2b29 : Sat Dec 15 2001 - 21:00:28 EST