Re: CONFIG_RANDOM (compromise?)

Martin.Dalecki (dalecki@namu23.Num.Math.Uni-Goettingen.de)
Thu, 23 May 1996 19:38:34 +0200 (MET DST)


On Thu, 23 May 1996, Dan Weiskopf wrote:

> one of them. The core kernel developers are already quite sensitive
> to issues of unnecessary code inflation and guard against it as often
> as possible. The argument that this 16kb addition will lead by a
> natural progression to megabytes of worthless code just can't go
> through, although it's a common rhetorical trope.

I doubt that. Take a simple look at the bloat of floppy.c or ide.c. Especially
the so-called "features" of floppy.c are somewhat unusable. I have NEVER
managed to get any *reliable* use out of the so-called extended formats, for
example.

There is also *a lot* of redundant assertion code there. And I don't think
that floppies are such complicated devices. Just take a look at FreeBSD,
and how they are doing it (at least with floppies).

Second, it is not only space that concerns me, but also speed!
I did in fact a simple comparison using hdparm -t /dev/hda1.

My Linux system at home is supposedly quite midrange by now. It is
a P5/90MHz with a SIS504 chipset and an overclocked 40MHz PCI bus.
The main HD is a Quantum Fireball 1084. Without random.c it shows a maximal
sustained transfer rate of about 7.47 MB/sec. With it the performance
degrades significantly, to 7.13 MB/sec (really). This may be partially
due to the quite tiny 256kB L2 cache, but that is exactly what's most
common (inserting a hook into the IRQ handling causes additional execution
dislocation like nothing else)! For these reasons: SIZE is also SPEED.

Marcin