Since switching to 2.4 in situations where /dev/random is heavily
used, I've been seeing a recurring issue with /dev/random more and
more often.
After a few days of occasional use by sshd and our own cryptographic
applications, we see entropy_avail go to 0 and requests to /dev/random
block. The blocked processes remain killable, but no new entropy
appears until a reboot is performed.
Robert Love did some /dev/random maintenance a while back, and his
netdev patches are essential for low disk-activity systems. While his
patches have helped the situation greatly, it appears that something
in the random code can permanently exhaust the pool during entropy
extraction. Perhaps some issue when the entropy count is near zero at
the time of a read?
In any case, this is becoming a major pain throughout the many systems
and distribution mechanisms that we're running, and at this point I
think it really should be looked at.
I will try to take a look at the code at some point, but I'd really
appreciate it if someone with some previous knowledge of this area of
the kernel could take a look.
This problem has occurred on many different SMP configurations
(varying procs, motherboards, SCSI, IDE, RAM, etc.) across the whole
2.4 series, although Robert's much-appreciated fixes a few revs ago
helped quite a bit. I haven't been able to test on UP, since we're
exclusively SMP.
/dev/urandom is indeed an option for _some_ situations, but I'd rather
fix the problem for the good of everyone else, and I'd like to reap the
benefits of /dev/random vs. /dev/urandom.
Thanks much,
--
Ken. brownfld@irridia.com
This archive was generated by hypermail 2b29 : Thu Feb 07 2002 - 21:00:12 EST