Re: Using Yarrow in /dev/random

From: Sandy Harris (sandy@storm.ca)
Date: Mon Sep 11 2000 - 15:09:56 EST


Pravir Chandra wrote:
>
> I've been working to change the implementation of /dev/random over to the
> Yarrow-160a algorithm created by Bruce Schneier and John Kelsey.

For some old discussions on related topics, see:
http://www.openpgp.net/random/

> We've been
> working on parallel development for Linux and NT so that the algorithms are
> matching. The Yarrow 160A algorithm is a variant of Yarrow-160 that has come
> about from discussions with John Kelsey. We've been in contact with him
> throughout our development effort.
>
> In any case, this requires use of a hash function (sha1) and a block cipher
> (3des).

I think (I'm in a minority here) that 3DES would be a mistake. DES was
designed for hardware implementation and is inconvenient to do in software.
Schneier's Applied Cryptography says more modern ciphers like CAST-128 or
Blowfish run a bit over 3 times as fast as single DES in software; since
3DES runs at roughly a third of the speed of single DES, that works out to
roughly 10x the speed of triple DES.

Several of the AES candidate ciphers included "faster than DES and more
secure than 3DES" among their stated design goals. My guess is that they
have achieved this.

> We were going to do a replacement of /dev/random (it's nearly finished)

I'd be interested in seeing the code.

> but in retrospect, it seemed that I hadn't looked into the current state of
> incorporating crypto into the kernel. If anyone has any suggestions, comments,
> questions, please email.

I've argued that the current /dev/random has a design flaw: /dev/urandom
draws on the same randomness pool, so heavy use of /dev/urandom can drain
the pool and cause /dev/random to block.
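
Roughly what I mean, as a quick sketch (mine, not well tested; it assumes
the entropy_avail sysctl the random driver exports under
/proc/sys/kernel/random):

  /* Watch the kernel's entropy estimate while reading /dev/urandom. */
  #include <stdio.h>

  static long entropy_avail(void)
  {
      long bits = -1;
      FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");

      if (f) {
          fscanf(f, "%ld", &bits);
          fclose(f);
      }
      return bits;
  }

  int main(void)
  {
      char buf[4096];
      size_t n;
      FILE *u = fopen("/dev/urandom", "r");

      if (!u)
          return 1;
      printf("entropy before read: %ld bits\n", entropy_avail());
      /* urandom never blocks, but its output is charged against the
       * same pool that /dev/random draws from. */
      n = fread(buf, 1, sizeof(buf), u);
      printf("read %lu bytes, entropy now: %ld bits\n",
             (unsigned long) n, entropy_avail());
      fclose(u);
      return 0;
  }

Keep draining like that and readers of /dev/random will eventually block
until fresh entropy arrives.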

If I understood him correctly, /dev/random's author, Ts'o, does not consider
that a serious flaw. If you need lots of random numbers, you should be using
a userspace library, not either of the random devices. From one of his
messages in a recent thread with the subject "/dev/random blocks forever ...":

> My recommendation has always been that applications use /dev/random for
> long-term public key generation, and to seed a pseudo-RNG for session
> keys. The reasoning behind this is that true, high-quality random
> numbers are a very precious resource (even with a real, high-quality
> random number generator, it is still a limited resource), and so it's
> better to periodically get randomness from /dev/random, and then use
> that as seeds for a cryptographically secure, pseudo-random number
> generator.
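
In code, the shape of that advice is roughly the following (my sketch, not
Ts'o's code and not my library; it borrows OpenSSL's SHA1() purely as a
convenient stand-in for a proper second-stage generator, and the names
get_seed/next_block are mine):

  /* Seed once from /dev/random, then stretch the seed in user space by
   * hashing seed||counter for each block of session-key material.
   * Illustration only: no reseeding, no forward secrecy, not reviewed. */
  #include <stdio.h>
  #include <string.h>
  #include <openssl/sha.h>      /* link with -lcrypto */

  #define SEED_BYTES 20

  static unsigned char seed[SEED_BYTES];
  static unsigned long counter;

  static int get_seed(void)
  {
      FILE *f = fopen("/dev/random", "r");   /* may block for entropy */

      if (!f)
          return -1;
      if (fread(seed, 1, SEED_BYTES, f) != SEED_BYTES) {
          fclose(f);
          return -1;
      }
      fclose(f);
      return 0;
  }

  /* Fill out[] with the next 20 pseudo-random bytes. */
  static void next_block(unsigned char out[SHA_DIGEST_LENGTH])
  {
      unsigned char buf[SEED_BYTES + sizeof(counter)];

      memcpy(buf, seed, SEED_BYTES);
      memcpy(buf + SEED_BYTES, &counter, sizeof(counter));
      counter++;
      SHA1(buf, sizeof(buf), out);
  }

  int main(void)
  {
      unsigned char key[SHA_DIGEST_LENGTH];
      int i;

      if (get_seed() != 0)
          return 1;
      next_block(key);                       /* e.g. one session key */
      for (i = 0; i < SHA_DIGEST_LENGTH; i++)
          printf("%02x", key[i]);
      printf("\n");
      return 0;
  }

A real second stage would also rekey itself from /dev/random now and then
and wipe old state; that is what the library I describe below tries to do
properly.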

I've written such a generator based on the Rijndael cipher, and posted an
early, buggy version here. Mail me for the current version if you want it.
It is a library of routines to be called from an application that requires
a lot of high-grade random numbers. It is neither well reviewed nor well
tested yet.

My inclination would be to keep the existing /dev/random entropy
collection code, which I consider superior to Yarrow's. Use it as the
first stage of a Yarrow-like design and add my Rijndael-based code as
the second stage.

For 2.4, Ts'o has changed the design considerably, adding a second
stage (hashing, rather than Yarrow's block cipher) and implementing
"catastrophic reseeding" as recommended in the Yarrow papers. Since
I consider that the most important design idea in those papers, I
suspect that this version of /dev/random answers all my concerns.
I'm not certain because I haven't looked at the code in detail yet.
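
For anyone who hasn't read the papers, the idea as I understand it looks
roughly like this (my sketch; SHA1 again as a stand-in, and the structure
names and 160-bit threshold are just for illustration):

  /* Entropy samples are mixed into a pool, but the output generator's
   * key is only replaced once a large quantum of entropy has been
   * collected, so an attacker who knew the old key cannot follow the
   * state across the reseed by guessing a few bits at a time. */
  #include <string.h>
  #include <openssl/sha.h>      /* link with -lcrypto */

  #define RESEED_THRESHOLD_BITS 160

  struct generator {
      unsigned char key[SHA_DIGEST_LENGTH];    /* second-stage key */
      unsigned char pool[SHA_DIGEST_LENGTH];   /* accumulating pool */
      unsigned int  pool_bits;                 /* entropy estimate */
  };

  /* Mix a sample into the pool; the key is untouched unless the
   * threshold is crossed. */
  static void add_sample(struct generator *g, const unsigned char *sample,
                         size_t len, unsigned int est_bits)
  {
      unsigned char buf[SHA_DIGEST_LENGTH + 64];
      size_t n = len < 64 ? len : 64;

      memcpy(buf, g->pool, SHA_DIGEST_LENGTH);
      memcpy(buf + SHA_DIGEST_LENGTH, sample, n);
      SHA1(buf, SHA_DIGEST_LENGTH + n, g->pool);
      g->pool_bits += est_bits;

      if (g->pool_bits >= RESEED_THRESHOLD_BITS) {
          /* Catastrophic reseed: fold the whole pool into the key at
           * once, then start the pool over. */
          unsigned char tmp[2 * SHA_DIGEST_LENGTH];

          memcpy(tmp, g->key, SHA_DIGEST_LENGTH);
          memcpy(tmp + SHA_DIGEST_LENGTH, g->pool, SHA_DIGEST_LENGTH);
          SHA1(tmp, sizeof(tmp), g->key);
          memset(g->pool, 0, sizeof(g->pool));
          g->pool_bits = 0;
      }
  }

  int main(void)
  {
      struct generator g;
      unsigned char sample[8] = "tick";
      int i;

      memset(&g, 0, sizeof(g));
      for (i = 0; i < 200; i++)
          add_sample(&g, sample, sizeof(sample), 1);  /* credit 1 bit each */
      return 0;
  }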
  
> Also, does anyone have any complaints against incorporating a new
> /dev/random into the kernel?

It would need extensive review and analysis before that should be
considered.