Re: /dev/random in 2.4.6

From: Gérard Roudier (groudier@free.fr)
Date: Mon Aug 20 2001 - 14:30:05 EST


On Mon, 20 Aug 2001, Chris Friesen wrote:

> Alex Bligh - linux-kernel wrote:
>
> > An alternative approach to all of this, perhaps, would be to use extremely
> > finely grained timers (if they exist), in which case more bits of entropy
> > could perhaps be derived per sample, and perhaps sample them on
> > more operations. I don't know what the finest resolution timer we have
> > is, but I'd have thought people would be happier using ANY existing
> > mechanism (including network IRQs) if the timer resolution was (say)
> > 1 nanosecond.
>
> Why don't we also switch to a cryptographically secure algorithm for
> /dev/urandom? Then we could seed it with a value from /dev/random and we would
> have a known number of cryptographically secure pseudorandom values. Once we
> reach the end of the prng cycle, we could re-seed it with another value from
> /dev/random.
>
> Would this be a valid solution, or am I totally off my rocker?

The latter, unless you only want to protect against lame attackers :-)

Given knowledge of the seed and of the algorithm used, everything becomes
fully deterministic for an attacker -> entropy _zero_.
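
A toy user-space illustration of what I mean (libc's srandom()/random()
is only a stand-in here for any deterministic algorithm, including a
cryptographically strong one):

#include <stdio.h>
#include <stdlib.h>

/* print n pseudo-random bytes derived from a given seed */
static void stream(unsigned int seed, int n)
{
        srandom(seed);
        while (n--)
                printf("%02x", (unsigned) (random() & 0xff));
        printf("\n");
}

int main(void)
{
        stream(0x12345678, 16);         /* "your" generator       */
        stream(0x12345678, 16);         /* the attacker's replay  */
        return 0;
}

Both lines come out identical; whoever knows the seed and the algorithm
owns the whole stream.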

For example, once an attacker has observed enough of your magic random
data to work out the algorithm, a whole prng cycle contains, for that
attacker, no more random bits than there are bits in the seed value.
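
Pushing the same toy a bit further, with a hypothetical 32-bit seed:
once a few output bytes have leaked, exhaustive search over the seed
recovers it, and with it every byte of the cycle (a sketch only, and it
would take a while to run, but 2^32 is not a lot of work for a
determined attacker):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CAPTURED 8      /* bytes of "random" output the attacker saw */

static void gen(unsigned int seed, unsigned char *out, int n)
{
        srandom(seed);
        while (n--)
                *out++ = random() & 0xff;
}

int main(void)
{
        unsigned char seen[CAPTURED], trial[CAPTURED];
        unsigned int secret = 0xdeadbeef;       /* the only unknown bits */
        unsigned int guess = 0;

        gen(secret, seen, CAPTURED);            /* what leaked */

        for (;;) {                              /* at most 2^32 tries */
                gen(guess, trial, CAPTURED);
                if (memcmp(trial, seen, CAPTURED) == 0)
                        break;
                if (guess++ == 0xffffffff)
                        return 1;               /* search exhausted */
        }
        printf("seed recovered: %#x\n", guess);
        return 0;
}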

> Chris

  Gérard.
