Re: [PATCH] /dev/random: Insufficient of entropy on many architectures

From: Jörn Engel
Date: Thu Sep 12 2013 - 19:06:34 EST


On Tue, 10 September 2013 15:08:12 -0700, John Stultz wrote:
> Though
> I probably should be hesitant with my suggestions, as I'm not well
> versed in RNG theory.

The basic principle of Ted's RNG is very simple and quite sane:
- You collect as much data as possible, some of which is (hopefully)
unpredictable.
- All the data gets dumped into a small buffer.
- When reading from the buffer, you create a crypto-hash of the entire
buffer. Even if most of the buffer is predictable, the few
unpredictable bits will randomly flip every output bit.
- Half of the hash gets returned to the reader, the other half gets
added back into the pool.

It doesn't matter if you collect predictable data - it neither helps
nor hurts. But you should collect as much unpredictable data as
possible and do it as cheaply as possible. If you want to improve the
RNG, you either collect more data, collect better (less predictable)
data or make the collection cheaper.

Jörn

--
People really ought to be forced to read their code aloud over the phone.
That would rapidly improve the choice of identifiers.
-- Al Viro