Alex Bligh - linux-kernel wrote:
> An alternative approach to all of this, perhaps, would be to use extremely
> finely grained timers (if they exist), in which case more bits of entropy
> could perhaps be derived per sample, and perhaps sample them on
> more operations. I don't know what the finest resolution timer we have
> is, but I'd have thought people would be happier using ANY existing
> mechanism (including network IRQs) if the timer resolution was (say)
> 1 nanosecond.
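For illustration only, a user-space sketch of the idea in the quote above: sample a fine-grained clock on each event and keep only the low-order bits. clock_gettime() with CLOCK_MONOTONIC is assumed here purely for the example; the actual kernel entropy paths are different, and real hardware resolution may be much coarser than 1 ns.

/* Sketch, not kernel code: read a nanosecond-resolution clock and keep
 * only its low-order bits as the entropy contribution for one event. */
#include <stdint.h>
#include <time.h>

static uint32_t sample_timer_entropy(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	/* The low bits of the nanosecond field carry the jitter;
	 * the high bits are largely predictable. */
	return (uint32_t)ts.tv_nsec & 0xff;
}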
Why don't we also switch to a cryptographically secure algorithm for
/dev/urandom? Then we could seed it with a value from /dev/random, giving us a
known amount of cryptographically secure pseudorandom output. Once we reach the
end of the PRNG's cycle, we could re-seed it with another value from
/dev/random.
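Roughly something like this (a user-space sketch only, to show the shape of it; OpenSSL's RAND_* calls stand in for whatever CSPRNG the kernel would actually use, and the reseed interval is a made-up placeholder):

/* Illustration of the proposal, not kernel code: hand out CSPRNG output,
 * re-seeding from /dev/random after a fixed amount has been produced. */
#include <stdio.h>
#include <stdlib.h>
#include <openssl/rand.h>

#define RESEED_AFTER_BYTES (1024 * 1024)	/* placeholder reseed interval */

static void reseed_from_dev_random(void)
{
	unsigned char seed[32];
	FILE *f = fopen("/dev/random", "rb");

	if (!f || fread(seed, 1, sizeof(seed), f) != sizeof(seed)) {
		perror("/dev/random");
		exit(1);
	}
	fclose(f);
	RAND_seed(seed, sizeof(seed));		/* mix true entropy into the PRNG */
}

static void get_urandom_bytes(unsigned char *out, size_t n)
{
	static size_t bytes_since_reseed = RESEED_AFTER_BYTES;

	if (bytes_since_reseed >= RESEED_AFTER_BYTES) {
		reseed_from_dev_random();
		bytes_since_reseed = 0;
	}
	RAND_bytes(out, (int)n);
	bytes_since_reseed += n;
}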
Would this be a valid solution, or am I totally off my rocker?
Chris
-- 
Chris Friesen                   | MailStop: 043/33/F10
Nortel Networks                 | work:  (613) 765-0557
3500 Carling Avenue             | fax:   (613) 765-2986
Nepean, ON K2H 8E9 Canada       | email: cfriesen@nortelnetworks.com