Getting Linux to buffer /dev/random


I need a reasonable supply of high-quality random data for an application I'm writing. Linux provides the /dev/random file for this purpose, which is ideal; however, because my server is a single-service virtual machine, it has very limited sources of entropy, meaning /dev/random quickly becomes exhausted.

I've noticed that if I read from /dev/random, I get 16 or so random bytes before the device blocks while it waits for more entropy:

    [duke@poopz ~]# hexdump /dev/random
    0000000 f4d3 8e1e 447a e0e3 d937 a595 1df9 d6c5
    <process blocks...>

If I terminate the process, go away for an hour, and repeat the command, again only 16 or so bytes of random data are produced.

However, if I instead leave the command running for the same amount of time, much, much more random data is collected. I assume that over the course of a given time period the system produces plenty of entropy, and that Linux uses it if you happen to be reading from /dev/random, but discards it if you are not. If that is the case, my question is:

Is it possible to configure Linux to buffer /dev/random, so that reading from it yields much larger bursts of high-quality random data?
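One way to observe the pool behaviour behind this question is the kernel's entropy accounting in procfs; these paths are standard on Linux, and a short sketch to read them might look like:

```python
# Inspect the kernel's entropy accounting via procfs (standard Linux paths).
# entropy_avail reports the bits of entropy the kernel currently credits;
# poolsize reports the total capacity of the input pool, in bits.
with open("/proc/sys/kernel/random/entropy_avail") as f:
    avail = int(f.read())
with open("/proc/sys/kernel/random/poolsize") as f:
    size = int(f.read())
print("entropy available: %d of %d bits" % (avail, size))
```

Watching entropy_avail climb toward poolsize and stop is consistent with the "fills up, then discards" behaviour described above: the pool has a fixed capacity and extra incoming entropy cannot be banked beyond it.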

It wouldn't be difficult for me to buffer /dev/random as part of my program, but I feel doing it at the system level would be more elegant. I also wonder whether having Linux buffer random data in memory would have security implications.
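For reference, the in-program approach mentioned above can be sketched with a non-blocking read loop: open /dev/random with O_NONBLOCK, wait for the device to become readable, and drain whatever bytes are available into a user-space buffer. The names (WANT, pool) and the 5-second timeout are illustrative choices, not from the original post:

```python
import os
import select

WANT = 64  # illustrative target: how many random bytes to accumulate
fd = os.open("/dev/random", os.O_RDONLY | os.O_NONBLOCK)
pool = bytearray()
try:
    while len(pool) < WANT:
        # Block (up to 5 s) until the device reports readable entropy.
        ready, _, _ = select.select([fd], [], [], 5.0)
        if not ready:
            break  # nothing arrived in time; use what we have so far
        try:
            pool += os.read(fd, WANT - len(pool))
        except BlockingIOError:
            continue  # another reader drained the pool first; retry
finally:
    os.close(fd)
print("buffered %d random bytes" % len(pool))
```

This drains entropy opportunistically instead of letting the kernel discard it, at the cost of keeping the random bytes in process memory, which is exactly the security trade-off the question raises.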

Sounds like you need an entropy daemon that feeds the entropy pool from other sources, such as rngd (from rng-tools) or haveged.
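For illustration, the core operation such a daemon performs is the RNDADDENTROPY ioctl on /dev/random, which both injects bytes and credits them to the entropy count. A minimal sketch, assuming the x86-64 ioctl number and using os.urandom as a stand-in entropy source (a real daemon would use a hardware RNG or similar); the ioctl requires root, so the function reports failure otherwise:

```python
import fcntl
import os
import struct

# _IOW('R', 0x03, int[2]) from <linux/random.h>; value assumed for x86-64.
RNDADDENTROPY = 0x40085203


def add_entropy(data: bytes, entropy_bits: int) -> bool:
    """Feed `data` into the kernel entropy pool, crediting `entropy_bits`.

    Mirrors struct rand_pool_info: entropy_count, buf_size, then the bytes.
    Returns False when run without the required privileges.
    """
    req = struct.pack("ii%ds" % len(data), entropy_bits, len(data), data)
    try:
        with open("/dev/random", "wb") as f:
            fcntl.ioctl(f, RNDADDENTROPY, req)
        return True
    except PermissionError:
        return False


# Demo: credit zero bits for 32 stand-in bytes (harmless even as root).
print(add_entropy(os.urandom(32), 0))
```

An entropy daemon is essentially this call in a loop, fed by a source the kernel does not already harvest, which replenishes the pool so /dev/random readers stop blocking.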

