There are no reports of anything blocking now (before the jitterentropy kernel module gets loaded) or later. Without any reports of anything blocking, I don’t think blocking would be an issue even if it did block somewhere. Blocking, as far as I understand, just means slower. Not catastrophic.
In short, libprngwrap enhances the PRNGs from libc. libprngwrap replaces the [s]rand, [s]random and [*]rand48 library calls with functions that get random values from /dev/urandom. This is supposed to be more secure.
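As I understand the description, the replacement amounts to something like the following sketch (Python rather than libprngwrap’s actual C LD_PRELOAD code; the function name is mine):

```python
import os

RAND_MAX = 2**31 - 1  # glibc's RAND_MAX for rand(3)

def urandom_rand():
    """Return a value in [0, RAND_MAX], like rand(3), but sourced
    from the kernel CSPRNG instead of libc's deterministic PRNG.
    os.urandom() reads from the same pool as /dev/urandom."""
    value = int.from_bytes(os.urandom(4), "little")
    return value & RAND_MAX  # mask down to rand(3)'s range
```

The point being: the sequence is no longer reproducible from a seed, which is exactly why srand() becomes a no-op under such a wrapper.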
Does not sound like a /dev/random vs /dev/urandom thing.
When your Linux system uses a lot of entropy data from the /dev/random or /dev/urandom device, the pool might run empty and stall your application (in the case of /dev/random) or return less secure data (in the case of /dev/urandom).
There we have it again: “less secure data (in case of /dev/urandom)”.
Not seeing the benefit of this package over running plain cat /proc/sys/kernel/random/entropy_avail. However, it does provide an easy-to-read CLI interface for server stats such as uptime.
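For reference, the underlying check really is trivial; a minimal sketch (Python; the function name is mine). Note that on recent kernels (reportedly 5.18+, after the random subsystem rework) this file simply reports a full pool of 256 bits at all times:

```python
def entropy_avail(path="/proc/sys/kernel/random/entropy_avail"):
    """Return the kernel's current entropy estimate in bits,
    read from the same /proc file the tool displays."""
    with open(path) as f:
        return int(f.read().strip())
```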
True. However, that is a separate issue. For the sake of the “/dev/random vs. /dev/urandom” discussion, it can be simplified:
in case of Kicksecure / Whonix: just use /dev/random
VideoEntropyd is like timer-entropyd, but for a ‘video-4-linux’-compatible device, e.g. a TV card or a webcam.
In our security guide we recommend disabling webcams in the BIOS, covering them or even physically removing them. Is it therefore worth bothering with? If a webcam was blocked with a sticker, would there be any noise left that would generate randomness? The author will probably say we should check for ourselves. Do any contributors come to mind who would be up to test that? Worth it? (This would be done on the host, not inside a VM. Any entropy enhancement on the host also benefits the VMs.)
EGD, the Entropy Gathering Daemon, looks quite old already. Worth contacting its author to ask if it could still be useful nowadays? Would it be useful to redirect its entropy output to /dev/random?
Once there is any generator that produces randomness, I can do the polish. That is: packaging it, running it at early boot as a daemon, and redirecting the output of the randomness generator to /dev/random.
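One detail on the “redirecting to /dev/random” step: plain writes to /dev/random mix the data into the pool but do not credit any entropy; crediting requires the RNDADDENTROPY ioctl and root. A minimal sketch of what a feeder daemon would do (Python; the helper names are mine, the ioctl constant is Linux’s):

```python
import fcntl
import struct

# _IOW('R', 0x03, int[2]) -- Linux RNDADDENTROPY ioctl number
RNDADDENTROPY = 0x40085203

def rand_pool_info(data: bytes, entropy_bits: int) -> bytes:
    """Pack a struct rand_pool_info as the ioctl expects:
    int entropy_count; int buf_size; followed by the buffer itself."""
    return struct.pack("ii", entropy_bits, len(data)) + data

def credit_entropy(data: bytes, entropy_bits: int):
    """Mix `data` into the kernel pool AND credit it with
    `entropy_bits` of entropy. Requires root."""
    with open("/dev/random", "wb") as f:
        fcntl.ioctl(f, RNDADDENTROPY, rand_pool_info(data, entropy_bits))
```

The entropy_bits argument is where the generator’s honesty matters: a daemon that over-credits low-quality data is worse than one that only mixes without crediting.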
I’ll ask, but audio_entropyd seems to be a different implementation which doesn’t blindly copy data until it meets sufficient entropy measures.
The audio data is not copied as is but is first ‘de-biased’ and analysed to determine how many bits of entropy are in it.
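The standard technique behind this kind of ‘de-biasing’ is the von Neumann corrector. A minimal sketch of it (Python; this is the textbook algorithm, not audio_entropyd’s actual code, which may differ):

```python
def von_neumann_debias(bits):
    """Classic von Neumann corrector: walk non-overlapping pairs of
    input bits, emit 0 for the pair (0,1) and 1 for (1,0), and
    discard (0,0) and (1,1). For independent samples this removes
    bias completely, at the cost of discarding >= 75% of the input."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)  # (0,1) -> 0, (1,0) -> 1
    return out
```

Note the corrector only removes bias, not correlation; correlated samples (like adjacent audio values) still need conservative entropy estimation afterwards.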
Not clearly known. I’ll need to ask the author to confirm. The wiki description seems to describe differing implementations of a similar concept. I guess each unique algorithm will have its own take on the same source and add more noise?
Then let’s change the recommendation. A lot of damage can be done if an attacker roots the host. They can spy using the speakers and even HDD platters, so removing access to devices in the TCB is of marginal benefit.
I don’t mind testing, but we must have some entropy measure to gauge the effectiveness.
I am skeptical about the approach of fetching entropy from remote sources.
What’s the intended benefit for users, and what use do others read into it? The benefit intended by the author might be “make my cloud image work”, while security-interested users might think “it’s to improve the security/entropy of my system”. Therefore we’d have to research that, and perhaps ask the author if the intended/tested use cases and limitations aren’t spelled out yet.
To use HTTPS securely (to connect to an entropy source) you need entropy to begin with (for the SSL session key). Well, I guess against adversaries who don’t attempt a MITM, this might even improve entropy.
Is it worth the added attack surface and the risk of exploitation by a compromised entropy source server?
The kernel is supposed to never worsen entropy, no matter whether a third-party predictable source is added and mixed with other legitimate entropy sources. However, how much do we want to trust this and add a potentially compromised third-party remote source into the mix?
Me too. It’s just mere curiosity. His package description made me wonder if it is possible to take an unsafe source of randomness and make it safe. I don’t think the added remote attack surface is worth it either, but it is something to learn from about how entropy works.
If that is the case, then theoretically the code could be implemented to support untrusted local HWRNGs and mitigate all risks while adding value.
clrngd.tar.gz - this patch adds code to clrngd so that it will fork itself into the background. This tar-ball also contains a Makefile (the original distribution did not).
clrngd is a daemon which adds entropy to the kernel entropy-driver buffers, which it creates by looking at the differences between several clocks in your workstation/server.
I have written a daemon program which I believe solves this problem. I wanted a name distinct from the existing “Timer entropy daemon” [2], developed by Folkert van Heusden, so I named mine maxwell(8), after Maxwell’s demon, an imaginary creature discussed by the great physicist James Clerk Maxwell. Unlike its namesake, however, my program does not create exceptions to the laws of thermodynamics.
The timer entropy daemon uses floating point math in some of its calculations. It collects data in a substantial buffer, 2500 bytes, goes through a calculation to estimate the entropy, then pushes the whole load of buffered data into random(4). My program does none of those things.
It uses a method related to this module’s (jitter in timing data between usleeps), but it is inefficient and only suitable for bulk feeding of an entropy pool. Even after von Neumann debiasing, the output has distinct patterns and at most 0.5 bits of entropy per output bit. HAVEGE is a superior overall solution. However, note a number of other links at the site for other sources as well as links to hardware RNGs.
The timer entropy daemon uses timing jitter over sleeps to produce entropy, much like MAXWELL. This daemon is even more lightweight than MAXWELL, and makes no attempt to spice things up by doing small calculations; rather, it only uses a 100 µs sleep wrapped by gettimeofday(2) sampling.
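The mechanism described here, timing a short sleep with a high-resolution clock and keeping the noisy low bits of the measured duration, can be sketched as follows (Python, with time.monotonic_ns standing in for gettimeofday(2); the parameters are illustrative, not the daemon’s actual values):

```python
import time

def jitter_bits(n_samples=64, sleep_s=100e-6):
    """Collect raw jitter bits: sleep ~100 us, measure the actually
    elapsed time in nanoseconds, and keep the least-significant bit
    of each delta. The raw stream is biased and correlated; it would
    still need debiasing/whitening before being credited as entropy."""
    bits = []
    for _ in range(n_samples):
        t0 = time.monotonic_ns()
        time.sleep(sleep_s)
        bits.append((time.monotonic_ns() - t0) & 1)
    return bits
```

This makes the earlier criticism concrete: the scheduler and timer granularity leave patterns in these deltas, which is why such bulk sources get conservative entropy estimates.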