I am not sure. Does the kernel do its own debiasing? This is a critical question.
time twuewand --quiet --bytes 1024 --no-debias takes just 11 seconds. That would make it fit (or at least more likely to be fit) for doing this by default during boot.
[1] Also, writing to /dev/random should never make it less secure. This is a kernel feature.
Unless, perhaps, a less secure source of entropy “gets credited”. (see below)
Add some additional entropy to the input pool, incrementing the entropy count. This differs from writing to /dev/random or /dev/urandom, which only adds some data but does not increment the entropy count. The following structure is used:

    struct rand_pool_info {
        int   entropy_count;
        int   buf_size;
        __u32 buf[0];
    };

Here entropy_count is the value added to (or subtracted from) the entropy count, and buf is the buffer of size buf_size which gets added to the entropy pool.
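To make that structure concrete, here is a hedged Python sketch of how a rand_pool_info payload could be packed. The ioctl number is derived from _IOW('R', 0x03, int[2]) in <linux/random.h> for common Linux platforms; actually issuing RNDADDENTROPY requires root and is only shown as a comment.

```python
import struct

# RNDADDENTROPY = _IOW('R', 0x03, int[2]) on Linux (assumption: x86-64 encoding)
RNDADDENTROPY = 0x40085203

def pack_rand_pool_info(entropy_bits: int, buf: bytes) -> bytes:
    """Pack a struct rand_pool_info: int entropy_count, int buf_size, __u32 buf[].

    entropy_count is the number of bits to credit; buf is padded to a
    multiple of 4 bytes so it fits the __u32 array.
    """
    padded = buf + b"\x00" * (-len(buf) % 4)
    return struct.pack("ii", entropy_bits, len(padded)) + padded

payload = pack_rand_pool_info(32, b"\x01\x02\x03\x04")
assert len(payload) == 12  # two 4-byte ints + 4 buffer bytes
# As root, crediting would then be roughly:
#   fcntl.ioctl(os.open("/dev/random", os.O_WRONLY), RNDADDENTROPY, payload)
```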
I think it is safer to improve the actual entropy but not to credit it. The rationale: whatever is developed here must never worsen entropy, and the only prerequisite for that is that assumption [1] actually holds. If entropy were credited and the added entropy were flawed, security could actually be worsened. Therefore I am following the same strategy that systemd implements - adding entropy but not crediting it - just to stay safe. Better for entropy, but no gain in performance. [2]
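A minimal sketch of that strategy, assuming only that /dev/random is writable: per random(4), a plain write() mixes data into the input pool without incrementing the entropy count, so flawed input cannot reduce the pool's credited entropy. The path parameter exists purely for illustration and testing.

```python
import os

def seed_pool_uncredited(data: bytes, path: str = "/dev/random") -> int:
    """Mix data into the kernel entropy pool WITHOUT crediting entropy.

    A plain write() to /dev/random stirs the data in but leaves the
    entropy count unchanged (see random(4)); no root privileges needed
    as long as the device is world-writable.
    """
    with open(path, "wb") as f:
        return f.write(data)

# Example: mix in 16 bytes from some external gatherer (os.urandom is
# only a stand-in here for twuewand output).
if os.access("/dev/random", os.W_OK):
    seed_pool_uncredited(os.urandom(16))
```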
[2] No faster boot times and no better defense against entropy depletion. The latter is probably rather theoretical nowadays, with jitterentropy_rng and haveged already installed by default.
I don’t think there’s a fragile time window. We’re distrusting RDRAND already. The kernel will block /dev/random until it is ready and of sufficient quality. Any new entropy gathering daemons could block booting until systemd sysinit.target, or something even earlier, is done.
If we added extra entropy and credited it, the process would be faster but less secure. I don’t think we should make the process faster at the expense of higher risk. That would be possible, but it would require this solution to generate traction and peer review.
If we added extra entropy without crediting it:
in the best case: entropy quality increases
in the worst case: we waste CPU cycles, increase the amount of Whonix source code, and waste time, accomplishing no entropy quality increase but also no entropy quality degradation.
Main goal: improve entropy quality.
Non-goal: improving boot time (by crediting entropy).
Goal: not worsening boot time so much that nobody wants to use Whonix anymore.
Price to pay: slightly increased boot time / system load.
Yes, additionally, but not as a replacement. If I remember correctly, somewhere you wrote “our best bet is to make use of as many entropy sources as we can”? I still very much agree with that.
twuewand is truerand based. twuewand could be replaced with a different implementation that is also based on truerand but has better performance.
timer_entropyd by Van Heusden, though, is not truerand based. At least I cannot trivially find any reference to that in the source code.
The clrngd fork by Van Heusden is truerand based.
In conclusion: one implementation based on truerand, plus other entropy sources based on other devices (preferably) or other algorithms.
I see. I asked the questions before you posted, though:
sounds awfully similar to timer jitter concept?
Clock randomness gathering daemon
The Clock randomness gathering daemon gathers system randomness from fluctuations between different physical high-frequency clocks in a system. The randomness is tested with FIPS, and if this is successful, fed into the system entropy pool. It is especially useful for systems without real hardware random number generators.
I am not sure twuewand and timer_entropyd are jitter based. “Something with timers”, but I still need to learn more.
I don’t understand the differences yet. Which ones are different, how, which are worth combining, and which aren’t.
We are considering twuewand to combat the problem of distros enabling trust of hardware CPU RNGs, which in some cases are broken and output repeating numbers, or are outright malicious.
Since this is a TRNG as opposed to a PRNG, could twuewand be used in place of RDRAND? Could a system be instructed not only to distrust RDRAND, which we do already, but also to use twuewand in its place for all applications that need it? For example, take an OpenVPN server that needs good quality random numbers.
By installing and configuring twuewand, the entropy quality might get better. All applications would benefit from that without further action required.
You think I should add python3 to the top of just the twuewand files? The setup does not like it when I try to build with python3 because of dependency issues.
Thanks for your patience, as this is new territory
(Read “debias”, not “Debian”. Easily misread; I almost wrote Debian myself instead of debias.)
Before I knew that, I had started creating two files, each 1 MB in size, containing entropy created by twuewand without debiasing. Just sharing for the fun of it, since I already produced the results.
twuewand --no-debias --bytes 1000000
Generated 1000000.0 bytes, output 1000000
real 234m
user 1853m
sys 0m34s
And with twuewand with debiasing:
twuewand --bytes 1000000
Generated 1000000.1 bytes, output 1000000
real 1158m
user 9136m
sys 2m
Debiasing takes around 5 times longer. And twuewand takes a very long time to create 1 MB of random data anyhow.
Stephan’s is a general statement, not really applying to twuewand per se:
A Von-Neumann unbiaser is good IFF we have stochastically independent values.
Otherwise, a Von-Neumann unbiaser can be a disaster. Such independence,
however, is commonly not given.
Therefore we can’t conclude de-biasing is useless in that case.
We might have to ask really specifically whether “is good” means “is required” or “is recommended”.
I don’t know whether the definition of “stochastically independent” applies to twuewand.
How would we know if output by twuewand has “stochastically independent” values?
Otherwise, a Von-Neumann unbiaser can be a disaster.
This means, to err on the side of caution: don’t use it.
Such independence, however, is commonly not given.
Seems more likely that most entropy inputs aren’t “stochastically independent”. We still don’t know the specific case of twuewand.
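For reference, the Von Neumann unbiaser under discussion is tiny; here is a sketch in Python. It assumes - and this is exactly the contested point - that consecutive input bits are stochastically independent.

```python
def von_neumann_debias(bits):
    """Von Neumann unbiaser: consume bit pairs, emit one bit per unequal pair.

    For independent bits with fixed bias p, the output is unbiased, but only
    p*(1-p) output bits are produced per input bit on average (at best 1/4).
    If consecutive bits are correlated (not stochastically independent), the
    output can still be biased -- the failure mode Stephan warns about.
    """
    out = []
    it = iter(bits)
    for a, b in zip(it, it):  # non-overlapping pairs
        if a != b:
            out.append(a)     # 01 -> 0, 10 -> 1; 00 and 11 are discarded
    return out

# A heavily biased but independent stream still yields unbiased output:
print(von_neumann_debias([1, 1, 0, 1, 1, 0, 0, 0]))  # prints [0, 1]
```

Half of all pairs are discarded even for perfectly unbiased input, which matches the roughly 5x slowdown observed with --no-debias removed above.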
He hasn’t opposed this statement of mine:
/dev/random is still world-writable by any unprivileged user. Simply writing to /dev/random should never worsen entropy quality, so the current understanding goes.
Which could be interpreted as a free ticket to drop debiasing.
I guess it would need specific review by knowledgeable eyes; otherwise it will be a vague discussion.
twuewand’s readme has big red warnings about not de-biasing; it pretty much says its entropy quality will be useless in that case. Better to have fewer bits generated, cleaned with something other than Von Neumann, than no cleaning done whatsoever.
Yeah, I don’t doubt that it won’t make entropy worse, but with the effort put into preparing it for our use we should really make it count instead of having an illusion of safety.
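One alternative “cleaning” step of the kind alluded to above is cryptographic conditioning: hash a larger input down to fewer output bits. Unlike Von Neumann pairing, this does not assume independent input bits. A sketch (not anything twuewand implements; the 4:1 compression ratio is an arbitrary assumption):

```python
import hashlib

def condition(raw: bytes, ratio: int = 4) -> bytes:
    """Compress gathered bytes ratio:1 through SHA-256 blocks.

    A hash-based conditioner does not require stochastically independent
    input bits; it concentrates whatever entropy the input carries into
    fewer output bytes. Choosing the ratio still requires an estimate of
    the input's min-entropy, which is the hard part.
    """
    out = b""
    step = 32 * ratio  # consume ratio * digest_size input bytes per digest
    for i in range(0, len(raw) - step + 1, step):
        out += hashlib.sha256(raw[i:i + step]).digest()
    return out

data = condition(bytes(range(256)))  # 256 raw bytes -> 64 conditioned bytes
```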
On Thursday, 6 February 2020 at 12:11:52 CET, Patrick Schleizer wrote:
Hi Patrick,
How would I know if any entropy generator produces stochastically independent values?
By default, assume that they are not independent. Only with an entropy analysis and a rationale for why events should be stochastically independent may you assume that it is so.
and twuewand comparing time from CPU with time from RTC.
Stephan Mueller @smuellerDD, the author of jitterentropy-rng who also reworked Linux kernel entropy handling, replied:
This sounds like a high-res clock being sampled by a low-res clock like ring oscillators do that. I had once played with this idea but discarded it for a reason.
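The idea Stephan describes - deriving bits from jitter between a high-resolution clock and coarser events - can be illustrated with a toy sketch. This is the concept only, NOT a vetted entropy source; time.perf_counter_ns stands in for the high-resolution clock.

```python
import time

def jitter_bits(n: int) -> list:
    """Toy illustration of timer-jitter sampling: do a small burst of work
    whose duration jitters, then keep the least significant bit of a
    high-resolution clock. Real designs (e.g. jitterentropy-rng) add
    health tests, conditioning, and a careful entropy analysis.
    """
    bits = []
    x = 0
    for _ in range(n):
        for i in range(1000):  # work loop; execution time varies slightly
            x += i
        bits.append(time.perf_counter_ns() & 1)
    return bits

sample = jitter_bits(64)  # 64 raw, unconditioned sample bits
```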
Seems like twuewand has been obsoleted by jitterentropy-rng. Since the author of twuewand was also cc’d in that discussion, until there is other information I am discarding twuewand as an option.