/dev/random vs. /dev/urandom

So in the case of Whonix / Kicksecure we can use /dev/random all the way.

Yes we can. What custom software are you thinking of?

  • swap-file-creator
  • tirdad
  • bootclockrandomization
  • uwt time privacy

Can the chapter Viewpoint: /dev/random is obsolete be deleted? @HulaHoop

What facts are we endorsing if you remove it?

I think we should append an explanation of what we do in Whonix and why doing it that way is better than a default distro?


Viewpoint: better use /dev/random

Already done above on that page?

I think it is important to keep the entire thought process documented for completeness. This gives important context for the conclusion, with both sides considered.


Makes sense.

Viewpoint: /dev/random is obsolete could be marked as obsolete rather than kept as if it were still an equally valid position where nobody is really sure which one is right?

Let me know if it’s still contested, but I guess Viewpoint: better use /dev/random is correct.


Viewpoint: better use /dev/random

I feel in that case it should be compressed into a footnote and cited as obsolete in the writeup "Viewpoint: better use /dev/random". This way it won’t be considered equal or cause confusion.

Re-reading the argument for /dev/random, I feel it only helps in a very limited corner case: programs wanting entropy from an empty entropy pool. It does nothing to protect against the pool having a garbage or malicious seed.

For the former, GitHub - rfinnie/twuewand: A truerand algorithm for generating entropy would help. I can open a ticket if you decide to package it down the line so the topic doesn’t get buried.
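For context, the truerand idea that twuewand implements can be sketched roughly like this (an illustration only, not twuewand’s actual code): spin a counter until a short timer fires and keep only the least significant bit of the count, relying on the drift between the CPU clock and the timer clock.

```c
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static volatile sig_atomic_t expired;

static void on_alarm(int sig) { (void)sig; expired = 1; }

/* One raw truerand bit: increment a counter until a ~1 ms timer fires and
 * keep only the least significant bit.  The CPU clock and the timer clock
 * drift against each other, which makes that bit hard to predict. */
static int truerand_bit(void)
{
    struct itimerval t = { { 0, 0 }, { 0, 1000 } };  /* fire once after ~1 ms */
    volatile unsigned long counter = 0;

    expired = 0;
    signal(SIGALRM, on_alarm);
    setitimer(ITIMER_REAL, &t, NULL);
    while (!expired)
        counter++;
    return (int)(counter & 1);
}

int main(void)
{
    /* Collect 16 raw bits; a real implementation whitens them (Von Neumann
     * debiasing, hashing) before treating them as entropy. */
    for (int i = 0; i < 16; i++)
        printf("%d", truerand_bit());
    printf("\n");
    return 0;
}
```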

Edit by Patrick:


2 posts were merged into an existing topic: Moar Entropy Sources

7 posts were merged into an existing topic: Moar Entropy Sources

2 posts were split to a new topic: twuewand - a truerand algorithm for generating entropy - Whonix integration

Fundamentally, developers shouldn’t be working with low-level cryptographic APIs unless they are experts in cryptography. If you’re working with cryptography in your application, use libsodium.
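For illustration, obtaining random bytes via libsodium looks roughly like this (a minimal sketch; randombytes_buf() is seeded from the kernel’s CSPRNG):

```c
/* Build with: gcc example.c -lsodium */
#include <sodium.h>
#include <stdio.h>

int main(void)
{
    unsigned char key[32];

    if (sodium_init() < 0)            /* initialize the library and its RNG */
        return 1;

    randombytes_buf(key, sizeof key); /* fill the buffer with random bytes */

    for (size_t i = 0; i < sizeof key; i++)
        printf("%02x", key[i]);
    printf("\n");
    return 0;
}
```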


Addressing the /dev/random cargo cult:

Use getrandom(). It’s the recommended way of obtaining random bytes and is used by Go’s crypto/rand, libsodium, hardened_malloc to seed its ChaCha8 CSPRNG, and others.

For older kernels, read a byte from /dev/random, then use /dev/urandom.
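A minimal sketch of that approach, assuming glibc ≥ 2.25 for the getrandom() wrapper (get_random_bytes() here is just an illustrative helper name, not an existing API):

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/random.h>   /* getrandom(), glibc >= 2.25 */
#include <unistd.h>

/* Fill buf with len random bytes; returns 0 on success, -1 on failure. */
static int get_random_bytes(void *buf, size_t len)
{
    ssize_t n = getrandom(buf, len, 0);  /* blocks only until the pool is initialized */
    if (n == (ssize_t)len)
        return 0;

    if (n < 0 && errno == ENOSYS) {
        /* Kernel older than 3.17: block on one byte from /dev/random so we
         * wait for the pool to be seeded, then read the rest from /dev/urandom. */
        char one;
        int fd = open("/dev/random", O_RDONLY);
        if (fd < 0 || read(fd, &one, 1) != 1)
            return -1;
        close(fd);

        fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0 || read(fd, buf, len) != (ssize_t)len)
            return -1;
        close(fd);
        return 0;
    }
    return -1;
}

int main(void)
{
    unsigned char key[32];

    if (get_random_bytes(key, sizeof key) != 0)
        return 1;
    for (size_t i = 0; i < sizeof key; i++)
        printf("%02x", key[i]);
    printf("\n");
    return 0;
}
```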

Linux 5.6 removed the blocking pool from /dev/random and made it functionally equivalent to getrandom(), but there’s no reason to use it when you can use getrandom().

/dev/random was based on the dated idea that entropy depletes. That isn’t true, and it leads developers to devise weird workarounds to solve the problem.

You should remove the whole section that recommends the use of /dev/random for cryptographic use cases.


That section might need an update for Linux 5.6.

Please don’t delete it yet. Feel free to add a link or opinion there. I need more time to research this.

swap-file-creator/usr/share/swap-file-creator/swap-file-creator at master · Kicksecure/swap-file-creator · GitHub reads from /dev/random, but it’s written in bash, so libsodium is not an option.

Seems like this actually would have fit better here:

At first thought, it seems that on a later kernel version getrandom() without GRND_INSECURE makes sense. Otherwise, when it’s not possible to use getrandom(), keep using /dev/random.
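To illustrate the distinction, assuming a ≥ 5.6 kernel and headers that define GRND_INSECURE:

```c
#include <stdio.h>
#include <sys/random.h>
#include <sys/types.h>

int main(void)
{
    unsigned char buf[32];

    /* Default (flags = 0): blocks until the kernel's pool is initialized,
     * then returns cryptographically secure bytes. Preferred. */
    if (getrandom(buf, sizeof buf, 0) != (ssize_t)sizeof buf)
        return 1;

    /* GRND_INSECURE (Linux >= 5.6): never blocks, may return data from a
     * not-yet-initialized pool; only acceptable for non-cryptographic uses. */
    if (getrandom(buf, sizeof buf, GRND_INSECURE) != (ssize_t)sizeof buf)
        return 1;

    puts("ok");
    return 0;
}
```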

Quote from Entropy, Randomness, /dev/random vs /dev/urandom, Entropy Sources, Entropy Gathering Daemons, RDRAND:

Proponents of the viewpoint that “/dev/random is obsolete, use /dev/urandom, always” should explain:

  • Why Linux offers both /dev/random and /dev/urandom, and why, if they are “really the same”, one isn’t just a symlink to the other.
  • Why Linux does not use the same code paths for /dev/random and /dev/urandom. Why have this distinction in the first place?

That’s an interesting link. I am going to read it. At first sight, seems it would be good to be added to the collection here:

Also generally that page could use improvements.

I am making this post to avoid overly eager deletions of anything on that wiki page before I have had a chance to catch up.

source: Improve entropy collection in VMs · Issue #673 · QubesOS/qubes-issues · GitHub

Stephan Mueller @smuellerDD, the author of jitterentropy-rng who also re-worked Linux kernel entropy handling, said:

Yes, /dev/random or getrandom(2) blocks until it thinks it has sufficient entropy.

/dev/urandom should NOT be used for cryptographic purposes as it has no guarantee that sufficient seed is available.

author of:

https://www.chronox.de/lrng.html

Documentation and Analysis of the Linux Random Number Generator for the German government (BSI)

I asked:

Relevant quotes…

Would you like to have a look at Entropy, Randomness, /dev/random vs /dev/urandom, Entropy Sources, Entropy Gathering Daemons, RDRAND and comment?

More from Stephan Mueller @smuellerDD, source: Efficacy of jitterentropy RNG in Xen · Issue #6 · smuellerDD/jitterentropy-rngd · GitHub

Sure. The entire discussion around /dev/random vs /dev/urandom is quite
convoluted. Starting with 5.6 it should be simplified entirely:

/dev/random (and getrandom(2) when invoked without flags) and /dev/urandom are
identical! The ONLY difference there is can be summarized:

  • /dev/random (and getrandom(2) without the flag to behave like /dev/urandom)
    guarantees that at least 128 bits of entropy is available before it returns
    data (based on the entropy heuristics the kernel applies)

  • /dev/urandom always returns data irrespective whether sufficient seed is
    returned.

Beyond this, there is no difference.

IMHO there is one aspect missing in the kernel: /dev/random should return to
be blocked if it cannot be reseeded with sufficient seed at some time. I have
applied this in my LRNG /dev/random replacement.

The part "IMHO there is one aspect missing in the kernel: /dev/random should return to be blocked if it cannot be reseeded with sufficient seed at some time. I have applied this in my LRNG /dev/random replacement." sounds like an awful security regression in Linux kernel 5.6.

It isn’t an “awful security regression”. That isn’t how the CSPRNG works. Once you initially seed the RNG, it is suitable for cryptographic applications; reseeding adds some forward secrecy but isn’t really needed. The premise behind the CSPRNG is that cryptography isn’t broken, and the only applications that require cryptographically secure numbers are cryptographic ones.

I highly suggest you get your information from reputable cryptographers such as Thomas Pornin, Filippo Valsorda, and Daniel J. Bernstein.

That article, while intended to argue “pro /dev/urandom”, is actually quite eloquent on why not to do that. Quoting from the article:

Since at least the early 2000s, Linux distributions have applied workarounds to ensure proper entropy at boot time, namely that a boot script injects the contents of a saved file upon boot, and immediately proceeds to regenerate the said file with /dev/urandom. In effect, this transports the entropy across reboots, so that even if the boot sequence was not enough, by itself, to generate enough entropy, the file contents would ensure that everything is all right.

Yes, great, but we should scrutinize whether that is working. Known cases where systemd-random-seed.service approaches (restoring from a previously saved entropy seed file) don’t work are the first boot and read-only media (Live DVD).
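For reference, the seed-file workaround the article describes amounts to roughly the following (a simplified sketch; the SEED_FILE path is hypothetical, and note that writing to /dev/urandom mixes data into the pool without crediting entropy, which would require the RNDADDENTROPY ioctl):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define SEED_FILE "/var/lib/random-seed"   /* hypothetical path */
#define SEED_SIZE 512

/* At boot: feed the saved seed into the pool.  Writing to /dev/urandom mixes
 * the data in but does not credit entropy. */
static int load_seed(void)
{
    unsigned char seed[SEED_SIZE];
    int in = open(SEED_FILE, O_RDONLY);
    int out = open("/dev/urandom", O_WRONLY);
    ssize_t n;

    if (in < 0 || out < 0)
        return -1;
    n = read(in, seed, sizeof seed);
    if (n > 0)
        write(out, seed, (size_t)n);
    close(in);
    close(out);
    return 0;
}

/* Immediately afterwards: regenerate the seed file, so a crash or a replayed
 * VM snapshot does not reuse the same seed on the next boot. */
static int save_seed(void)
{
    unsigned char seed[SEED_SIZE];
    int in = open("/dev/urandom", O_RDONLY);
    int out = open(SEED_FILE, O_WRONLY | O_CREAT | O_TRUNC, 0600);

    if (in < 0 || out < 0)
        return -1;
    if (read(in, seed, sizeof seed) == (ssize_t)sizeof seed)
        write(out, seed, sizeof seed);
    close(in);
    close(out);
    return 0;
}

int main(void)
{
    return (load_seed() == 0 && save_seed() == 0) ? 0 : 1;
}
```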

The article goes on…

there are times when the entropy pool is really empty, namely during the early stages of the boot. At that point, the kernel did not obtain many physical events to work on, and it is conceivable that /dev/urandom output could be predicted.

Exactly this is an issue. So, avoid /dev/urandom, use /dev/random and perhaps getrandom(2) (depending on kernel version).

There are now a few extra relevant points to make:

  • Virtual machines are a challenge to entropy gathering, in at least three ways:
    • They provide access to virtual, emulated hardware only. The nice physical events from which entropy is supposed to come (thermal noise, mostly) are then just a simulation, and that which is simulated can, indeed, be simulated.
    • The hypervisor can prevent access to the cycle counter (rdtsc opcode), which will further hinder attempts by the kernel to get entropy from the (not so) physical events.
    • VM snapshots can be taken and replayed at will; each restart from the same snapshot will use the recorded pool contents.

Indeed. We should make sure that such issues are handled as well as possible.

  • A contrario, sufficiently recent CPU have an embedded hardware generator which is totally available from VM (it’s the rdrand opcode on x86 CPU). The Linux kernel uses rdrand. It does not trust rdrand, because NSA (I’m not exaggerating! The kernel source code explicitly calls out the NSA), so it will not count the rdrand output as worth any entropy. But it will still use it. In all edge cases described above (network boot, VM snapshots…), rdrand will by itself ensure that there is enough entropy for all practical purposes.
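As an aside, the rdrand opcode mentioned in the quote can also be read directly from userspace via a compiler intrinsic (a minimal sketch assuming an x86-64 CPU with RDRAND and GCC/Clang with -mrdrnd; this only illustrates the opcode, not how the kernel consumes it):

```c
#include <immintrin.h>
#include <stdio.h>

/* Build with: gcc -mrdrnd rdrand.c */
int main(void)
{
    unsigned long long value;

    /* _rdrand64_step() returns 1 on success, 0 if the hardware RNG could not
     * deliver a value right now (callers typically retry a few times). */
    if (_rdrand64_step(&value) != 1) {
        fprintf(stderr, "RDRAND unavailable\n");
        return 1;
    }
    printf("%016llx\n", value);
    return 0;
}
```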

The kernel in Debian trusts / enables / "credits" entropy from RDRAND by default since Debian buster. I added the reference here: RDRAND

The reference contains links that make the case why RDRAND should not be trusted.

Whonix / security-misc flips the setting to "distrust" RDRAND, i.e. it disables crediting of entropy from RDRAND. In other words, RDRAND is not "credited" in Whonix. (RDRAND isn’t fully disabled, which I am not sure is possible; it wasn’t suggested and should not be an issue if the kernel theory holds true that even malicious entropy sources are OK if mixed with legitimate entropy sources.)

related forum discussion:
RDRAND - entropy CONFIG_RANDOM_TRUST_CPU yes or no? / rng_core.default_quality

I’ve recently made the argument at Qubes to "distrust" (which actually just means "don’t credit") RDRAND:

Since RDRAND conceptually cannot be the only solution, we should look further. → Moar Entropy Sources

the early boot moments we are talking about are before there is any notion of a file; this is really about a single case, which is booting a diskless machine over the network, and mounting the root filesystem from a remote server. The relevant network protocol can need some randomness (e.g. TCP sequence numbers).

I don’t like that.

It should either be done securely, blocking if necessary (blocking by default), or be opt-in and non-blocking with known security risks.

There should be no in-between "maybe random, maybe predictable" at any time.

The critical issue is “Once”. If it is conceivable that [random source] output could be predicted in any corner case, that’s an awful security regression.

Entropy quality is a serious issue…

We performed a large-scale study of RSA and DSA cryptographic keys in use on the Internet and discovered that significant numbers of keys are insecure due to insufficient randomness.

Most critically, we found that the Linux random number generator can produce predictable output at boot under certain conditions,

Get information, yes. I am quoting and referencing Filippo Valsorda and Daniel J. Bernstein in /dev/random vs. /dev/urandom, and Thomas Pornin in this forum post, so I was obviously reading their related posts in full. Uncritically do as they say without cross-checking and my own considerations, no, because they disagree with each other on some points. Thomas Pornin is uncritical of RDRAND in On Linux’s Random Number Generation | NCC Group Research Blog, as quoted earlier in this forum post. On the other hand, D. J. Bernstein isn’t a fan of RDRAND.

https://git.kernel.org/pub/scm/linux/kernel/git/crng/random.git/commit/?id=2ad310f93ec3d7062bdb73f06743aa56879a0a28


Excellent news!

Noted in wiki: