risks of writing to /dev/random, crediting entropy, RNDADDENTROPY related to untrusted root

Since /dev/{,u}random are writable, would it be possible for an attacker to feed those devices bad entropy, so that the attacker could then e.g. decrypt a connection?

If so, it’d be best to allow only read access, for the same reason we deny write access to SSL certificates.


That’s a good question which I will try to answer: AFAIK, when entropy is fed into a pool, the source is not really that important. Assuming the initial seeding was successful, even a source feeding low-quality data into the pool would not degrade its overall quality. Sometimes entropy is fed into the aggregate pool and nothing happens, with no increase in the overall bit count observed; other times entropy is fed and the total number of bits increases. As I understand it, it is most likely not possible to detract from the quality already in there — assuming the OS boots without blocking, that is. There was a similar discussion about rdrand and whether it could negatively affect the pool even if it was not completely random; the consensus seemed to be no.
Although, thinking about it some more, could the initial seeding be maliciously fed? With a boot-time “bad seed,” any crypto derived thereafter would be compromised, if such a thing is in fact possible.
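The mixing property described above can be sketched with a toy hash-based pool. This is only an illustration of the argument, using SHA-256 as a stand-in for the kernel’s actual (different) mixing function:

```python
import hashlib
import os

def mix(pool: bytes, data: bytes) -> bytes:
    """Toy pool update: hash the old state together with new input."""
    return hashlib.sha256(pool + data).digest()

# Pool seeded with data the attacker does not know.
secret_pool = mix(b"\x00" * 32, os.urandom(32))

# Attacker feeds a large amount of known, low-quality input afterwards.
attacked = mix(secret_pool, b"A" * 4096)

# The result still depends on the unknown prior state: an attacker who
# knows only their own input cannot reconstruct the pool.
attacker_guess = mix(b"\x00" * 32, b"A" * 4096)
assert attacked != attacker_guess
```

The key property is that the update is one-way and keyed by the prior state, so known input cannot subtract unpredictability that is already in the pool — which is exactly why the initial seeding is the critical moment.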


Our threat model includes preventing even a compromised init from doing a lot of damage.

An attacker may be able to compromise init, feed a large amount of bad entropy into /dev/{,u}random during early boot and then use that to compromise e.g. the connection used to download kernel sources and backdoor them. Signature verification has had vulnerabilities in the past, so we don’t want to rely entirely on it.

Although, I’m still not sure whether /dev/{,u}random can have much of an effect on the overall entropy, even during early boot.


/dev/{,u}random don’t seem to be the only problem here. There are also a few ioctls for managing the entropy pool, such as RNDADDENTROPY (which adds more entropy to the pool).


I’m not sure if these require write access to the devices or just the CAP_SYS_ADMIN capability.

Looking at the code, they don’t seem to require write access to the device but I’m not certain.


I don’t really understand RNDADDTOENTCNT, RNDZAPENTCNT and RNDCLEARPOOL. Would zeroing the entropy count zero the entire pool and deplete the system of entropy?

If so, those ioctls are massive risks. We can patch them out if needed.

If only AppArmor could filter ioctls like SELinux can.


Writing to /dev/random is generally considered safe.

Crediting entropy can be dangerous, i.e. ioctls such as RNDADDENTROPY can be dangerous if the randomness added is not secret and/or is predictable. Only root can use RNDADDENTROPY. I don’t know if it requires any capabilities; if it does not, then this can indeed be an issue in the context of Untrusted Root - improve Security by Restricting Root and AppArmor for Complete System - Including init, PID1, Systemd, Everything! - Full System MAC policy.
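For concreteness, here is a sketch of what a user-space RNDADDENTROPY caller looks like. The ioctl takes a struct rand_pool_info (an entropy credit in bits, a buffer size, and the buffer itself); the ioctl number below is the x86-64 value derived from _IOW('R', 0x03, int[2]) in <linux/random.h>. Actually calling this needs privileges — this is exactly the operation under discussion:

```python
import fcntl
import struct

# _IOW('R', 0x03, int[2]) on x86-64; see <linux/random.h>
RNDADDENTROPY = 0x40085203

def pack_rand_pool_info(data: bytes, credit_bits: int) -> bytes:
    """Build a struct rand_pool_info:
    { int entropy_count; int buf_size; __u32 buf[]; }"""
    return struct.pack(f"ii{len(data)}s", credit_bits, len(data), data)

def add_entropy(data: bytes, credit_bits: int) -> None:
    """Mix `data` into the kernel pool AND credit `credit_bits` bits
    to the entropy estimate. Unlike a plain write() to /dev/random,
    this raises the entropy counter, which is why it is restricted."""
    buf = pack_rand_pool_info(data, credit_bits)
    with open("/dev/random", "wb") as f:
        fcntl.ioctl(f.fileno(), RNDADDENTROPY, buf)
```

The contrast with writing to the device is the whole point: a plain write only mixes data in, while this ioctl also tells the kernel to trust it.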

For testing purposes we might be able to use rndaddentropy - An RNDADDENTROPY ioctl wrapper. Testing this in Whonix / Kicksecure could be hard because the entropy counters there are always high: haveged / jitterentropy-rng (user space daemon and kernel module) / virtio-rng are “flooding” the pool and its counters, which would make experimenting with entropy-related changes difficult. An old, deprecated, insecure kernel version in an old virtualizer version (such as VirtualBox), offline, where there is very little entropy, might actually be better suited for experimenting with entropy counters.

First, we need to better understand what the legitimate use cases of these ioctls are and which legitimate programs currently make use of them. Likely users are haveged / jitterentropy-rng (user space daemon and kernel module) / virtio-rng, and perhaps kernel-internal code. Such patches need to be carefully reviewed. Potential side effects:

  • slow (or even broken) boot because RNDADDTOENTCNT is slow/broken
  • /dev/random entropy starvation after system started
  • low/zero entropy quality of /dev/urandom or even /dev/random
  • broken haveged / jitterentropy-rng (user space daemon and kernel module) / virtio-rng, and perhaps kernel-internal code

If we have good entropy / randomness related questions we can direct them at Stephan Mueller: author of a Linux in-kernel random number generator replacement and of jitterentropy-rng (a kernel module that landed in mainline Linux), and writer of entropy / randomness research papers for the German government. [1]

He may or may not be interested in adding capabilities to protect entropy-related interfaces. Coming from him, these would have a fair chance of landing in mainline Linux, I think. I cannot read his mind, but I guess it might make sense for him to wait for his new kernel random subsystem to be merged before adding new features / discussions / controversy on top.



Why bother, if while they are there they can see and exfiltrate plaintext from the machine?

OK, I hadn’t thought of this before, but they would still need to modify the package signing keychain to fool it into accepting the modified code. We don’t trust server-client connections for package authenticity anyhow; if that’s broken, we are screwed.


It requires CAP_SYS_ADMIN. We can’t remove that capability though since it’s widely used.

Very little is actually restricted to uid 0 anymore in Linux; everything is split up into capabilities.

Userspace daemons like haveged do use RNDADDENTROPY, AFAIK. We don’t have to patch that ioctl out entirely; we can restrict it to only the things that need it.

Internal kernel code (including the jitterentropy and virtio-rng modules) doesn’t use these ioctls. I grepped the entire source tree and nothing uses them.

Signature verification has been bypassed before. We don’t want to rely on it completely. Unencrypted connections are still dangerous.

Even ignoring that, sabotaging the entropy still opens up tons of new surveillance opportunities (e.g. the attacker can eavesdrop on browsing/messaging/etc.) and will weaken system security as a whole.

Root is isolated from other users; even init doesn’t have access to the user’s home directory, for example. More ways to further enhance the isolation were discussed here: AppArmor for Complete System - Including init, PID1, Systemd, Everything! - Full System MAC policy

I created a basic kernel patch to test this. It only enables those ioctls when CONFIG_RANDOM_UNSAFE_IOCTL is enabled. To disable them, compile your kernel with CONFIG_RANDOM_UNSAFE_IOCTL disabled.


I did some testing to see if the entropy decreased by a considerable amount.

“entropy” here being measured via the contents of /proc/sys/kernel/random/entropy_avail.
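A sketch of the kind of poll loop usable for such a measurement (an illustration, not the exact commands used):

```python
import time
from pathlib import Path

ENTROPY_AVAIL = Path("/proc/sys/kernel/random/entropy_avail")

def parse_entropy(text: str) -> int:
    """The proc file contains a single decimal counter (in bits)."""
    return int(text.strip())

def watch(seconds: int = 10) -> list:
    """Sample the kernel's available-entropy estimate once per second."""
    samples = []
    for _ in range(seconds):
        samples.append(parse_entropy(ENTROPY_AVAIL.read_text()))
        time.sleep(1)
    return samples
```

Note this file reports the kernel’s *estimate*, not measured randomness quality, which is why it fluctuates and why high counters (as with haveged flooding) say little on their own.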

With the ioctls enabled, entropy stays around 1180 - 1250.

With the ioctls disabled, entropy stays around 1220 - 1280.

Weirdly, the entropy seemed to increase with the ioctls disabled, but that’s highly likely just a coincidence, as the contents of /proc/sys/kernel/random/entropy_avail can fluctuate quite a bit.

The haveged systemd service failed with the message: haveged: RNDADDENTROPY failed!.

The jitterentropy systemd service worked fine.

The virtio-rng module seemed to work fine.

There was no noticeable slow down of boot and nothing else seemed to break.

Unless this gets reviewed by someone like Stephan or Ted Ts’o and is approved for inclusion upstream, it realistically stands little chance of deployment. Accepting changes with unforeseen consequences to something as sensitive as kernel entropy is malpractice.


I’m not saying to deploy it now. The patch is just for initial testing to see what happens.



Mixing even fully compromised entropy sources is considered secure in the current Linux kernel implementation. Though, D. J. Bernstein disagrees with that: cr.yp.to: 2014

@3hhh https://github.com/QubesOS/qubes-issues/issues/6941#issuecomment-939260649

Bernstein assumes that you have entropy sources that you trust and some that are less trustworthy. If that’s true, his advice to “stick with the single one you trust and ditch all other input” is correct (and you only need 256 bits or so, exactly once).

However the Linux guys assume that you don’t want to ultimately trust any of the entropy sources available to you (or are too uninformed to make the decision) and thus live with a few potential attacks.

I believe the latter is the more realistic view atm (Linux runs on many “suboptimal” devices). If you build your own hardware RNG and use that, Bernstein’s view is more accurate.
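Bernstein’s attack scenario can be made concrete with a toy hash-based pool (SHA-256 standing in for the kernel’s mixing function): a malicious *last* source that can observe the current pool state can grind its input until the pool output has a property of its choosing, even though each individual input looks random. The crux is the assumption that the source can read the pool state:

```python
import hashlib
import os

def mix(pool: bytes, data: bytes) -> bytes:
    # Toy pool update, standing in for the kernel's mixing function.
    return hashlib.sha256(pool + data).digest()

# Honest sources seed the pool with unknown data.
pool = mix(b"\x00" * 32, os.urandom(32))

# A malicious source that can READ the pool state grinds its
# contribution until the first bit of the new state is 0.
for i in range(1 << 16):
    candidate = mix(pool, i.to_bytes(2, "big"))
    if candidate[0] & 1 == 0:
        pool = candidate
        break

# The attacker has biased the pool: that bit is now always clear.
assert pool[0] & 1 == 0
```

Without the ability to observe the pool state, the same source gains nothing — which is why the two camps reach different conclusions from different trust assumptions.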
