It’s distrusted via CONFIG_RANDOM_TRUST_CPU anyway. See above. The HWRNGs seem to be in the CPU.
They aren’t popular; that is why they’re staging drivers and not ordinary drivers. They might not even work anyway.
No, the modules can probably be auto-loaded too. These modules are low quality, more likely to contain security vulnerabilities, and unlikely to be used by the average user.
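If auto-loading is the concern, such modules can be blocked via modprobe configuration without removing them from the kernel build. A minimal sketch, using r8188eu (one of the staging Realtek drivers discussed below) as an example; the filename is arbitrary. Note that `install ... /bin/false` defeats both alias-based auto-loading and explicit modprobe, whereas a plain `blacklist` line only stops alias-based auto-loading:

```
# /etc/modprobe.d/disable-staging-wifi.conf (illustrative filename)
install r8188eu /bin/false
```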
This driver provides kernel-side support for the Random Number Generator hardware found on Intel i8xx-based motherboards, AMD 76x-based motherboards, and
Autoload only for those who have such hardware, or autoload through some kind of trick by anyone?
Would it be useful for our upcoming hardened kernel, or generally, to enable the kernel config option CONFIG_CRYPTO_ANSI_CPRNG? (CONFIG_CRYPTO_ANSI_CPRNG=y)
This cipher is not really relevant any more - it is the ANSI X9.31 Appendix A2.4 RNG. The SP800-90A DRBGs are superior to the ANSI equivalent due to covering reseed requirements and similar. I see no reason to have that RNG enabled. Ciao Stephan
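Given that assessment, a hardened config would leave the X9.31 RNG off and keep the SP800-90A DRBG enabled. A sketch of the relevant .config fragment (option names taken from the kernel's crypto Kconfig; exact selections may vary between kernel versions):

```
# CONFIG_CRYPTO_ANSI_CPRNG is not set
CONFIG_CRYPTO_DRBG_MENU=y
CONFIG_CRYPTO_DRBG_HMAC=y
```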
How would I know if popular or not?
Yes, makes little sense in VM indeed.
(I wouldn’t say it makes no sense at all; maybe one day Whonix-Workstation could serve as a torified WiFi hotspot, but since that doesn’t exist, it is not important enough to consider.)
CONFIG_HW_RANDOM_VIA (was included in the pull request) is on the motherboard.
For starters, this is the hardware RNG framework making hardware RNGs
accessible via /dev/hwrng (code residing in drivers/char/hw_random).
The Intel RNG is a different RNG than RDRAND, but I am not fully sure which
hardware component would provide it (I guess some form of chipset that even
provides the QAT hardware crypto system).
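The framework's state can be inspected via sysfs. A small sketch, assuming the standard hw_random sysfs path (the nodes are simply absent when no HW RNG driver is loaded):

```shell
#!/bin/sh
# Show which HW RNGs the hw_random framework sees (rng_available) and
# which one currently feeds /dev/hwrng (rng_current). The nodes are
# absent on systems without a loaded HW RNG driver.
show_hwrng() {
    base="${1:-/sys/class/misc/hw_random}"
    for f in rng_available rng_current; do
        if [ -r "$base/$f" ]; then
            echo "$f: $(cat "$base/$f")"
        else
            echo "$f: not present"
        fi
    done
}
show_hwrng
```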
If a HW RNG driver sets the struct hwrng->quality integer (which declares that
it provides entropy), the HW RNG framework will deliver entropy via
add_hwgenerator_randomness to /dev/random. Otherwise /dev/random is not
affected by the HW RNG framework and its drivers.
Just grep for quality in the drivers directory to see which drivers set a value and
thus would increase the entropy in /dev/random. Then decide for yourself
whether you want that.
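The suggested grep can be wrapped in a small sketch; /usr/src/linux is an assumed default for the kernel source location, adjust to wherever your tree lives:

```shell
#!/bin/sh
# List hw_random driver lines that mention "quality", i.e. drivers that
# may set hwrng->quality and thus credit entropy to /dev/random via
# add_hwgenerator_randomness.
list_quality() {
    ksrc="${1:-/usr/src/linux}"   # assumed kernel source tree location
    if [ -d "$ksrc/drivers/char/hw_random" ]; then
        grep -rn "quality" "$ksrc/drivers/char/hw_random"
    else
        echo "kernel source not found at $ksrc"
    fi
}
list_quality
```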
As stated above, it would not contradict it as the noise source would be
completely different (dedicated hardware).
Does random.trust_cpu=off cover hardware random generators that are
located on the motherboard and not inside the CPU? (CONFIG_HW_RANDOM_VIA
says it is on the motherboard.)
No.
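For context: random.trust_cpu only governs whether early seeding credits the CPU's own RNG instructions (RDRAND/RDSEED); devices handled by drivers/char/hw_random are unaffected by it. A sketch of setting it persistently via GRUB (append to the existing options in /etc/default/grub, then run update-grub):

```
# /etc/default/grub (fragment)
GRUB_CMDLINE_LINUX="random.trust_cpu=off"
```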
Is CONFIG_HW_RANDOM_INTEL “same as” RDRAND?
No, see above.
In other words, would CONFIG_HW_RANDOM_INTEL=n result in disabling RDRAND? (I am not
suggesting to configure CONFIG_HW_RANDOM_INTEL=n. This is just for better
understanding.)
My worry is that loading these hardware random generator drivers early
could make the initial random seeding more likely to be compromised if
these hardware randomness generators are flawed / predictable /
backdoored.
Anecdotally, I am seeing Realtek WiFi chips in a bunch of laptops across different brands.
Also, Wikipedia indicates it has major market share, though those stats are a bit dated:
Notable Realtek products include 10/100M Ethernet controllers (with a global market share of 70% as of 2003) and audio codecs (AC’97 and Intel HD Audio), where Realtek had a 50% market share in 2003 and a 60% market share in 2004, primarily concentrated in the integrated OEM on-board audio market-segment.[3] As of 2013 the ALC892 HD Audio codec and RTL8111 Gigabit Ethernet chip have become particular OEM favorites.
There have been cases of people bricking their computers by accidentally deleting EFI variables. An attacker might be able to do far more by writing specific things to them.
This option enables normal printk support. Removing it eliminates most
of the message strings from the kernel image and makes the kernel more
or less silent. As this makes it very difficult to diagnose system
problems, saying N here is strongly discouraged.
We don’t have any real data; we can only speculate. A Google search for the name of the WiFi device plus “linux forum” brings up lots of search results. Search terms:
“RTL8192U” linux forum
“RTL8188EU” linux forum
“RTL8712U” linux forum
“RTL8723BS” linux forum
“RTL8192E” linux forum
People are using these devices. Therefore I think it’s better to keep these modules for wifi devices.
That also shows that the approach of one kernel config for the host (and another one for all VMs) covering it all cannot be perfect. Either hardware support becomes more broken than the Debian default (which is already not working for many people), or it’s not a minimal set of kernel modules.
Maybe the CLIP OS approach of generating the kernel config on the user’s system (?) would bring more optimal results. But probably also at higher development and maintenance effort. And even then we’d have to decide on a default which gets used when users boot the system for the very first time.
For hosts: maybe kernel re-compilation could be done at first boot or during installation, to be tailored to the system. But all of this is far-fetched, for the future.
That’d be no different from patching out dmesg, AFAIK. I guess the only advantage of patching it out is that we can add a sysctl for superroot to change it at runtime.
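For comparison, the stock runtime knob in this area is kernel.dmesg_restrict, which limits dmesg to processes with CAP_SYSLOG rather than removing the messages entirely. A sketch of setting it persistently (the filename is arbitrary):

```
# /etc/sysctl.d/50-dmesg-restrict.conf (illustrative filename)
kernel.dmesg_restrict = 1
```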
We need reasonable certainties. Otherwise we’d produce stuff which is
later going to be unusable. EFI booting is hard on its own. Adding
disabling of kernel EFI config to the mix makes it even harder.
As long as grub isn’t installed while using hardened-kernel it should
be alright?
How would grub not be installed? It’s a default package in Debian,
Whonix, and Kicksecure. You probably mean it is only required during
grub-install? Well, at some point one needs to run grub-install. For
example, when booting a Whonix or Kicksecure Live ISO (to be created in
the future), it would hopefully boot a hardened kernel. When then running
the installer, grub-install needs to function. Otherwise, at that point
there would be a hard-to-debug-and-fix issue.
I see a lot of other stuff that is more important before a hardened kernel can
take off, or before details such as the EFI variables kernel config become a question.
Automated builds and tests of the kernel (boot in a VM and also on bare
metal; run tests, i.e. an automated test suite, for both BIOS and EFI boot,
covering installation, upgrades, whatnot). If that was sorted, if the
testing process was automated and robust, and if we knew whether changes
break things, it would be a lot easier to make more changes.
I’d say better to get it widely deployed first and then do fine-tuning.
Otherwise, if we ship something that doesn’t work for most users, we end
up sticking with the standard kernel.