Overview of Mobile Projects - that focus on any combination of security, privacy, anonymity, source-available code, and Freedom Software.

No. There is no hard technical requirement for that. Each lower level (such as the firmware) verifies the next level (such as the bootloader) and hands over control to it. If any stage is broken, then verified boot is broken. The levels are approximately: hardware trust root → firmware → (shim →) (grub) bootloader → (linux) kernel → kernel modules → initrd → (systemd) init.
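The chain-of-trust idea above can be sketched in a few lines of Python. This is only a model: plain SHA-256 digests stand in for real signature checks against vendor or user keys, and the stage names and image bytes are made up.

```python
import hashlib

# Hypothetical stage images (in reality: firmware, bootloader, kernel, ... binaries).
stages = {
    "firmware":   b"firmware image bytes",
    "bootloader": b"bootloader image bytes",
    "kernel":     b"kernel image bytes",
    "initrd":     b"initrd image bytes",
}
order = ["firmware", "bootloader", "kernel", "initrd"]

# Each stage ships the expected digest of the next stage
# (a stand-in for a real signature verification).
expected = {name: hashlib.sha256(image).hexdigest() for name, image in stages.items()}

def boot(images):
    """Walk the chain; refuse to continue at the first stage that fails verification."""
    for name in order:
        if hashlib.sha256(images[name]).hexdigest() != expected[name]:
            return f"halt: {name} failed verification"
    return "booted"

print(boot(dict(stages)))              # → booted
tampered = dict(stages)
tampered["bootloader"] = b"evil bootloader"
print(boot(tampered))                  # → halt: bootloader failed verification
```

The point the model makes: tampering with any single stage breaks the whole chain from that stage onward.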

For example, verified boot exists on the Intel/AMD64 platform in the form of SecureBoot (actually RestrictedBoot). Windows uses it.

As for Linux desktop distributions, I don’t think any offers full verified boot support yet. But the partial verified boot process is an interesting example of how this could be implemented. The firmware has a built-in key (by Microsoft). It verifies shim, and if the signature checks out, it hands over control to shim. Next, shim verifies grub and hands over control to it. Next, grub verifies the kernel. Then it should verify the initrd, but that’s not implemented in any Linux desktop operating system as far as I know. (Chromium OS …) That’s where the verified boot implementation stops for now.

So for most Linux desktop distributions, verified boot currently isn’t very useful. It’s a start of development progress, but it’s incomplete. The takeaway message is that any level during the boot process can implement its own key management.

For example, on Linux systems using verified boot, users have to manually sign kernel modules so these can be loaded. But what does that tell us? That users can add their own keys, which the Linux kernel then accepts to verify kernel modules. There’s no need to add a user’s custom key to the firmware just to load a custom installed kernel module (such as the VirtualBox kernel module or LKRG).
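That per-level key management can be modeled with a toy "kernel keyring". This is a sketch, not the real kernel mechanism: HMAC stands in for the X.509/PKCS#7 module signatures the kernel actually uses, and the key names and module bytes are invented.

```python
import hashlib
import hmac

# Hypothetical keyring: the kernel accepts a module if its signature verifies
# against *any* enrolled key (HMAC stands in for real module signatures).
kernel_keyring = {"vendor-key": b"vendor secret"}

def sign(module, key):
    return hmac.new(key, module, hashlib.sha256).hexdigest()

def load_module(module, signature):
    ok = any(hmac.compare_digest(sign(module, key), signature)
             for key in kernel_keyring.values())
    return "loaded" if ok else "rejected: unknown signature"

vbox = b"custom-built module bytes"   # e.g. a VirtualBox or LKRG module
print(load_module(vbox, sign(vbox, b"my own key")))   # → rejected: unknown signature

# The user enrolls their own key at the kernel level (analogous to MOK
# enrollment) -- no change to the firmware's keys is needed.
kernel_keyring["user-key"] = b"my own key"
print(load_module(vbox, sign(vbox, b"my own key")))   # → loaded
```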

Consider a design where the current way to implement verified boot is called “the unbreakable stage1 verified boot”.

This stage1 verified boot (immutable, verity-protected, signed file system) could verify a stage2 image. If the stage2 image is modified, stage1 could report it and refuse to boot (unless some kernel parameter is set by the user, but this needs more description).

This is similar to the Android A/B (Seamless) System Updates concept, where the upgrade is supposed to be atomic. If it fails, the system reverts to the previous image. Two images: the old one and the new one. And these don’t break verified boot either.
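The stage1-verifies-stage2 idea combined with A/B fallback can be sketched as follows. Again only a model: a SHA-256 digest stands in for a signed verity root hash, the slot names are invented, and `override` models the hypothetical user-set kernel parameter mentioned above.

```python
import hashlib

def verified(image, signed_digest):
    """Stage1's check of a stage2 image (a digest stands in for a signature)."""
    return hashlib.sha256(image).hexdigest() == signed_digest

def choose_slot(slots, override=False):
    """Boot the newest slot that verifies; otherwise fall back to the older one."""
    for name, (image, digest) in slots:               # newest slot first
        if verified(image, digest):
            return f"boot {name}"
    # Nothing verifies: refuse, unless the user explicitly overrides.
    if override:
        return f"boot {slots[0][0]} (WARNING: unverified)"
    return "refuse to boot"

new = b"stage2 image v2"
old = b"stage2 image v1"
slots = [("slot-b", (new, hashlib.sha256(new).hexdigest())),
         ("slot-a", (old, hashlib.sha256(old).hexdigest()))]
print(choose_slot(slots))             # → boot slot-b

# A failed or tampered upgrade in slot-b: revert to the previous image.
slots[0] = ("slot-b", (b"corrupted upgrade", hashlib.sha256(new).hexdigest()))
print(choose_slot(slots))             # → boot slot-a
```

The atomicity comes from never touching the old slot until the new one has verified and booted successfully.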

This blog post goes in that direction: Fitting Everything Together
But while probably well-intentioned, the “Developer Mode” described in that blog post could also lead to further user freedom restrictions.

At the moment, the big corporate surveillance complex cannot yet lobby governments to mandate that only big-corporate-signed full verified boot operating systems can boot. But if most devices use verified boot anyway, only a few Linux desktop computers remain, and one day those too are only available with full verified boot… At some point there aren’t many devices left where general computing is possible without the approval (signature) of a third party. And then a law mandating that only corporate-signed full verified boot operating systems can boot becomes more likely.

Indeed. It’s mentioned in the Android vs iPhone table footnotes.

I am referring to the following. It’s in the table footnotes…

Quote <application>  |  Android Developers

android:allowBackup

Whether to allow the application to participate in the backup and restore infrastructure. If this attribute is set to false, no backup or restore of the application will ever be performed, even by a full-system backup that would otherwise cause all application data to be saved via adb. The default value of this attribute is true.

Currently on most Android devices (which have verified boot and refuse root rights to the user), if an application is configured by the vendor with allowBackup=false, then no backups are possible unless verified boot is broken and/or the device has been rooted.
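For illustration, this is how such an app vendor would set the attribute in the app’s AndroidManifest.xml (a minimal fragment; the package name is made up):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.vendorapp">
    <!-- With allowBackup="false", backup and restore (including full-system
         backup via adb) skip this app's data entirely. On a locked,
         non-rooted device the user has no other way to read that data. -->
    <application android:allowBackup="false">
        ...
    </application>
</manifest>
```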

If no backups are possible, then it’s also not possible to inspect the data. In essence, the app vendor gets a bit of private storage on the user’s device which the user cannot access.

Operating system and app vendors dictating such restrictions to users is a bad direction to go, specifically since more and more devices already lock out the user in that way (most phones used by most users, smart TVs, tablets and whatnot) and keep developing in that direction, while the few remaining ones, desktop computers, are already in decline and heading the same way. Referring to the Intel/AMD64 platform with RestrictedBoot.

The question boils down to which one does one support: power to the hardware/software vendors, or power to the users.

No. Verified boot unfortunately isn’t a cure for all security threats. If verified boot is broken, then it’s broken. Meaning, malware can then persist.

Any locked Android device where the modding community managed to find a vulnerability and exploit it to free/unlock the bootloader is evidence that verified boot has been broken in the past. To illustrate:

  • When the modding community broke the bootloader, they did it to gain control over their own hardware.
  • When Advanced Mobile Phone Spyware breaks verified boot / the bootloader, it does so to spy on the user while at the same time keeping the user locked out from auditing. Malware uses technically similar methods to how the modding community sometimes achieves unlocking of the bootloader. This invalidates any security advantages of verified boot. And that is then the worst of both worlds: malware has the ability to spy on the user, while the user still has no ability to perform an audit of their own device.

User freedom and security auditing are crucial in security. Verified boot plus locking the user out amounts to “trust us”.

It mostly doesn’t. Security researchers have to invent complex hacks to get around the obfuscation and locks. This requires a bootloader unlock. But if a bootloader unlock isn’t possible without a user data wipe, then auditing is prevented. Here’s a research paper that briefly mentions research hurdles with user-locked bootloaders as well as the research methods.

Quote:

Reverse Engineering

A fairly substantial amount of non-trivial reverse engineering is generally required in order to decrypt messages and to at least partially decode the binary plaintext. 1) Handset Rooting: The first step is to gain a shell on the handset with elevated privileges, i.e. in the case of Android to root the handset. This allows us then to (i) obtain copies of the system apps and their data, (ii) use a debugger to instrument and modify running apps (e.g. to extract encryption keys from memory and bypass security checks), and (iii) install a trusted SSL root certificate to allow HTTPS decryption, as we explain below. Rooting typically requires unlocking the bootloader to facilitate access to the so-called fastboot mode, disabling boot image verification and patching the system image. Unlocking the bootloader is often the hardest of these steps, since many handset manufacturers discourage bootloader unlocking. Some, such as Oppo, go so far as to entirely remove fastboot mode (the relevant code is not compiled into the bootloader). The importance of this is that it effectively places a constraint on the handset manufacturers / mobile OSes that we can analyse.

Unbreakable verified boot + user locked out = no auditing possible for users. They are then only users, guests, non-administrators on their own devices.