enable Linux kernel gpg verification in grub and/or enable Secure Boot by default

Not much of this is Ubuntu-specific, and general principles might be learned from this software:

1 Like

The security of Secure Boot

2 Likes

An interesting comment which I don’t fully agree with, but it raises an interesting point about the initrd making this a somewhat pointless exercise:

Let’s say it would help to secure a system with encryption enabled. This might help when there is no way to get a custom signed binary. Then maybe BitLocker would be a tiny bit more secure. I doubt that for Linux solutions, as the logic is in the initrd. You could still modify that even if you could not load a modified kernel module (I still want to see that working). It is very unlikely that you can sign your initrd, or you have to store the public key for that unencrypted somewhere. So what did you gain this time? Maybe a tiny bit in the case that you could not sign your own binaries. If you can, and you use an initrd, that will be the weak point (and it was the weak point before as well).

So what can you do? Rely on hardware encryption if you need full security. Forget Secure Boot, it will not be more secure. All you can use it for is that you cannot boot other systems that easily (just like on the ARM platform).

Date: 2012-07-25 02:50 pm (UTC) From: [personal profile] mjg59

You’re completely right, secure boot does not prevent attacks that it is not intended to prevent.

grub bug #56887 grub-PC check_signatures=enforce support (non-EFI).
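
Roughly, GRUB’s built-in signature checking works by embedding a GPG public key into the GRUB core image and exporting check_signatures=enforce, after which every file GRUB reads (kernel, initrd, grub.cfg, modules) needs a detached .sig. A minimal, untested sketch of what that could look like on a BIOS (grub-pc) system; the key ID and device name are only examples:

    # export the public key that will be embedded into the GRUB image
    gpg --export mykey@example.org > /boot/grub/boot.pub

    # detach-sign everything GRUB will load
    for f in /boot/vmlinuz-* /boot/initrd.img-* /boot/grub/grub.cfg; do
        gpg --default-key mykey@example.org --detach-sign "$f"
    done

    # re-install GRUB for the i386-pc (BIOS) target with the key embedded;
    # check_signatures=enforce still has to be exported, e.g. from an early
    # embedded config, so that unsigned files are refused
    grub-install --target=i386-pc --pubkey /boot/grub/boot.pub /dev/sda

Whether this actually works with grub-pc (non-EFI) is exactly what the bug report above is about, so treat this as a sketch of the intended mechanism rather than a confirmed recipe.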

Hash Check all Files at Boot

A higher security level than Secure Boot.

This concept only covers VMs.

We could boot from a virtual, read-only (write-protected) boot medium such as another virtual HDD or an ISO. Such a boot medium would run a minimal Linux distribution which then compares the following items on the main boot drive against checksums from the Debian repository (see the sketch below the list):

  • The MBR (master boot record)
  • The VBR (volume boot record)
  • [A] the bootloader
  • [B] the partition table
  • [C] the kernel
  • [D] the initrd
  • [E] all files shipped by all packages
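
For item [E], a rough sketch of how the verifier could compare one installed file against a copy freshly downloaded from the Debian repository (rather than trusting any checksum stored on the disk being verified); the package name, file path and the /mnt/root mount point for the main boot drive are only examples, and a real implementation would iterate over every file of every installed package:

    pkg=coreutils
    file=/usr/bin/ls

    tmp=$(mktemp -d)
    cd "$tmp"
    apt-get download "$pkg"                     # fetch the .deb from the repository
    dpkg-deb --extract "$pkg"_*.deb extracted   # unpack its contents

    # compare the file on the (mounted, not running) root disk with the repository copy
    sha256sum "/mnt/root$file" "extracted$file"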

There are tools that can help with checking all files on the hard drive, such as debsums. However, while debsums is more popular, it is unsuitable. [2]

A tool such as debcheckroot might be more suitable for this task.

During the development of Verifiable Builds, experience was gained with verification of the MBR, VBR, bootloader, partition table, kernel and initrd. Source code was created to analyze such files. [3]
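
For the boot-sector items, the raw data can simply be read off the disk and hashed; a minimal sketch (device names are examples, offsets follow the classic MBR layout of 446 bytes boot code, 64 bytes partition table, 2 bytes boot signature):

    disk=/dev/vda   # example: the VM disk being verified

    # [A] bootloader code in the MBR (first 446 bytes)
    dd if="$disk" bs=446 count=1 2>/dev/null | sha256sum

    # [B] partition table (64 bytes at offset 446)
    dd if="$disk" bs=1 skip=446 count=64 2>/dev/null | sha256sum

    # VBR: first sector of a partition (example: first partition)
    dd if="${disk}1" bs=512 count=1 2>/dev/null | sha256sum

    # [C]/[D] kernel and initrd are ordinary files and can be hashed directly
    sha256sum /mnt/root/boot/vmlinuz-* /mnt/root/boot/initrd.img-*

The harder part is knowing the expected values: bootloader installers patch install-specific data (such as the location of GRUB’s core image) into these sectors, so a naive byte-for-byte comparison against a pristine reference will not match; presumably that is what the analysis code mentioned above had to deal with.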

Extraneous files would be reported, with the option to delete them, move them to quarantine and/or view them.

The initrd is, by Debian default, auto-generated on the local system. Hence, there is nothing to compare it with from the Debian repository. However, after verification of everything else (all files from all packages), it would be secure to chroot into the verified system and re-generate the initrd, then compare both versions. This might not be required if the initrd can be extracted and compared against the files on the root disk.
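
On Debian, unmkinitramfs (from initramfs-tools-core) can unpack an initrd, after which its contents can be compared against the already-verified root filesystem; a rough sketch, with the kernel version and the /mnt/root mount point as examples:

    # unpack the initrd of the system being verified
    mkdir /tmp/initrd
    unmkinitramfs /mnt/root/boot/initrd.img-6.1.0-18-amd64 /tmp/initrd

    # on systems with early microcode the real contents end up in a "main" subdirectory
    cd /tmp/initrd/main 2>/dev/null || cd /tmp/initrd

    # compare every extracted file against its counterpart on the verified root filesystem
    find . -type f | while read -r f; do
        if [ -e "/mnt/root/${f#./}" ]; then
            cmp -s "$f" "/mnt/root/${f#./}" || echo "MISMATCH: ${f#./}"
        else
            echo "not on root fs (generated at initrd build time): ${f#./}"
        fi
    done

Files that the initramfs hooks generate rather than copy (configuration, conf scripts, etc.) will have no counterpart on the root filesystem, which is why re-generating the initrd in a chroot and comparing the two archives may end up being the simpler route.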

That boot medium (such as an ISO) could be shipped on Whonix Host through a deb package as /usr/share/verified-boot/check.iso.
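
Roughly what the verification boot could look like with plain QEMU/KVM, booting from that ISO with the VM’s normal disk attached read-only so the verifier can inspect but not modify it (the qcow2 path is only an example):

    qemu-system-x86_64 -enable-kvm -m 1024 \
        -cdrom /usr/share/verified-boot/check.iso \
        -boot d \
        -drive file=/var/lib/libvirt/images/Whonix-Workstation.qcow2,format=qcow2,if=virtio,readonly=on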

A disadvantage of this concept might be that it is slower than dm-verity. On the other hand, an advantage of this concept is that it does not require an OEM image. It might also be more secure, since it does not verify against an OEM image but verifies the individual files. Another advantage is that users are free to install any package and are not limited by a read-only root image. Users do not have to wait for the vendor to update the OEM image.

1 Like

Absolutely brilliant. I don’t think we should judge performance just yet without having tried it.

How about splitting the process so that the most low-level, essential components are checked before boot, and the rest can be done later, during system run, after the important components have been given the green light?

1 Like

It’s not entirely based on theory.

Using debsums (which the actual implementation should not use, as explained in the concept, also since it uses md5sums) - which is good enough for a quick performance test…

time sudo debsums -s

real 1m37.632s
user 0m27.825s
sys 0m11.242s

To add to that time:

  • time to boot into the verification system
  • some other stuff (initrd, bootloader, …) but maybe these are negligible
  • time to boot into the actual system

Any code that gets executed could fake the results of verification. It seems hard to make the boot process only execute files which have already been verified. That is a lot more complex.

1 Like

Some interesting projects that may help:

2 Likes

A lot to research.

https://docs.puri.sm/PureBoot.html

https://docs.puri.sm/PureBoot/Heads.html

http://osresearch.net/

Bootstraping a slightly more secure laptop (33c3) - YouTube

1 Like

“PureBoot” is mainly just coreboot and heads along with a few other things.

Coreboot and Heads are BIOS replacements and probably cannot be used in VMs.

I wouldn’t really use Purism as a good source either.

2 Likes

Provided feedback about debcheckroot.

2 Likes

Is it possible to use check_signatures=enforce on non-EFI (BIOS) boot, such as amd64 with grub-pc?
https://lists.gnu.org/archive/html/help-grub/2019-12/msg00006.html

2 Likes

Guaranteeing initramfs integrity during Secure Boot

2 Likes

Maybe this could work:
Assuming FDE on the host, for Whonix encrypt /boot by default and have users enter a passphrase at GRUB to unlock it? Now nothing can get at the initrd or kernel. Then unmount /boot after login.
One thing: GRUB 2 does not support LUKS2.
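
For reference, the GRUB side of an encrypted /boot looks roughly like the following; since GRUB can only unlock LUKS1, /boot has to live on a LUKS1 container (device names are examples, this is a sketch rather than a tested recipe):

    # /boot on its own LUKS1 container, because GRUB cannot open LUKS2
    cryptsetup luksFormat --type luks1 /dev/vda2

    # make GRUB include its cryptodisk/LUKS support and ask for the passphrase
    echo 'GRUB_ENABLE_CRYPTODISK=y' >> /etc/default/grub
    update-grub
    grub-install /dev/vda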

1 Like

That doesn’t fly easily. /boot (kernel, initrd) needs to be updated; it regularly gets updated through apt dist-upgrade. Unmounting /boot would break that.

The suggested approach could work inside a VM if the host managed the VM’s /boot partition, but it is cumbersome to have the host mount the VM’s disk and modify it. There is also the security risk of the host mounting the VM disk.

KVM VMs could specify the kernel, initrd and kernel parameters on the host by pointing to files available on the host. That could make the kernel and initrd trusted inside the VM (assuming the host is trusted). That would be similar to the Qubes dom0 kernel vs the Qubes VM kernel. The Qubes dom0-provided kernel has some disadvantages and it is planned to make it non-default, i.e. to make the in-VM kernel the default.
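
With plain QEMU/KVM that is the direct kernel boot mechanism: the kernel, initrd and command line come from files on the host, so nothing inside the VM image can tamper with them (paths are examples); libvirt exposes the same thing via the <kernel>, <initrd> and <cmdline> elements of the domain XML:

    qemu-system-x86_64 -enable-kvm -m 2048 \
        -kernel /usr/share/verified-boot/vmlinuz \
        -initrd /usr/share/verified-boot/initrd.img \
        -append "root=/dev/vda1 ro quiet" \
        -drive file=/var/lib/libvirt/images/Whonix-Workstation.qcow2,format=qcow2,if=virtio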

1 Like