I mean, that’s kind of the point of read-only: the state cannot be saved. It could be a conscious design decision. I think this is very good: if someone is using Live Whonix, they can’t save the state and defeat the amnesic protection.
That’s the way to go given the limitations IMO.
Yes, they are stored on disk, though in a binary format that only qemu can understand.
How could we prevent this? Is there some way the bootloader on the main drive can detect it was booted without the protected one and flash a big neon warning on the splash screen?
I think, from the perspective of a virtualizer, a snapshot of read-only mode makes sense. The point of read-only mode isn’t necessarily amnesia. One might experiment with an ISO for debugging purposes and wish to revert to previous states, experimenting with one thing after another over and over.
No realistic ones. A custom VM GUI, i.e. a fork or rewrite of virt-manager, is unrealistic. Users who wish to debug or customize without keeping security in mind are more likely to shoot themselves in the foot.
That warning would only be useful for educational purposes. In case of an unverified boot of a maliciously altered kernel, malware could simply disable that warning.
For educational purposes, a systemd unit could check at somewhat early/middle boot whether the initial boot medium is attached, and if not, create a state file. Once the GUI (X) is started, a warning popup could be shown. Or all of it could be implemented in whonixcheck. There’s no need to do this at the bootloader stage, since by that time it’s not security-relevant anymore anyhow.
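As a rough sketch of that systemd-unit idea (the device path, state-file location, and function name here are assumptions for illustration, not existing Whonix defaults):

```shell
#!/bin/sh
# Sketch: record whether the trusted (read-only) boot medium is still attached.
# A systemd unit running at early/middle boot could call this; the GUI popup or
# whonixcheck would later warn if the state file exists.

check_boot_medium() {
    medium="$1"      # block device node of the trusted boot medium (assumption)
    state_file="$2"  # flag file consumed later by the warning popup (assumption)

    if [ -b "$medium" ]; then
        # Medium present: clear any stale flag from a previous boot.
        rm -f "$state_file"
    else
        # Medium missing: flag this boot as unverified.
        touch "$state_file"
    fi
}

# Example invocation (hypothetical paths):
# check_boot_medium /dev/sr0 /run/unverified-boot
```

The state file deliberately lives in a volatile location so it cannot persist across reboots and produce stale warnings.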
To be posted against grub2 upstream as well as against Debian.
Please comment on / rewrite / improve this draft.
grub-pc check_signatures=enforce support (BIOS) (non-EFI)
Could you please make it possible to do signature verification with grub-pc too?
We, the maintainers of Linux distributions that primarily run inside VMs (Whonix, Kicksecure), would like to implement verified boot, not necessarily Secure Boot.
At the moment, there are no tools that can create VM images (with Debian Linux) that support EFI booting. Also, support for Secure Boot by virtualizers such as KVM, Xen, and VirtualBox is either non-existent or undocumented.
Another reason is that inside VMs we don’t necessarily need the complexity of EFI.
Instead, we could boot unverified (the usual virtual BIOS legacy boot) from a virtual, read-only (write-protected) boot medium (such as an ISO). The boot loader on that initial boot disk (grub2) could then verify and chainload the boot loader (grub2) on the main disk. As a result, we would have a verified boot sequence.
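The hand-over step might look roughly like this in the grub.cfg embedded on the read-only medium. This is only a sketch of grub’s documented signature-checking mechanism; the paths, key file name, and partition layout are assumptions, and in practice the public key would more likely be embedded via `grub-mkimage --pubkey`:

```
# grub.cfg on the read-only boot medium (sketch; paths and key name assumed)
set check_signatures=enforce             # refuse to load any unsigned file
trust --skip-sig (memdisk)/verified-boot.pub  # key shipped inside the image

# From here on, every file grub reads (config, kernel, initrd) needs a
# valid detached GPG signature (file.sig) next to it on the main disk.
menuentry "Verified boot from main disk" {
    set root=(hd0,msdos1)
    configfile /boot/grub/grub.cfg
}
```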
 Note that this doesn’t do much against an adversary with a kernel 0day.
It’s not meant to.
This should be effective against an adversary that gains physical access to a device, yet cannot tamper with the live system (by plugging in a device that exploits a buggy driver, by messing with the memory bus or a DMA-capable interface, …) and cannot replace the firmware.
As you can see, this does not outright prevent evil-maid style attacks:
the goal here is to make such attacks harder/less practical.
An interesting comment which I don’t fully agree with, but it raises an interesting point about the initrd making this a kinda pointless exercise:
Let’s say it would help to secure a system with encryption enabled. This might help when there is no way to get a custom signed binary. Then maybe bitkeeper would be a tiny bit more secure. I doubt that for Linux solutions, as the logic is in the initrd. You could still modify that even if you could not load a modified kernel module (I still want to see that working). It is very unlikely that you can sign your initrd, or you have to store the public key for that unencrypted somewhere. So what did you gain this time? Maybe a tiny bit, in the case that you could not sign your own binaries. If you can, and you use an initrd, that will be the weak point (and it was the weak point before as well).
So what can you do? Rely on hardware encryption if you need full security. Forget Secure Boot; it will not be more secure. All you can use it for is that you cannot boot other systems that easily (just like on the ARM platform).
We could boot from a virtual, read-only (write-protected) boot medium such as another virtual HDD or ISO. Such a boot medium would run a minimal Linux distribution, which then compares the following against checksums from the Debian repository on the main boot drive:
The MBR (master boot record)
The VBR (volume boot record)
[A] the bootloader
[B] the partition table
[C] the kernel
[D] the initrd
[E] all files shipped by all packages
There are tools that can help with checking all files on the hard drive, such as debsums. However, while debsums is more popular, it is unsuitable, since it verifies files against checksum lists stored on the same (possibly compromised) disk rather than against the Debian repository.
A tool such as debcheckroot might be more suitable for this task.
During development of Verifiable Builds, experience was gained with verification of the MBR, VBR, bootloader, partition table, kernel, and initrd. Source code was created to analyze such files.
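The MBR comparison in particular can be sketched with standard tools. Only the first 440 bytes (the bootstrap code area) are compared, since the remaining bytes of the first sector hold the disk signature and partition table, which legitimately differ per installation. The function name is made up for illustration; on a live check the first argument would come from something like `dd if=/dev/sda bs=512 count=1`:

```shell
#!/bin/sh
# Sketch: compare the boot-code area of two MBR images, e.g. a dump of the
# main disk's first sector vs. a freshly extracted reference boot image.

mbr_boot_code_matches() {
    # Compare only the first 440 bytes: the bootstrap code area.
    # Bytes 440-511 (disk signature, partition table) are expected to differ.
    cmp -n 440 -s "$1" "$2"
}
```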
Extraneous files would be reported, with the option to delete them, move them to quarantine, and/or view them.
The initrd is, by Debian default, auto-generated on the local system. Hence, there is nothing in the Debian repository to compare it with. However, after verification of everything else (all files from all packages), it would be secure to chroot into the verified system and re-generate the initrd, then compare both versions. This might not be required if the initrd can be extracted and compared against the files on the root disk.
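The extract-and-compare variant could boil down to a tree comparison, e.g. between an initrd unpacked with `unmkinitramfs` and the already-verified root filesystem. A minimal sketch (the function name is hypothetical) that reports both content mismatches and extraneous files, as described above:

```shell
#!/bin/sh
# Sketch: compare two directory trees, e.g. an extracted initrd vs. the
# corresponding files on the verified root fs. Reports files whose content
# differs and files present only in the first tree (extraneous files).

tree_diff() {
    a="$1"; b="$2"
    ( cd "$a" && find . -type f ) | while read -r f; do
        if [ ! -f "$b/$f" ]; then
            echo "extraneous: $f"      # exists only in the first tree
        elif ! cmp -s "$a/$f" "$b/$f"; then
            echo "differs: $f"         # same path, different content
        fi
    done
}
```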
That boot medium (such as an ISO) could be shipped on Whonix Host through a deb package as /usr/share/verified-boot/check.iso .
A disadvantage of this concept is that it might be slower than dm-verity. On the other hand, an advantage is that it does not require an OEM image. It might also be more secure, since it does not verify against an OEM image but verifies the individual files. Another advantage is that users are free to install any package and are not limited by a read-only root image. Users do not have to wait for the vendor to update the OEM image.
Absolutely brilliant. I don’t think we should judge performance just yet without having tried it.
How about splitting the process so that the most low-level, essential components are checked before boot, and the rest can be done later, during system run, once the important components have been given the green light?