Whonix live mode / amnesia / amnesic / non-persistent / anti-forensics

If users install this package manually, shouldn’t booting live be the default? Then there is less chance to mess up.

(Easy change probably. Just rename 11_ to 09_ or so.)

However, if we install grub-live by default in Whonix 15, it may not be that great to boot live by default. Or maybe it would be? Perhaps there should be a first-run setup choice where the user defines whether live or non-live boot should be the default?

This presupposes that a good solution for indicating to the user whether the system was booted live or persistent gets implemented. Anyhow. Lots of potential here.

I didn’t test it on bare metal (we are always running in VMs anyhow), but the hash values of the images didn’t change when using live mode. I’d also not use it on the host with overlayfs, since you need really large amounts of RAM unless you also write-protect the VM images.

I’d use persistent mode as the default since users are used to that and it is easy to change the default boot mode. The first boot should also be persistent due to setting the entry guards, general Whonix scripts, updates …

I recently found that little icon of sdwdate in the system tray. However, in contrast to the person that put it there, I’m not that familiar with Qt etc., so … ^^


Btw, I never got it: if we write-protect the VM images, why do we use grub-live?

Grub-live tells the initramfs to mount an overlayfs filesystem in RAM that data is written to. If you just write-protect the VM image, there is no way to write data to the disks and many programs will not work correctly. For VirtualBox write protection works differently, and you could actually use immutable disks without grub-live, but in this case data gets written to a snapshot of the disk on the host, which will be deleted after the next boot. Since it gets written to the disk and not to RAM, it might be easier to recover data.
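Conceptually, the overlay setup amounts to roughly the following (a simplified sketch, not the actual live-boot code; device and mount point names are just examples):

# Read-only lower layer: the (write-protected) VM disk.
mount -o ro /dev/vda1 /run/rootfs
# RAM-backed upper layer that receives all writes.
mount -t tmpfs tmpfs /run/overlay
mkdir -p /run/overlay/rw /run/overlay/work
# The merged view becomes the root filesystem; writes only ever hit the tmpfs.
mount -t overlay overlay -o lowerdir=/run/rootfs,upperdir=/run/overlay/rw,workdir=/run/overlay/work /root

Since the disk image itself is never written to, its hash values stay unchanged.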


It’s been a while since I tested a normal Debian installation in live mode with full disk encryption. IIRC on jessie this combination did not work. Now on testing and stretch it works. You just need to append "boot=live plainroot" to the kernel command line. For Debian stretch you additionally should add the "nofail" option to /etc/fstab for the swap and the boot partition. I also had to chown the VM images to libvirt-qemu:libvirt-qemu. You can then install the Whonix VM images. I tested it with the recent KVM developers version. Before you boot the host OS live you also need to install the grub-live package and its dependencies in the Whonix VMs and set them to read-only afterwards. Then you can boot the host as a live system. Of course you can also use other VMs.
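To sketch the host-side steps just mentioned (all paths, UUIDs and filesystem types below are examples and depend on the concrete installation):

# Kernel command line: append at the grub prompt (edit the entry with 'e')
# or via a dedicated live menu entry:
#   boot=live plainroot
#
# /etc/fstab on stretch: add nofail to swap and /boot so a failing mount
# does not block the boot, e.g.:
#   UUID=...  none   swap  sw,nofail        0  0
#   UUID=...  /boot  ext2  defaults,nofail  0  2
#
# Give libvirt ownership of the Whonix VM images:
chown libvirt-qemu:libvirt-qemu /var/lib/libvirt/images/Whonix-*.qcow2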
So a workflow for the security/privacy-minded user could be: configure the host and VMs to your needs, install updates from time to time in persistent mode, and switch to live mode e.g. while browsing. If you still need some kind of persistence, like for saving files, you could always attach a USB stick to the Workstation. Your host OS and VMs should always remain unchanged after a reboot. If you want to be sure, you can create a checksum of the image and/or use some storage device with hardware write protection.
Overall the setup would be similar to Tails, at least on the amnesia side. While Debian testing is not that fast with security updates, some users could still consider using it for better hardware support.
Copying everything to RAM currently does not work and would require some patches to the live-boot scripts. This option would maybe also not make that much sense, since you would already need ~4 GB RAM just to hold a minimal Debian host + Whonix VM installation.

Won’t it be possible to have only the VMs in non-persistent mode but the host functioning normally? That would have been great because persistence would have been as easy as just copying files to a VM shared folder. It would also have made a live setup much more convenient than the status quo of live CD environments.

Sure, you can have the VMs in non-persistent, i.e. live, mode only and the host being persistent. This was possible before and still is. As long as you don’t use "plainroot boot=live" during boot of the host OS, the host will always remain persistent. You can still boot the VMs as a live system as described in the wiki. The whole point, for me at least, was to have this configurable and have FDE on the host. I opened a thread a while ago on a dedicated Whonix host OS, but supporting such a setup would produce quite some overhead. IMHO supplying an image with cryptsetup-reencrypt and limited hardware support would not be much better than doing a normal Debian installation with FDE. For better hardware support you could always use testing or a more recent kernel.


@Algernon Thanks for contributing this feature. How difficult would it be to add a GUI/shortcut/script to make toggling it easier and indicate its status, to allow more widespread usage?

This is a huge addition as it eliminates the main advantage live systems had over us. I would love to see it become so easy that a technically non-adept person like a journalist can use it.

What do we use to indicate the status?

Also how do we advertise this feature?

We could add a systray icon.

You could copy/paste/modify code from here:
GitHub - Kicksecure/sdwdate-gui: Graphical User Interface (GUI), Systray Icon for sdwdate - https://www.kicksecure.com/wiki/sdwdate-gui

An indicator showing whether live mode is in use or not would be a lot simpler
than GitHub - Kicksecure/sdwdate-gui: Graphical User Interface (GUI), Systray Icon for sdwdate - https://www.kicksecure.com/wiki/sdwdate-gui.

Such a systray icon might prompt a curious user to check its tooltip and/or to click
it, seeing "persistent mode", explaining what it’s about and how to
enter "live mode".

Depends on where you want to have this toggling feature. Security-wise it only makes sense on the host, since you can’t write-protect the VM from within itself. Some .desktop file with an associated script could then change the VM setting to use direct kernel boot with the respective kernel command line and also reimport the disk as read-only. To get the kernel and initrd you would also need to first mount the disk at least read-only. Also, this would only work for KVM and maybe Xen. Another option for the script would be to mount the disk rw, change the grub.cfg and umount it again. I’m not sure, however, how this could be implemented on Windows.
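For the KVM case, the second option (mount the disk rw, change grub.cfg, umount again) could look roughly like the following. This is a rough, untested sketch; the image path, nbd device and partition are placeholders, and the actual grub.cfg change is left as a comment since it depends on how the live entry is named:

#!/bin/sh
set -e
# Placeholders; adjust to the actual VM image and a free nbd device.
IMAGE=/var/lib/libvirt/images/Whonix-Workstation.qcow2
NBD=/dev/nbd0
modprobe nbd max_part=8
qemu-nbd --connect="$NBD" "$IMAGE"
mkdir -p /mnt/guestroot
mount "${NBD}p1" /mnt/guestroot
# ... edit /mnt/guestroot/boot/grub/grub.cfg here so that the live entry
# becomes the default ...
umount /mnt/guestroot
qemu-nbd --disconnect "$NBD"

Write-protecting the disk itself would still be a separate step in the VM configuration.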

@Patrick
Yeah I was also looking at the code of sdwdate-gui a while back but tbh did not understand enough of it to port it to some kind of live-mode indicator. The systray icon would certainly be the most convenient option.


+1 A systray icon is the way to go. Hovering over it with the mouse would show a "Run Whonix in live-mode" blurb. Right-clicking and selecting About from the context menu could then lead to a dialog explaining this feature in more detail.

I see what you’re saying. The lack of a modifiable file would make it impossible to disable the feature once it is enabled. How about designating a special (hidden) file - essentially a symlinked, overriding copy of the guest grub config - in the shared folder, so that changes to it take effect?

So in this case it would need shared folders to work, which has the good side effect of forcing users to enable them (shared folders) in case they want to save something from the live session later on. This would also be cross-platform rather than a host-only solution, and likely safer, since I don’t know the security implications of direct boot if we are dealing with a malicious guest.

Could you please add license headers? @Algernon

Not sure if I get you right. So you want the grub.cfg to reside in the shared folder, which then gets sourced during boot? The shared folder gets mounted at a late stage during the boot process and can’t change grub in any way. Also, even if you were somehow able to really overwrite grub.cfg in the guest (symlinking would not be enough), you would need to boot the VM, grub gets overwritten, and then you need to reboot the VM again to boot into the live system. It also won’t solve the problem of getting the VM image write-protected. This can only be done on the host.
Another idea would be to somehow let the initrd or maybe even grub check whether the VM disk can be written to. If it can’t be written to, then automatically switch to live mode. However, this would probably not work for VirtualBox since write protection works a bit differently there. I need to do some research on this.

@Patrick

Going to fix this.


This sounds awesome if possible. Let me know if there is anything I can help research about this.


I think I already figured most of it out :slight_smile:
I tried it with grub. It would work for KVM and theoretically also for VirtualBox, but VirtualBox has some obscure bug which makes it crash when you want to write to the grub environment while the disk is write-protected at the same time. However, the upside is that I also found a rather well-hidden feature to set VirtualBox VMs to read-only so that it works similarly to KVM, which makes the setup a bit more convenient.
I also tested a script which checks during the initrd stage if the disk can be mounted read-write and, if not, proceeds with live mode. Currently this looks promising for KVM; I just need to test it for VirtualBox too.


The new setup could look like this:
Install the grub-live package (the script below can be later added to the package)

Create a script: /etc/initramfs-tools/scripts/init-premount/livetest
with the following content:

#!/bin/sh

set -e

case "${1}" in
    prereqs)
        exit 0
        ;;
esac

echo "Testing for live boot. "
mkdir /livetest
mount -t ext4 -n -o rw $ROOT /livetest
if [ -n "$(mount | grep "(ro,")" ]; then
    echo "Mounting root read-write failed. Assuming live-mode. "
    umount /livetest
    if [ -z "$(dmesg | grep "BIOS VirtualBox")" ]; then
        echo 'live_disk=$(blkid /dev/vda1 -o value -s UUID)' >> /scripts/local
    else
        echo 'live_disk=$(blkid /dev/sda1 -o value -s UUID)' >> /scripts/local
    fi
    echo "BOOT=live" >> /scripts/local
    echo 'LIVE_BOOT_CMDLINE="root=/dev/disk/by-uuid/$live_disk boot=live ip=frommedia plainroot union=overlay"' >> /scripts/local
else
    echo "Filesystem can be mounted read-write. Proceeding normal boot. "
    umount /livetest
fi

exit 0

chmod +x the script.

run:

sudo update-initramfs -uk all

Add "alias /var/lib -> /rw/var/lib," to /etc/apparmor.d/tunables/home.d/grub-live.
Otherwise AppArmor will complain and Tor will not start.
Power off the machine. For KVM just toggle the virtual hard disk to read-only. For VirtualBox run:

VBoxManage setextradata VMName "VBoxInternal/Devices/lsilogicsas/0/LUN#0/AttachedDriver/Config/ReadOnly" 1

I guess the path should be the same for everyone. But to be sure you can check the VBox.log.

If you now boot the VM you don’t need to select the live mode. Just let it boot normally. During boot the script checks if the disk can be mounted read-write. If this is successful it just boots into persistent mode as always. However, if the disk was set to read-only on the host, the check will fail. It then sets some variables required for live boot and the right disk device depending on whether it runs on KVM or VirtualBox, and proceeds to boot as a live system.
The script would also make changing the grub menu (how it is currently done) obsolete.

To enable read-write again for VirtualBox do:

VBoxManage setextradata VMName "VBoxInternal/Devices/lsilogicsas/0/LUN#0/AttachedDriver/Config/ReadOnly"


Awesome :slight_smile: Thanks for seeing this through.

So the remaining tasks:

the grub AppArmor profile we ship needs to be updated, we use the updated script, we tell the user to toggle read-only to enable this feature, and that’s it!

What is the consequence if there are any bugs in the script, i.e. if it exits non-zero?

Well, if mount failed, there is nothing to umount, so umount would exit non-zero, causing the script to fail since set -e is being set?

Did some tests.
If mount fails, then the rest of the script won’t get executed. It just proceeds with normal boot; in case of a write-protected root disk it will start to complain at some point since the right variables are not set.
If for some reason mount succeeds but umount fails, then this is no issue for live boot, but in case of normal boot it will drop to an initramfs shell. Unmounting the disk manually and exiting the shell resulted in a panic, however. I guess it is due to some other scripts which ran after the live script. When I add a check to the script to see if umount worked and, if not, immediately drop to a shell, then manually umounting and exiting the shell works. Booting then also works without a panic.
But I can’t really come up with any scenario where mount succeeds but umount does not. There is not much happening in between. It would also be rather strange if you can manually umount it but the script can’t. In any case there would need to be something wrong with the disk or the system in general. I can still add it to the script to exclude this case.
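For reference, the umount check described above could look roughly like this inside the initramfs script (a sketch; panic is the helper from the initramfs-tools /scripts/functions file and drops to a shell):

. /scripts/functions
umount /livetest || panic "umount of /livetest failed"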

The first umount seems unnecessary and would, as I expect, make the script fail. Didn’t actually test, but everything I know about shell scripting tells me it would make the script exit non-zero since set -e is being set.

I mean, first you try to mount. Then if [ -n "$(mount | grep "(ro,")" ]; then tells you that mount failed. Why try to umount then?

This would only make sense if mount -t ext4 -n -o rw $ROOT /livetest falls back to mounting read-only if read-write (rw) mounting fails?


set -e
mount -t ext4 -n -o rw $ROOT /livetest

If mount rw failed, why doesn’t the script stop there since set -e is being set?
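
For illustration, the set -e behaviour in question boils down to this (false stands in for a mount call that exits non-zero):

#!/bin/sh
set -e
false                  # a command exiting non-zero, e.g. a failed mount
echo "never reached"   # set -e aborts the script before this line

So if mount -o rw really returned non-zero, the script would stop right there; the open question is whether mount instead falls back to a read-only mount and returns zero.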