Whonix live mode


Is there any interest in running Whonix in live mode?
I came across this (https://www.raspberrypi.org/forums/viewtopic.php?f=63&t=161416) a while ago and tested it on a Debian VM.
It only requires a small patch to the initramfs and minor changes in the grub config.
With this setup you could always run Whonix as a live system. Writes would go to RAM by default, but by choosing another boot option you could boot into an “update mode” where you install updates or new software, or make other persistent changes. After that you switch back to live mode.
That way you could also have a persistent entry guard, or boot multiple instances of Whonix from the same image. Of course you could also always boot into persistent mode if you choose to do so.
To be really sure that nothing is written to disk in live mode, you should make the disk immutable or read-only via the VM software.
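For KVM/libvirt, making the image read-only could look like this in the domain XML (a sketch; the image path is a placeholder, and whether a given bus honors the element may depend on the hypervisor version):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/Whonix-Workstation.qcow2'/>
  <target dev='vda' bus='virtio'/>
  <!-- makes the disk read-only to the guest -->
  <readonly/>
</disk>
```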
At the moment overlayfs is used for this, but I hope it could also be implemented with dracut + device mapper (going to test that soon). Overlayfs has some issues with MAC, and editing large files requires a lot of RAM, depending on the file size.
Something like this could also be interesting for a Linux-based host OS, maybe with a hardware write-protection switch.


Sure, that’s super interesting! Has been asked many times here. There is just a shortage of developer time. If you would like to step up to maintain this functionality, you’re most welcome to!


That’s an incredible feature to have. It might act as a second line of defense against forensics if the FDE is bypassed. It’s also the best of both worlds: selective amnesia, while allowing users to back up files they want to keep via shared folders.

Knoppix has/had a toram boot option that did all that. @Algernon can you find a similar feature or package that can make Debian do this?


Seems that Debian has such a tool: debirf.

build a kernel and initrd to run Debian from RAM

debirf (DEBian on Initial Ram Filesystem) is a set of tools designed to create and prepare a kernel and initial ram filesystem that can run a full-blown Debian environment entirely from RAM.

The kernel and initramfs pair created by debirf can be used for a myriad of purposes, from quick-and-easy system repair to diskless thin clients. The kernel and initrd can be placed in your system boot partition, burnt to read-only media, or supplied by a netboot server.

The debirf tools use a module architecture which allows you to customize debirf for any possible purpose by specifying what components are included in the generated image.




AFAIK most Debian live CDs like Tails, Knoppix, … support running from RAM if you append “toram” to the boot options. Some kind of persistence is also possible, but not in the way described here, since the ISO files have a squashed filesystem which is always read-only.
I haven’t heard of debirf before, but I don’t see any advantages over the “standard” Debian live mode. Instead of an additional filesystem.squashfs you have everything in the initrd; instead of overlayfs + tmpfs, only tmpfs seems to be used.

In case anyone wants to test the selective amnesia with dracut:

I started with the whonix 14 workstation kvm image.
Then did:
sudo apt-get update
sudo apt-get install dracut

Dracut will be installed and initramfs-tools will be removed. Don’t do “apt-get autoremove” afterwards or you will remove half of whonix. A lot of whonix packages seem to depend on initramfs-tools, so that would need to be fixed.
Make /etc/dracut.conf.d/10-debian.conf look like:

add_dracutmodules+=" dmsquash-live "

edit (/usr)/lib/dracut/modules.d/90dmsquash-live/dmsquash-live-root.sh:

change overlay_size=512 to overlay_size=32768

after line 266 add:

BASE_LOOPDEV=$( losetup -f )         # find the next free loop device
umount /run/initramfs/live           # release the live filesystem mount
losetup -r $BASE_LOOPDEV $livedev    # re-attach the live device read-only

Save file. Then do:

sudo dracut

A new initramfs will be generated under /boot/initramfs-4.9.0-1-amd64.img
Now either change grub manually at the next boot or, to make it permanent, edit the grub configuration:
Copy and paste the first menuentry, keeping the old one. Edit the new first entry (rename it to “live mode” or something).
Change “root=UUID=…” to "root=live:UUID=…"
change “initrd /boot/initrd.img-4.9.0-1-amd64” to “initrd /boot/initramfs-4.9.0-1-amd64.img”
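A sketch of what the duplicated menuentry could end up looking like (the UUID and kernel version are placeholders, yours will differ; only the title, the root= parameter and the initrd line change):

```
menuentry 'Whonix live mode' {
        insmod gzio
        insmod part_msdos
        insmod ext2
        search --no-floppy --fs-uuid --set=root 1234abcd-0000-0000-0000-000000000000
        linux  /boot/vmlinuz-4.9.0-1-amd64 root=live:UUID=1234abcd-0000-0000-0000-000000000000 ro quiet
        initrd /boot/initramfs-4.9.0-1-amd64.img
}
```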

Save the file and shut down the VM. Make the HD read-only and boot the VM. It should automatically boot into live mode. During booting it sometimes drops to emergency mode, but just press Ctrl+D and it will continue to the desktop. This does not happen every time, and I don’t know if it is related to Whonix or to the development version; other Debian stretch VMs boot normally.
You can create files etc. which should be gone after a reboot. Be aware that currently, if you delete files, you don’t get the RAM back. Hopefully this can be fixed in the future. So if you download large files you might run out of RAM and the system will become read-only or crash, even if you delete large files from time to time.
When you reboot with the write protection disabled and choose the normal Whonix boot mode, you can install updates and create persistent files as always.


Dependency on initramfs-tools is declared here.

I wonder if we could remove it? VirtualBox guest additions probably need update-initramfs (dunno if that works with a dracut equivalent, if there is one). Might be better to make initramfs-tools a weakly recommended package. (Meaning we apt-get install it during the Whonix build, but don’t have some Whonix meta package depend on it.)


To prevent the autoremoval…

sudo aptitude keep-all

(As per https://www.whonix.org/wiki/Whonix_Debian_Packages.)

There is one issue here…? The VM images being amnesic does not help if the host is not amnesic, right? So instead of doing this inside Whonix VMs, it should be done on the host operating system?

What’s next here? Do you wish to research / test this more? Or jump right into documenting this in the wiki and maintaining this?


(page name can be changed of course.)


What about ram wiping tools or secure-delete that can wipe files in RAM?


Why is this needed? I thought live mode would ensure an in-RAM boot?

All hypervisors have an immutable disk mode where changes are redirected to an ephemeral snapshot that is discarded on shutdown but I assumed that live mode wouldn’t touch the disk in the first place…

That is the big question.


Did some more research on encrypting VMs.

Modern OSs zero RAM pages once they are no longer used by a program, and disabling swap and hibernation ensures that no data is leaked to disk. So in theory a LUKS volume containing the VM images can protect them on-disk, and a couple of simple steps ensure that no data leakage occurs once a VM is shut down. For efficiency, the kernel tries to give a page back to the same process it belonged to, and blanks it otherwise. While the idea is good as a second line of defense, it leaves users open to rubber-hose cryptanalysis. Live modes are the answer, but they require some configuring. If live mode is ever automated/simplified it would be a killer feature for plausible deniability.
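The “couple of simple steps” could look roughly like this on a systemd-based host (a sketch, untested; masking the sleep targets is one way to block hibernation):

```shell
sudo swapoff -a          # turn off all active swap right away
# also remove or comment out any swap entries in /etc/fstab to make it stick
sudo systemctl mask hibernate.target hybrid-sleep.target   # block hibernation
```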


I’m also interested in this, but preferably as a live host.

@HulaHoop I think loading everything into RAM would require too many resources (I guess at least 10 GB for all files and running processes), but it might be nice as an optional feature for users with enough RAM to increase performance.
I’d suggest using bilibop (manpage), which is AFAIK also used by Tails; it turns the system read-only (if it was installed on writable media) and stores newly written files in RAM.
Maybe we could also just use Tails as a base and modify it a bit to prevent Tor over Tor, instead of reinventing everything.

We might be able to fit everything on a dual-layer DVD (8.5 GB) if we use a backing image for common files in both VMs. Persistence (optional) could be achieved by storing a snapshot on writable encrypted media:

[base] ------- [  gateway  ] ------- [ volatile gw ]
       \                     \
        \                     \----- [persistent gw]
          \--- [workstation] ------- [ volatile ws ]
                              \----- [persistent ws]
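Such a layering could be built with qemu-img backing files, roughly like this (a sketch; filenames are illustrative, and -F declares the backing file’s format):

```
$ qemu-img create -f qcow2 base.qcow2 10G
$ qemu-img create -f qcow2 -b base.qcow2    -F qcow2 gateway.qcow2
$ qemu-img create -f qcow2 -b gateway.qcow2 -F qcow2 volatile-gw.qcow2
$ qemu-img create -f qcow2 -b gateway.qcow2 -F qcow2 persistent-gw.qcow2
```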


Depends on whether and what the VM writes to the host.
I don’t know if there is an official definition of amnesia for live systems. Imho, the most important part is that the base system remains untouched and writes go to RAM. More or less every live ISO does that. The other part is wiping RAM when the machine gets shut down (afaik only implemented by Tails).
I’m thinking mostly about persistent malware and, to a smaller extent, about reading out RAM.
I’m certain Tails implemented the RAM wipe for a reason, but I still consider cold boot attacks rather unlikely. I have not yet seen such an attack in the wild, and if someone gets hold of your machine while it’s running they can read out RAM anyway. But I guess a RAM wipe could also be implemented.
If, as HulaHoop mentions, RAM is zeroed when no longer in use, then it also would not matter. More research on this is required.
Of course, if you want complete amnesia then the live mode should be implemented on the host. I don’t see a reason why this would not work on e.g. a Qubes or a non-Qubes host. Qubes already uses dracut, and 3.2 also has a live ISO. Some hardware with a correctly implemented write-protection switch would be needed if you want to be really sure that nothing gets written to the disk.

I will certainly take a closer look at this and do some testing + wiki editing. I’m confident that the live-mode can be implemented.

What you quoted is not directly related to that, but could also be implemented. Dracut uses device mapper snapshots (or, if you want, overlayfs in newer versions). It is not possible to get memory back once a file was written there, even if you delete the file. It does not work like tmpfs.
If you want to test it, just boot the live VM with 1–2 GB RAM and then create a large file, say 500 MB. Delete it, create it again, delete it, … At some point the snapshot will overflow and the system will crash or become read-only.
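That fill-and-delete test can be scripted; a minimal sketch (sizes scaled down for illustration; run it inside the live-booted VM to watch the snapshot fill up):

```shell
# Repeatedly create and delete a large file. With device mapper snapshots
# the space freed by rm is not returned, so the snapshot keeps growing
# until it overflows; with tmpfs the space would be reclaimed.
for i in 1 2 3; do
    dd if=/dev/zero of=/tmp/bigfile bs=1M count=50 status=none
    rm /tmp/bigfile   # deleted, but snapshot space is not reclaimed
done
echo "done: wrote and deleted 3 x 50 MB"
```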
I was under the impression that thin snapshots would be the solution, but either I was wrong or there are some bugs.
Using overlayfs would be a solution too, but only for the virtual machines. On the host you would require large amounts of RAM, since overlayfs copies a whole file to RAM when it changes it, i.e. also big VM disk images. I’m also not sure how mature overlayfs in dracut is at the moment; it is also not in the dracut version for stretch.

Live mode and in-RAM boot are not the same. Live mode is basically not making writes to the underlying system; in-RAM boot = live mode + copying the whole filesystem to RAM. Copying to RAM takes a while, depending on what you boot from, but after it is finished the system will be really fast. You can then also remove the USB stick or DVD and it will continue to run. You can have this with dracut as well, just append rd.live.ram. I did not test it with the live mode approach described here, since the RAM requirements would be even bigger (we don’t boot from a compressed filesystem like a normal live ISO does).
A live ISO for the Whonix VMs would also be possible. I did this before with the normal Debian live stuff, but it would also work with dracut. It is not that easy to update an ISO, however.

Sort of true. Ext3 and ext4 have e.g. journaling, therefore this applies: https://github.com/msuhanov/Linux-write-blocker
When you use the system in the way described on the RPi forum, you will see that the hash sum of the VM images changes, except if you use ext2. This did not happen to me with dracut + device mapper, even with ext3 or ext4 under normal usage, but after a VM crash the value was also different. If you make the VM read-only via the VM software this should not happen, and it also did not happen in the many cases I tested.
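One way to check this yourself is to checksum the image before and after a live session; a sketch (the image name is a placeholder, and a dummy file is created only so the example is self-contained):

```shell
# Record the image hash before booting live mode, compare after shutdown.
img=/tmp/disk.img
[ -e "$img" ] || truncate -s 1M "$img"        # placeholder image for illustration
before=$(sha256sum "$img" | cut -d' ' -f1)    # hash before the live session
# ... boot the VM in live mode here, use it, shut it down ...
after=$(sha256sum "$img" | cut -d' ' -f1)     # hash after shutdown
[ "$before" = "$after" ] && echo "image unchanged" || echo "image was modified"
```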
Also, some malware could mount the disk rw when there is no write protection via the VM software. So I think you would need to break out of the VM to make the image rw.


I was also thinking about a (switchable) live host OS, but this requires some broader discussion (going to start a thread …), e.g. what OS to use, and whether to provide some kind of installation tutorial or just a USB image. The latter would probably be the easiest for novice end users. Qubes is of course also an option.
Quite a while back I tested some Whonix WS + GW on a Debian host ISO. Everything fit on a normal DVD.

In the end, the question is what kind of live system is desired (ISO or uncompressed/plain filesystem). Imho, the most useful option would be the system described here (so no ISO), since you can have FDE + updates. You can have FDE for ISO-based systems too, but it is not that useful. Hardware write protection for the host OS is the only bigger issue I see at the moment.


You can avoid that, at least for the VM images, if you use backing images (as shown in my post above), because the large base VM images won’t be changed; only changes made within the VM would be written to the volatile VM snapshot in RAM.

I think Debian (or a derivative) would be a good choice, to be consistent with the rest of Whonix. Qubes would be great security-wise, and they already offer a live USB (though alpha and currently unsupported/unmaintained), but usability-wise maybe not so much; it might not work everywhere because of its hardware requirements.

I guess it depends on the use case.

Read-only media like DVDs would be best in terms of write protection and wouldn’t even need FDE for the base system. But updating would be a bit painful: either store updates on encrypted writable media and install them when the system starts, similar to how Tails handles additional software, or burn a new DVD every time you change your system, which may be required for certain updates.
Another disadvantage is that many computers have only one DVD drive (some even have none), so downloading and burning an (updated) image might not be possible without additional hardware (an external DVD drive), and since DVD drives have moving parts they may make too much noise if you have to be stealthy.

USB flash drives would be better usability-wise: you could boot in a writable mode to make any updates/changes and reboot back into read-only mode once you’re done; some USB drives even have a physical write-protection switch. FDE would be required also for the base system, to prevent others with physical access from tampering with the USB drive or reading the changes you made to it, so the initial setup would be a bit harder for the user and require either a tutorial or an installer to set up FDE.

How did you manage to fit the VM images and a host system on a DVD? I guess you used some compression, but then how did you work with compressed VM images?


I’m not 100% sure about overlayfs in this case. When I tested it, I think I also had the VM HD write-protected, but it was still copied to RAM. I have to recheck that. Still, if you use one VM hard disk as a backing image, you could of course spare some RAM (or HD/DVD space), since you don’t need one image per VM.

You can get around that (no noise and a slow file system) if you use “toram”, or “rd.live.ram” for dracut. Depending on the ISO size you of course need a lot of RAM, but then you can unplug the drive, or download a new ISO (needs even more RAM …) and burn it to the DVD.

You can either make ISO files from the VM images, which requires some scripting, installing the Debian live tools etc., or you make a Debian system, copy the VM images (or ISO files) to it, and then make an ISO from this Debian system. The images are fairly small, ~1–2 GB. Use xz compression for creating the squashfs images. The qcow2 files can also be compressed to some extent with virt-sparsify, but it is not as good as xz compression. Since I tested this quite a while back, the Whonix images might be larger now, and so would the created ISO file(s). But in general a standard Debian ISO with lots of software for general usability, networking, a browser and VM software is not much bigger than 1 GB (see the Tails ISO).
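The compression steps mentioned above would look roughly like this (a sketch; paths are illustrative, and the tools come from the squashfs-tools and libguestfs-tools packages):

```
$ mksquashfs ./debian-rootfs filesystem.squashfs -comp xz
$ virt-sparsify --compress Whonix-Workstation.qcow2 Whonix-Workstation-small.qcow2
```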


Do you want to do this for Non-Qubes-Whonix or Qubes-Whonix or both?

Yes, there are various goals possible here.

  • full Live Mode, a host operating system Whonix booting from Live USB and/or Live DVD (nothing is written to the disk except when using selective persistence)
  • just Whonix VMs booting without persistent write access. This would be similar to a Qubes AppVM. Sometimes I apt-get install packages in a Qubes AppVM well knowing that these will be lost after reboot. Still useful when just learning / trying stuff.

I like the idea about USB with physical write switch where you could sometimes boot in persistent mode to apply upgrades or customizations.

It’s your project. Whatever you wish to develop/maintain.

Maybe. Last time I looked into the Tails build process, over a year ago, it used a binary Vagrant basebox. And the Vagrant basebox build from source was broken. I didn’t manage to build Tails fully from source code.

Sure, feel free to look into it. They have a lot of useful stuff such as RAM wipe on shutdown, selective persistence, the unsafe browser for hotspot registration, and whatnot. Did you suggest running the Whonix VirtualBox images on top of it? That would be interesting.


I intended to use KVM, but yes. Basically a volatile Whonix VM on top of Tails with optional persistence and maybe corridor or something similar to modify Tails’ network filter so that the VM can only connect to the Tor network (Tor over Tor doesn’t seem to be the issue here).



I retested it with ISO, raw and qcow2 files. With overlayfs it should also work, as long as the VM images are set to read-only (ISOs are ro anyway). With rw you will run out of RAM. It is also important that the right permissions are set on the ISO or VM image files; otherwise virt-manager tries to change them, meaning it copies the whole ISO/image to RAM …
So either set them manually (in persistent mode), or import the image / create the VM and then restart into live mode. The only drawback of overlayfs would still be MAC, but this might not matter for everyone.

For Qubes-Whonix there are already DisposableVMs, though they are a little bit different. With Non-Qubes-Whonix it already works in principle, but I did not do a lot of testing. For other Debian-based VMs it generally works fine for me. Imho, it also doesn’t look like a feature which requires major changes to Whonix or a lot of maintenance, but I could be wrong :D. I don’t know if there are relevant differences for Qubes-Whonix; I guess as long as the boot process is the same it should also work.
I would probably go from Non-Qubes-Whonix KVM to Non-Qubes-Whonix VirtualBox to Qubes-Whonix, and then maybe a Whonix host OS or Qubes in general.


Maintenance and work required: depends. If it’s instructions only, then it’s not that much work. If it’s a downloadable ISO image, that’s more work to maintain. And more work still if features such as RAM wipe on shutdown are added.

Qubes DisposableVMs are not amnesic yet. Reference:

Qubes could be different, because it’s based on Xen and Fedora.


I would at first go just for the VM images (qcow2 or vbox), so there is no ISO file to create and no special setup. I guess most of the changes to be made are already in my second post in this thread. Sure, there needs to be some testing whether initramfs-tools can be removed without problems, e.g. for the VirtualBox guest additions, changing dependencies …
I need to think some more about the RAM wipe features.
Imho, ISO files don’t make sense for the VMs; maybe for a host OS, but an HD image might be better there (see the other thread).
Since Qubes is based on Fedora it already uses dracut; the Qubes (and Fedora) live ISOs also use dracut + device mapper. Not sure about the VMs, going to test it.


Related to VM image encryption and amnesia:



Is this still on?