Whonix host operating system

No problem, not a priority.

Yes, so we’d need them both as a deb and unpacked at the same time. Space issues.
(For installed systems the unpacking of debs / set up of VM import could be done with a first boot script rather than during boot - if that gives us useful development options.)
(qcow2 shipped by deb packages is just a thought, a vehicle to simplify development.)

I guess not. These prettier GUI installers are “only” frontends. The backend is still the Debian Installer.

I used Debian stretch as host, but yes, as I said, it should go below that.

Might give it a try again once I have some spare time…


Sure, my intention was to use the Remastersys “installer”, not the build scripts for the ISO. So you could maybe have both a usable live system and some kind of installer at the same time.
The host would still be created via the Whonix scripts, so no problems with logs etc.


Suggestion for a new name:

"Whonix Desktop"

Because I think it’s important that we keep the already well-known Whonix name, plus it conveys the idea of a full desktop (i.e., physical) environment while keeping it simple.

What do you think?

Doesn’t sound very exciting. Similar to Whonix Host. More reasons here: Whonix host operating system - #19 by Patrick


@HulaHoop, Re:

Continuing that in this thread, we could also differentiate further between the Whonix VM’s XFCE and the host OS XFCE by not having Whisker Menu on host. So between that, a differing background image, and a contrasting enough theme, it’ll both be different enough and the host XFCE will practically look like a skin of modern GNOME.

@Patrick For ongoing discussion of developing an amnesic host OS (not the VM OS), is this the ideal thread (and not ‘Whonix live mode / amnesia / amnesic / non-persistent / anti-forensics’)?

Guest OS amnesia I suppose is a thing (and it’s interesting even though I don’t have a common need for it myself), but as I hope we’re all aware by now, amnesia doesn’t translate to anti-forensics if the host OS has copious data about your VirtualBox inside it, or a swap partition, etc.

So when we’re discussing our effort to make an anti-forensic Whonix, I assume this ‘host’ thread is the one to discuss it in?


I wanted to report on the progress I made with the bootable live Whonix ISO project.

1. Bootable live Debian 10 BIOS/UEFI ISO with Whonix KVM

In short, it works fine! I now have a 2.8 GB ISO file which can be written to a USB stick and will boot from BIOS or UEFI into a full live Debian 10 desktop with KVM/virt-manager.

In detail:
I first created a standard XFCE4 Debian 10 VM with grml-debootstrap, including the required kvm/qemu/virt-manager packages plus the Whonix qcow2 files. I did not use the Whonix hardened-debian build, so my “Whonix-Host” has nothing Whonix-specific, but I don’t see any reason why it wouldn’t work with the hardened-debian version.

I did not manage to configure the Whonix VMs in the chroot, so I had to boot the host VM and configure them by hand. Very unclean, but I am sure there is documented information on how to do this in a clean, scripted way. This master host VM is in no way optimized as it is, and its size could probably be reduced further, as I didn’t take time to carefully review the packages I put into it (although it was quite a minimal build).

Important notice: I had to copy the qcow2 files into the master host VM with the qemu-img convert -O qcow2 command (which shrinks the VMs to their “real” size) instead of cp --sparse=always, otherwise the live system would be unable to start them, complaining about “no space left on device”. Maybe when they are not shrunk, the live system “thinks” they are 100 GB big and is unable to allocate enough space?
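A quick way to see why this matters, without qemu at all (illustrative sketch; a sparse qcow2 behaves like the sparse file here, and an overlay that copies the file up must back its full apparent size):

```shell
# A sparse file has a large apparent size but allocates almost nothing on disk.
# Copying it non-sparsely (or copying it up into a tmpfs overlay) materializes
# the holes, which is the likely source of "no space left on device".
truncate -s 100M /tmp/sparse-demo.img        # create a 100M sparse file
apparent=$(stat -c %s /tmp/sparse-demo.img)  # apparent size in bytes
blocks=$(stat -c %b /tmp/sparse-demo.img)    # actually allocated 512-byte blocks
echo "apparent=${apparent} allocated_blocks=${blocks}"
```

qemu-img convert rewrites the image so that the apparent size matches the actually used size, which is why the converted copies start fine.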

When the master host VM was up and running, I made a bootable BIOS/UEFI ISO file out of it with the bash script that I posted above.

Everything works fine now. I had much less success with the second, installer part of the project.

2. “Whonix-Desktop” installer
This is still at a very early stage for me. I did everything “by hand” in KVM just to try things out.

I attached a 20 GB virtual disk that I divided into two partitions: first a 500 MB boot partition, then I encrypted the rest (LVM on LUKS, basically following the Arch wiki instructions).

After that, I mounted the encrypted partition to /mnt and the boot partition to /mnt/boot, and proceeded to rsync the live system onto the encrypted partition with:

rsync -aAXv --exclude={"/dev/","/proc/","/sys/","/tmp/","/run/","/mnt/","/media/*","/lost+found","/var/log/","/lib/live","/usr/lib/live","/var/tmp"} * /mnt/

After that, things started to get complicated. Of course, to be bootable, a lot of adjustments need to be made to the new system, such as installing GRUB, installing the kernel, changing the disk UUIDs, making sure the kernel will load the required modules to deal with encryption, and rebuilding the initramfs (update-initramfs -u didn’t work in the live environment).
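A rough sketch of those fixups (all package names, device paths, and the “cryptroot” mapping name are assumptions, not a tested recipe):

```shell
# Inside a chroot into /mnt, roughly (shown for orientation, needs a real disk):
#   apt-get install --reinstall grub-pc cryptsetup-initramfs   # or grub-efi-amd64
#   grub-install /dev/sda && update-grub
#   update-initramfs -u -k all
# The new LUKS container's UUID has to land in /etc/crypttab so the initramfs
# can unlock the root device; e.g. (placeholder UUID):
luks_uuid="00000000-0000-0000-0000-000000000000"  # real value: blkid -s UUID -o value /dev/sda2
crypttab_line=$(printf 'cryptroot UUID=%s none luks' "$luks_uuid")
echo "$crypttab_line"
```

/etc/fstab needs matching entries for /dev/mapper/cryptroot and the boot partition as well.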

I did try some adjustments, but haven’t got to the point of having a bootable encrypted disk as of now. I didn’t spend too much time on it either, but again I am sure all of this is pretty well documented and should be scriptable somehow.


Part 1: bootable live Whonix Desktop

  • Mostly done, proof of concept works
  • Need to try with a hardened-whonix build
  • Need to script all the build in an automatic way
  • Need to decide which exact packages would ship in Whonix Desktop (probably need some non-free firmware to make it work with most hardware, wifi support, etc.)

Part 2: installer

  • As of now, I have no working solution
  • The “DD” way seems the fastest - but needs careful tailoring
  • Ideally, the final installer should be some kind of simplified GUI, maybe test with Calamares?
  • All in all, it shouldn’t be too difficult to achieve with the right level of skills and time; nothing that hasn’t been done before

A different background image on the host and different theme to differentiate between Whonix host and Whonix VMs is a good idea. Not convinced yet that Whisker Menu has to be gone on the host though.

Not sure. First the host operating system needs to become reality before implementing amnesia.

A lot of discussion on amnesia has happened since this very post:
Whonix live mode / amnesia / amnesic / non-persistent / anti-forensics - #121 by Patrick

As per the above, analysis appears to be done for now and development tasks have been created.



Possibly some daemon is required to be running? @HulaHoop
If you share any error messages, perhaps we can suggest what commands to run to sort them out.

Perhaps it is this one:
Whonix ™ for KVM.

Unable to connect to libvirt.

Obviously, the advice for users to manually reboot won’t be great for a build. Perhaps systemctl start libvirtd at the start and systemctl stop libvirtd before exiting the chroot would do?

Since this has a likelihood of controversy and the potential to distract this thread, I created a separate thread to redirect that discussion.

Never tested by me but looks very promising! :slight_smile:


Thanks for your feedback.
I will try to rebuild a host with hardened-debian as a base for the master host. Building as of now on Debian buster with

sudo ./whonix_build --flavor hardened-debian-xfce --target qcow2 --build

Correct? Anything specific to take into account while building it?

I’ll share the error messages once I reach this stage again with the hardened-debian VM.




minor: [Help Welcome] KVM Development - staying the course - #282 by Patrick

--redistribute if you’d like to enable the Whonix repository by default. Perhaps --redistribute was not a great name for the parameter?

(background: I plan on making Whonix easier to fork. Ideally Whonix would become $project_name, configurable by build parameter, so anyone can easily create a ForkNix-Workstation (some new name chosen by the forker) rather than Whonix-Workstation, with most name strings generic/switched in the Whonix source code. Since this is a major refactoring, I’ll wait until the dust for Whonix host has settled; there is no rush.)

I have not tested hardened-debian-xfce for some time. The build might still be functional. hardened-debian-xfce is still underdeveloped with some “minor small edgy things” (it lacks the package usability-misc by default) but should be good enough for development.

OK, maybe not that important at this stage.
The build breaks; can I safely ignore it (it seems to be related to virsh xml settings, which I don’t need as I am building a host VM)?

EDIT: ignored it, building continues… :slight_smile:


Update Report: building host with hardened-debian-xfce

Building with

sudo ./whonix_build --flavor hardened-debian-xfce --target qcow2 --build

was successful. On first boot, I realized there was no /etc/apt/sources.list file, so I created one with Debian buster repos myself. Probably caused by not adding the --redistribute flag during build?
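For reference, the entries I’d expect in such a file are along these lines (assumed standard buster lines, not necessarily a verbatim copy of what I wrote):

```
# /etc/apt/sources.list - standard Debian buster entries (assumed)
deb https://deb.debian.org/debian buster main
deb https://deb.debian.org/debian buster-updates main
deb https://deb.debian.org/debian-security buster/updates main
```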

Anyway, after the successful build, I did a quick and dirty bash script which mounts the hardened-debian raw image and installs the following packages:

qemu-kvm libvirt-daemon-system libvirt-clients virt-manager

Then, still in the chroot and in a (dirty) scripted way, I configured the Whonix networks and VMs following the official documentation:

chroot $HARDENED_CHROOT/chroot addgroup user libvirt
chroot $HARDENED_CHROOT/chroot addgroup user kvm
cp *.xml $HARDENED_CHROOT/chroot/tmp/
chroot $HARDENED_CHROOT/chroot service libvirtd restart
chroot $HARDENED_CHROOT/chroot virsh -c qemu:///system net-autostart default
chroot $HARDENED_CHROOT/chroot virsh -c qemu:///system net-start default
chroot $HARDENED_CHROOT/chroot virsh -c qemu:///system net-define tmp/Whonix_external_network-
chroot $HARDENED_CHROOT/chroot virsh -c qemu:///system net-define tmp/Whonix_internal_network-
chroot $HARDENED_CHROOT/chroot virsh -c qemu:///system net-autostart external
chroot $HARDENED_CHROOT/chroot virsh -c qemu:///system net-start external
chroot $HARDENED_CHROOT/chroot virsh -c qemu:///system net-autostart internal
chroot $HARDENED_CHROOT/chroot virsh -c qemu:///system net-start internal
chroot $HARDENED_CHROOT/chroot virsh -c qemu:///system define tmp/Whonix-Gateway-XFCE-
chroot $HARDENED_CHROOT/chroot virsh -c qemu:///system define tmp/Whonix-Workstation-XFCE-

You were right: I was able to configure the network using the .xml files after running service libvirtd restart in the chroot (see above).

However, configuring the VMs does not work:

error: Failed to define domain from tmp/Whonix-Workstation-XFCE-
error: invalid argument: could not find capabilities for domaintype=kvm 

The thing is that I am running all the build and above commands inside a Debian buster VM. Maybe this causes the could not find capabilities for domaintype=kvm error? Is there any known workaround to force the domain definition?

EDIT: I mounted the .raw VM file directly on my host and it did not work either. Somehow the chroot environment does not “believe” that it has KVM capabilities, although the host pseudo-filesystems are bind-mounted into it:

mount --bind /dev chroot/dev
mount --bind /proc chroot/proc
mount --bind /dev/pts chroot/dev/pts

Currently looking online for a solution, nothing found yet.
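For what it’s worth, the first thing libvirt’s KVM capability probe needs is a usable /dev/kvm character device visible inside the chroot (and, when building inside a VM, nested virtualization enabled on the outer host). A tiny check along these lines (hypothetical helper, not from the build scripts):

```shell
# Returns success if the given root directory contains a /dev/kvm char device.
has_kvm_node() {
    [ -c "$1/dev/kvm" ]
}

if has_kvm_node /; then
    echo "/dev/kvm present - domaintype=kvm can work"
else
    echo "/dev/kvm missing - expect: could not find capabilities for domaintype=kvm"
fi
```

Note that even with /dev/kvm bind-mounted, the libvirtd started inside the chroot re-probes capabilities itself, so the error can persist for other reasons.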


Most likely.

OK, I have searched and searched and have found no solution to the error: invalid argument: could not find capabilities for domaintype=kvm error.

So I have resorted to a dirty trick: I changed the ‘kvm’ flag to ‘qemu’ in the Whonix-Gateway.xml file (so the domain can be defined in the chroot), and then I just replace ‘qemu’ with ‘kvm’ directly in the /etc/libvirt/qemu/Whonix-Gateway file of the host VM:

chroot $HARDENED_CHROOT/chroot virsh -c qemu:///system define tmp/Whonix-Gateway-XFCE-

sed -i "8 s/^.*$/<domain type='kvm'>/" $HARDENED_CHROOT/chroot/etc/libvirt/qemu/Whonix-VMs.xml

It seems to work: on first boot, the host is able to boot the Whonix-Gateway with the KVM hypervisor.

Would such a dirty workaround be acceptable? I see no other solution at the moment.
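The flip itself can be reproduced minimally like this (demo file in /tmp, and a pattern-based sed instead of my line-8 variant, so it doesn’t depend on the XML layout):

```shell
# Stand-in for the defined domain XML (the real file lives under
# /etc/libvirt/qemu/ in the host VM's filesystem).
cat > /tmp/domain-demo.xml <<'EOF'
<domain type='qemu'>
  <name>Whonix-Gateway-demo</name>
</domain>
EOF

# Switch the emulated 'qemu' domain type back to hardware-accelerated 'kvm'.
sed -i "s/<domain type='qemu'>/<domain type='kvm'>/" /tmp/domain-demo.xml
grep "domain type='kvm'" /tmp/domain-demo.xml
```

Matching on the pattern rather than a fixed line number makes the trick less likely to break if the XML header changes.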


+1 keep whisker on Host.

My best guess is that KVM needs a little work to mount stuff in the chroot because kernel modules are involved.

See if this post on building KVM VMs outside a virtual environment helps. They use debootstrap:


Well, you are probably right, but at this stage the VM is already built (hardened-debian build); it just refuses to define a kvm domain, even after binding the pseudo-filesystems.

Anyway, as my workaround did the trick, I built a new ISO after adding some non-free firmware packages and network-manager-gnome (necessary to have the network applet in the panel that allows the user to connect to wifi graphically).

ISO size is around 2.4 GB


Indeed, the ISO file size is 2.0 GB with -comp xz!

Works fine: wifi and graphics support out of the box. BIOS and UEFI boot as expected, and the Whonix VMs boot and work normally.

The way I understand it, live boot makes it so that all modified files are copied into RAM (the overlay). This also applies to the VM images, which are copied into RAM before they can start.

-> which explains why the original 100 GB .qcow2 files were unbootable (“no space left on device”).
-> consequence: Whonix Desktop as a live system needs a lot of RAM in order to run the Whonix VMs (min. 8 GB I would say). There may be room for fine-tuning (by default the tmpfs takes 50% of the available RAM, i.e. with 8 GB total, 4 GB of RAM is left and 4 GB goes to the overlay tmpfs); see the overlay-size option in man live-boot:

       The  size  of  the  tmpfs mount (used for the upperdir union root mount) in bytes, and
       rounded up to entire pages. This option accepts a suffix % to limit  the  instance  to
       that percentage of your physical RAM or a suffix k, m or g for Ki, Mi, Gi (binary kilo
       (kibi), binary mega (mebi) and binary giga (gibi)). By default, 50% of  available  RAM
       will be used.
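The arithmetic behind the failure, as a back-of-the-envelope check (image sizes taken from the qemu-img convert results, ~2.1G gateway + ~2.6G workstation):

```shell
# Default live-boot overlay sizing vs. the space the copied-up VM images need.
total_ram_mb=8192
overlay_mb=$(( total_ram_mb * 50 / 100 ))   # live-boot default: 50% of RAM
images_mb=$(( 2100 + 2600 ))                # both VM images end up in the overlay
echo "overlay=${overlay_mb}M images=${images_mb}M"
# The 4096M overlay cannot hold 4700M of images, matching the
# "no space left on device" failure when starting the second VM.
```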

Below are a few screenshots taken from the Whonix-Hardened-Debian-Host in live-boot mode, running the Whonix VMs in virt-manager, showing UEFI and wifi support.


With qemu-img convert -O qcow2 we have the following results:

-rw-r--r-- 1 root root 2.1G May 1 12:06 Whonix-Gateway.qcow2
-rw-r--r-- 1 root root 2.6G May 1 12:08 Whonix-Workstation.qcow2

I did some tests in virt-manager, first with 8 GB of RAM:

Just after the ISO boot, before starting the virtual machines:

user@host:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           797M   17M  780M   3% /run
/dev/sr0        2.3G  2.3G     0 100% /run/live/medium
/dev/loop0      2.1G  2.1G     0 100% /run/live/rootfs/filesystem.squashfs
tmpfs           3.9G  6.4M  3.9G   1% /run/live/overlay
overlay         3.9G  6.4M  3.9G   1% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           3.9G   12K  3.9G   1% /tmp
none             10M  4.0K   10M   1% /run/msgcollector
tmpfs           797M  8.0K  797M   1% /run/user/1000

user@host:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:          7.8Gi       375Mi       6.8Gi        32Mi       585Mi       7.2Gi
Swap:            0B          0B          0B

After booting the VMs with virt-manager (default settings: 512 MB RAM for the Gateway, 2 GB RAM for the Workstation):

I failed to boot both VMs due to a lack of free space on the overlay tmpfs, so just the Gateway is running:

user@host:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           797M   17M  780M   3% /run
/dev/sr0        2.3G  2.3G     0 100% /run/live/medium
/dev/loop0      2.1G  2.1G     0 100% /run/live/rootfs/filesystem.squashfs
tmpfs           3.9G  2.1G  1.9G  54% /run/live/overlay
overlay         3.9G  2.1G  1.9G  54% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           3.9G   12K  3.9G   1% /tmp
none             10M  4.0K   10M   1% /run/msgcollector
tmpfs           797M   12K  797M   1% /run/user/1000

user@host:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:          7.8Gi       1.1Gi       1.6Gi       2.1Gi       5.1Gi       4.3Gi
Swap:            0B          0B          0B

I did another try with 8GB RAM and with overlay-size=6g appended to the boot command line.

This time, I was able to boot both VMs, but of course I had much less RAM available:

user@host:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           797M   18M  780M   3% /run
/dev/sr0        2.3G  2.3G     0 100% /run/live/medium
/dev/loop0      2.1G  2.1G     0 100% /run/live/rootfs/filesystem.squashfs
tmpfs           6.0G  4.7G  1.4G  77% /run/live/overlay
overlay         6.0G  4.7G  1.4G  77% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           3.9G   12K  3.9G   1% /tmp
none             10M  4.0K   10M   1% /run/msgcollector
tmpfs           797M   12K  797M   1% /run/user/1000

user@host:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:          7.8Gi       2.4Gi       229Mi       4.9Gi       5.2Gi       318Mi
Swap:            0B          0B          0B

Eventually, it freezes completely when I open a few tabs in the Whonix-Workstation…

In short, it would only work well with a few more GB of RAM, maybe 12 to 16 GB…


Actually, I thought it was only changes made to the underlying files that were redirected to RAM, rather than absolutely everything being put there.

I had the impression from reading the Ubuntu wiki that the kernel non-persistent option was what loaded the entire live USB into RAM (and therefore allowed a user to disconnect the stick and continue working), while the alternative ways of live booting didn’t do that and saved only the diffs in memory.


You are probably right, but the VM files present themselves as single files to the OS, and thus I can’t see how they could be modified without being copied entirely into RAM?
