Qubes + Whonix

Here’s another tough challenge with Whonix VM images that I haven’t found a workaround for yet. Maybe someone here can offer a solution?

The Whonix VM disk images are allocated 100GB of space.

I’ve tried a number of ways to extract and convert Whonix VM images into RAW formatted .img files for Qubes.

tar -xvf Whonix-Gateway.qcow2.xz

qemu-img convert Whonix.qcow2 -O raw Whonix.img

The problem is that the bootable raw .img file ends up being a ~100GB file. And it takes a LONG time (~1.5 hrs) to transfer it from the Qubes VMs into dom0, as is needed.
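For reference, the transfer into dom0 I am doing looks roughly like this (run from dom0; the VM name and file paths are placeholders):

# copy the raw image from an AppVM into dom0
qvm-run --pass-io sourcevm 'cat /home/user/Whonix-Gateway.img' > Whonix-Gateway.img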

Every other way I do it ultimately fails to boot, which might be related to what Patrick mentioned…

Using qemu-img to resize the image does shrink it, but the result then fails to boot.

qemu-img resize Whonix.img 5G

Would the only way to put this down to a smaller disk allocation be to build Whonix from scratch?

If not a big deal, could a temporary testers-only release have a small disk allocation, like maybe ~ 10GB?

Thanks!

Does Qubes OS support qcow2 images? Can you ask them please? If not, post a feature request against Qubes OS?

Then we wouldn’t need to also supply official raw images. Also, qcow2 images are more feature-rich.

The Whonix VM disk images are allocated 100GB of space.
No. Those are sparse files.

For explanation, please see this chapter:

It contains commands showing how to handle them without losing the sparse property.

As well as:

I've tried a number of ways to extract and convert Whonix VM images into RAW formatted .img files for Qubes.
Maybe try to make them sparse files. That would solve this whole issue.
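Something along these lines should keep the raw image sparse the whole way (a sketch; file names and the target path are placeholders):

# qemu-img writes unallocated regions sparsely when converting to raw
qemu-img convert -O raw Whonix-Gateway.qcow2 Whonix-Gateway.img

# allocated size should be far below the 100GB apparent size
du -h --apparent-size Whonix-Gateway.img
du -h Whonix-Gateway.img

# copy without destroying the sparseness
cp --sparse=always Whonix-Gateway.img /target/path/
rsync --sparse Whonix-Gateway.img /target/path/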

Also, maybe try these search terms:

  • site:qubes-os.org sparse
  • raw image sparse file
The problem is that the bootable raw .img file ends up being a ~100GB file. And it takes a LONG time (~1.5 hrs) to transfer it from the Qubes VMs into dom0, as is needed.
Would be solved with sparse files.
Every other way I do it ultimately fails to boot, which might be related to what Patrick mentioned...
Not sure if this is related.
Using qemu-img to resize the image does shrink it, but the result then fails to boot.

qemu-img resize Whonix.img 5G


Not required when using sparse files.

Would the only way to put this down to a smaller disk allocation be to build Whonix from scratch?
VMSIZE is supported as a build option, but I hope and guess it won't be required.
If not a big deal, could a temporary testers-only release have a small disk allocation, like maybe ~ 10GB?
I hope and guess we can avoid this by using sparse files. I would be very surprised if Qubes OS didn't support them. I would also be surprised if Qubes OS supported only raw images and no qcow2.

[quote=“Patrick, post:42, topic:374”]Does Qubes OS support qcow2 images? Can you ask them please? If not, post a feature request against Qubes OS?

Then we wouldn’t need to also supply official raw images. Also, qcow2 images are more feature-rich.[/quote]

I tested the Whonix 8.2 .qcow2 images in Qubes and they did not work for me.

  1. Tested with the “qvm-create --root-move-from” command (see the sketch after this list). This changes the .qcow2 image filename to “root.img” for the Qubes HVM. But as the machine starts, it reports that there is no bootable device.

  2. Tested by manually copying the .qcow2 image into a Qubes AppVM directory as “root.qcow2” and then manually edited the VM’s .conf file to point to “root.qcow2” instead of “root.img” for the “xvda” device. But as the machine starts it reports that the “root.img” file is missing and cannot boot.
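For reference, the first test was run roughly like this (a sketch; the VM name and label are placeholders, and the exact flags should be double-checked against qvm-create --help):

qvm-create whonix-gateway-hvm --hvm --label red --root-move-from Whonix-Gateway.qcow2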

I will ask the Qubes team to be sure and also put in a feature request to support .qcow2 images.

And thanks for the tips on working with the images as sparse files. I will try to utilize this sparse attribute for keeping the image sizes down when working with them in Qubes.

I’m currently working on getting networking configured and working between Whonix 8.2 VM images in Qubes HVMs.

The mechanisms by which I am tweaking the networking are:

Qubes utilizes 10.137.2.X IP addresses.

Question:

Other than in the “/etc/network/interfaces” file, does Whonix hardcode or rely upon the 192.168.0.10 / 192.168.0.11 IP addresses? Could I simply change them here to 10.137.2.X-based addresses and expect Whonix to work?

Good news is, they’re not hard coded in the strong sense. Internal LAN IP addresses are more or less arbitrarily chosen and there is no inherent dependency on 192.168[…].

We changed IPs between Whonix 8 and Whonix 9. Perhaps diff the files /etc/network/interfaces [/etc/network/interfaces.whonix] and see what has changed. [As per dev discussion on the Whonix Forum]

Whonix 8:
https://github.com/Whonix/Whonix/blob/Whonix8/whonix_gateway/etc/network/interfaces.whonix

Whonix 9
https://github.com/Whonix/whonix-gw-network-conf/blob/master/etc/network/interfaces.whonix

Unfortunately, they are hard coded in a non-strict sense in various files, which means if you change them in one place, you must change them in all places. [And due to the nature of config files, there is no way to use variables.]

To find out which files you need to change:

  1. Check out the git tag of the Whonix version you are using, such as 8.2.
  2. Run: grep -r 192.168 *

And if you’re unsure, compare those files with Whonix 9. IPs were changed there as well.
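A rough end-to-end sketch of the above (the tag name is an assumption; check git tag for the exact one, and the replacement IP is only an example):

git clone https://github.com/Whonix/Whonix.git
cd Whonix
git tag
git checkout 8.2
grep -rl 192.168 *

# then change each occurrence on the running system, for example:
sudo sed -i 's/192\.168\.0\.10/10.137.2.16/g' /etc/network/interfaces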

Patrick, I put in a feature request for qcow2 images over at the Qubes developer mailing list.

Joanna just left a reply here…

https://groups.google.com/d/topic/qubes-devel/zHrCA8Zp7No

They don’t want to add the extra complexity of qcow2 image support in dom0 to the TCB, compared to raw images.

She pointed out a potential way to boot from qcow2 images in other less trusted non-dom0 VMs. However, at first glance, this seems a bit too messy for an officially packaged Qubes + Whonix solution.

She was quite happy to hear that there is consideration of Qubes support for Whonix though!

Thanks Patrick for the info on Whonix internal IPs. Got it.

Update:

As previously mentioned, I’ve got the Whonix-Gateway and Whonix-Workstation VM images booted up and can operate the desktops fine as HardwareVMs (HVM).

But I’m presently bottlenecked on getting the basic networking infrastructure in place for the 2 Whonix HVMs.

By default, Qubes allocates a single NIC (network interface card) to each HVM, which Debian/Whonix can use as eth0.

The Whonix-Gateway HVM needs a 2nd NIC to establish an internal network between the Whonix-Gateway and Whonix-Workstation.

On the Qubes mailing list, I was recommended using the native Xen “xl” toolkit commands to attach and modify network interfaces for the HVMs.

Reference: https://groups.google.com/d/topic/qubes-users/RFXoZ3zt-PE

Example code to attach new network interface to Qubes VM:

xl network-attach VMNAME script=/etc/xen/scripts/vif-route-qubes ip=IPADDRESS backend=BACKEND_VMNAME
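For example (a sketch; the VM names, IP, and the vif index passed to network-detach are placeholders):

# attach a second NIC to the Whonix-Gateway HVM, backed by firewallvm
xl network-attach whonix-gateway script=/etc/xen/scripts/vif-route-qubes ip=10.137.2.16 backend=firewallvm

# inspect or remove attached interfaces
xl network-list whonix-gateway
xl network-detach whonix-gateway 1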

I’ve tested this out with a number of different VMs.

Xen consistently reports that a new network interface is attached.

However, whether the VM OS can actually use it to bring up a second NIC (eth1) inside the guest varies from scenario to scenario.

Variables that seem to affect successful eth1 establishment:

  • VM OS Type (Debian, Whonix, etc)
  • PV (para-virtualized) or HVM (fully virtualized), i.e. Qubes templates vs. HVMs
  • xl Command “backend=” Selection

One of the more interesting results is with the Whonix-Gateway 8.2 HVM: in the “xl network-attach” command, if I set “backend=firewallvm” then I can establish eth1 in the Whonix-Gateway, but if I set “backend=whonix-workstation” then I cannot.

Same/similar issues when attempting to detach and attach Whonix-Workstation eth0 with “backend=whonix-gateway”.

Things should start to come alive with Qubes + Whonix once this base networking is established and the packets can flow.

Working to find a consistent and working method for getting the networking infrastructure setup for Whonix’s internal network between the Gateway and Workstation.

No luck yet. Still persisting.

I suggest learning how to do this without Whonix being involved first. This is because Whonix adds extra complexity such as Whonix’s firewall that may break your efforts.

Two plain Debian HVMs may be better suited for these tests. Maybe the existing Qubes OS Debian template AppVMs would also suffice for learning this. Once you are at that point, go back to trying it with Whonix. If it then still doesn’t work, the problem has been narrowed down to Whonix.

[quote=“Patrick, post:49, topic:374”]I suggest learning how to do this without Whonix being involved first. This is because Whonix adds extra complexity such as Whonix’s firewall that may break your efforts.

Two plain Debian HVMs may be better suited for these tests.[/quote]

Thanks for the astute suggestion Patrick. Yes, I have been using plain Debian as a baseline comparison for a number of my configuration tests, in order to isolate out any Whonix specific issues, including this one involving the networking. Checking baselines against Fedora, AppVMs vs. HVMs, when necessary, etc.

Some good news…

I’ve got PING packets flowing between the Whonix-Gateway HVM (eth1) and Whonix-Workstation HVM (eth0) and showing up to the other HVM’s adapter inside of Whonix (ifconfig).

This was accomplished using Qubes IP addresses (10.137.2.X).

I’ve so far only changed the IP configurations in “/etc/network/interfaces” to match the Qubes allocated IPs.

The “backend” for all of these HVM’s adapters is set to “firewallvm”. So all the packets from these HVMs are flowing through this firewallvm domain.

I then modified the iptables of the firewallvm to forward the packets between the HVMs, using the instructions provided by the Qubes firewall documentation.

sudo iptables -I FORWARD 2 -s <IP address of A> -d <IP address of B> -j ACCEPT

Reference: Qubes firewall documentation
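In my case the rules looked roughly like this (the 10.137.2.x addresses are placeholders for the Qubes-assigned IPs of the two Whonix HVMs):

# run inside firewallvm; allow traffic in both directions between the two HVMs
sudo iptables -I FORWARD 2 -s 10.137.2.16 -d 10.137.2.17 -j ACCEPT
sudo iptables -I FORWARD 2 -s 10.137.2.17 -d 10.137.2.16 -j ACCEPT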

And so some packets are flowing between the Whonix-Gateway HVM and Whonix-Workstation HVM.

Going to work on getting full Whonix Tor traffic flowing next. Getting closer! :)

Update:

Here’s the situation I’m presently in with getting inter-VM Whonix traffic flowing:

I believe I’ve got the proper network adapters (NICs) established for Whonix VMs.

The present hangup is with getting Qubes IP and Whonix IP address configurations in sync with each other.

Potential Approaches:

  • Approach #1) Adapting Qubes IP configuration to Whonix-based IPs (xl network-detach / xl network-attach).

  • Issues: Using the Xen “xl” toolstack, I can seemingly get the IPs set to Whonix-based IPs for the network adapters. However, the traffic does not flow. Maybe this is just a matter of modifying the rules in iptables for the “firewallvm”.

  • Approach #2) Replacing Whonix IPs with Qubes IPs within the running Whonix VM image.

  • Issues: When I grep the running Linux filesystem for Whonix IPs, there are a lot of them, and some in binary files. Unsure as to whether replacing all Whonix IPs within a running Whonix image would work that well. Also, maybe not a convenient and predictable process for reconfiguring when Qubes IPs get reallocated or new Whonix VMs are added.

  • Approach #3) Replacing Whonix IPs with Qubes IPs at build time in a Whonix physical isolation build.

  • Issues: Too slow of a process anytime Qubes IPs get reallocated or new VMs are added.

I’m going to try hacking on the iptables rules for the firewallvm to see if I can open up Approach #1 here.

Here's the situation I'm presently in with getting inter-VM Whonix traffic flowing:

I believe I’ve got the proper network adapters (NICs) established for Whonix VMs.

The present hangup is with getting Qubes IP and Whonix IP address configurations in sync with each other.

  • Issues: Using the Xen “xl” toolstack, I can seemingly get the IPs set to Whonix-based IPs for the network adapters. However, the traffic does not flow. Maybe this is just a matter of modifying the rules in iptables for the “firewallvm”.

I advise a two-step approach:
  1. disable Whonix’s firewall (comment out the pre-up line in /etc/network/interfaces and reboot) to see whether traffic can flow when no firewall interferes (see the sketch after this list)
  2. re-enable Whonix’s firewall, perhaps modifying it
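A sketch of what step 1 looks like in /etc/network/interfaces on the Whonix-Gateway (the eth0 stanza is only an example; the eth1 address is the usual Whonix 8 internal gateway IP and may differ on your image):

auto eth0
iface eth0 inet dhcp
#pre-up /usr/bin/whonix_firewall

auto eth1
iface eth1 inet static
address 192.168.0.10
netmask 255.255.255.0
#pre-up /usr/bin/whonix_firewall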
- Approach #2) Replacing Whonix IPs with Qubes IPs within the running Whonix VM image.
  • Issues: When I grep the running Linux filesystem for Whonix IPs, there are a lot of them, and some in binary files. Unsure as to whether replacing all Whonix IPs within a running Whonix image would work that well.

I don’t think you should or need to grep the running Linux filesystem for that purpose. It would be better to do as advised in a previous post: grep the checked-out git tag of the Whonix version you are using for the IPs to get a list of files where they are used. Then you can also change them in the running Linux filesystem. After a reboot that should be as good as an installation built with these changed IPs, because there is no magic involved (Whonix - Overview).
- Approach #3) Replacing Whonix IPs with Qubes IPs at build time in a Whonix physical isolation build.
  • Issues: Too slow of a process anytime Qubes IPs get reallocated or new VMs are added.

Indeed.
Also, maybe not a convenient and predictable process for reconfiguring when Qubes IPs get reallocated or new Whonix VMs are added.
Indeed.

Qubes OS’s TorVM sorted this out somehow, but I never found time to look into how.

Problem:

First some progress, I think I’ve now got normal traffic opened and flowing between Whonix-Workstation and Whonix-Gateway using “Approach #1) Adapting Qubes IP configuration to Whonix-based IPs”.

But I am unable to test it with normal Tor internet traffic, because, on the Whonix-Gateway, I am strangely unable to get an internet connection on eth0.

I can successfully get a DHCP and Static eth0 internet connection with Qubes using my baseline plain Debian HVMs.

I’ve tried DHCP and Static in “/etc/network/interfaces”, then reset network connection using:

sudo /etc/init.d/networking restart

During network restart, DHCPDISCOVER process fails with:

DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 7
...
No DHCPOFFERS received.
No working leases in persistent database - sleeping.

Commenting out DHCP in “/etc/network/interfaces” and matching static network settings to Qubes IPs doesn’t work either after network restart. But it does in other plain Debian HVMs.

Such as:

auto eth0
iface eth0 inet static
address 10.137.2.16
netmask 255.255.255.0
gateway 10.137.2.1
broadcast 10.137.2.255
# pre-up /usr/bin/whonix_firewall (tried commented and not commented)

Seems to be a Whonix-related issue on Qubes, with the Whonix-Gateway not making an internet connection on eth0.

Also, on the Whonix-Gateway, I opened “/etc/whonix_firewall.d/30_default” and lifted all of the “dangerous” restrictions to try and get a basic internet connection working on the Gateway.

GATEWAY_TRANSPARENT_TCP=1
GATEWAY_TRANSPARENT_UDP=1
GATEWAY_TRANSPARENT_DNS=1
ALLOW_GATEWAY_ROOT_USER=1
ALLOW_GATEWAY_USER_USER=1

And reloaded Whonix firewall:

sudo /usr/bin/whonix_firewall

No success on internet connection for Whonix-Gateway.

If I can get this internet connection working on the Whonix-Gateway, then I might just have a fully functional Qubes + Whonix system up and running at this stage.

This could be related. Recent change in whonix-gw-firewall that will be introduced in version 0.4. Pretty simple change:
https://github.com/Whonix/whonix-gw-firewall/commit/827ea83bed41256bda33ab69c173df2878966a30

In other words you need to allow traffic to the dhcp server.

In /etc/whonix_firewall.d/50_user use something like:
NON_TOR_GATEWAY="192.168.1.0/24 192.168.0.0/24 127.0.0.0/8 10.152.152.0/24 10.0.2.2/24"

And replace 10.0.2.2 with the IP of the dhcp server.

[quote=“Patrick, post:54, topic:374”]This could be related. Recent change in whonix-gw-firewall that will be introduced in version 0.4. Pretty simple change:
https://github.com/Whonix/whonix-gw-firewall/commit/827ea83bed41256bda33ab69c173df2878966a30

In other words you need to allow traffic to the dhcp server.

In /etc/whonix_firewall.d/50_user use something like:
NON_TOR_GATEWAY="192.168.1.0/24 192.168.0.0/24 127.0.0.0/8 10.152.152.0/24 10.0.2.2/24"

And replace 10.0.2.2 with the IP of the dhcp server.[/quote]

Did not work, using Whonix 8.2 images in Qubes HVM.

In my working plain Debian Wheezy HVM, I found the DHCP server IP to be “10.137.2.254” with this command:

sudo grep -R "DHCPOFFER" /var/log/*

Reported several instances of:

debian dhclient: DHCPOFFER from 10.137.2.254

In the Whonix-Gateway, I created a new file “/etc/whonix_firewall.d/50_user”.

Tried it several different ways in “50_user”:

NON_TOR_GATEWAY="192.168.1.0/24 192.168.0.0/24 127.0.0.0/8 10.152.152.0/24 10.137.2.254/24"

NON_TOR_GATEWAY="192.168.1.0/24 192.168.0.0/24 127.0.0.0/8 10.152.152.0/24 10.137.2.0/24"

NON_TOR_GATEWAY="192.168.1.0/24 192.168.0.0/24 127.0.0.0/8 10.152.152.0/24 10.137.2.1/24"

NON_TOR_GATEWAY="192.168.1.0/24 192.168.0.0/24 127.0.0.0/8 10.152.152.0/24 10.137.2.254"

NON_TOR_GATEWAY="192.168.1.0/24 192.168.0.0/24 127.0.0.0/8 10.152.152.0/24 10.137.2.254 255.255.255.255"

etc

Also tried putting this “NON_TOR_GATEWAY” command into “/etc/whonix_firewall.d/30_default”.

Also tried modifying the original “NON_TOR_GATEWAY” command in “/usr/bin/whonix_firewall”.

Also tried blanking the entire “/usr/bin/whonix_firewall” file contents with white space.

Each time, I reset the Whonix firewall and then reset the network connection.

sudo /usr/bin/whonix_firewall
sudo /etc/init.d/networking restart

Have tried with DHCP settings in “/etc/network/interfaces”.

auto eth0
iface eth0 inet dhcp

Have tried with Static settings in “/etc/network/interfaces”.

auto eth0
iface eth0 inet static
address 10.137.2.16
netmask 255.255.255.0
gateway 10.137.2.1
broadcast 10.137.2.255

No success.

Yet, as mentioned, both DHCP and Static networking works fine with plain Debian Wheezy in Qubes HVM.

Also tried blanking the entire "/usr/bin/whonix_firewall" file contents with white space.
This probably won't work. That is why I said you can comment out the pre-up line in /etc/network/interfaces if you want to disable it entirely, because when /usr/bin/whonix_firewall does not exit 0, network will not come up. The shortest possible /usr/bin/whonix_firewall would be

#!/bin/bash
exit 0

and it must be executable. Writing the “exit 0” right below “#!/bin/bash” achieves the same. Blanking the file is not required.
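For example, something like this would create that minimal script (just a sketch):

sudo tee /usr/bin/whonix_firewall <<'EOF'
#!/bin/bash
exit 0
EOF
sudo chmod +x /usr/bin/whonix_firewall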

So when Whonix’s firewall is disabled, does DHCP / networking fully work?

If not, you can isolate the offending package by starting with a plain Debian VM and installing the Whonix network-related packages one by one, or perhaps by starting with a Whonix-Gateway VM and uninstalling network-related packages one by one. As an alternative to uninstalling, you could also look at what a package actually does and manually undo it to get the same effect.

The full package list can be found here:

(~5 pages; ~100 packages, only a fraction related to network)

The main gateway networking related packages that might cause this are:

anon-gw-anonymizer-config
whonix-gw-firewall
ipv4-forward-disable
ipv6-disable
anon-gw-dhcp-conf
anon-gw-dns-conf
whonix-gw-network-conf
uwt

To automatically manage package dependencies, you can install them from the official Whonix APT repository, using the testers (!) repository:

Instructions similar to:

(Drop the uwt if you don’t have uwt installed yet. [In the line using apt-get, drop everything before apt-get.])

Alternatively, you could also create a local apt repository and install them one by one from there. We have scripts to mostly automate it, but they need some editing and documentation is scarce (because we don’t support this use case yet).
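A rough sketch of how such a bisection could look on a plain Debian VM (this assumes the Whonix APT repository is already configured as described above; the package order is arbitrary):

sudo apt-get update
for pkg in whonix-gw-network-conf anon-gw-dhcp-conf anon-gw-dns-conf whonix-gw-firewall; do
    sudo apt-get install -y "$pkg"
    # re-test DHCP / networking before installing the next package
done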

Tried that too.

[quote=“Patrick, post:56, topic:374”]because when /usr/bin/whonix_firewall does not exit 0, network will not come up. The shortest possible /usr/bin/whonix_firewall would be

#!/bin/bash
exit 0[/quote]

Good to know. Just tried that too.

On the Whonix-Gateway, DHCP does not seem to work under any condition, whether whonix_firewall is enabled or disabled.

Maybe this could have something to do with it?..

Even though we do have a small DHCP server (that runs inside HVM untrusted stub domain) to make the manual network configuration not necessary for many VMs, this won't work for most modern Linux distributions which contain Xen networking PV drivers built in (but not Qubes tools) and which bypass the stub-domain networking (their net frontends connect directly to the net backend in the netvm), and so our DHCP server is not useful.
https://qubes-os.org/wiki/HvmCreate

But DHCP does work successfully in my baseline plain Debian Wheezy HVM.

When attempting DHCPDISCOVER it says:

DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 7

Maybe this call to “255.255.255.255 port 67” is an issue for Whonix in Qubes HVM?

Interesting result…

With Static networking on the Whonix-Gateway eth0, and opening up iptables in the firewallvm, I did some “curl” calls to test Apache webservers I installed in other HVMs on the Qubes system.

My Whonix-Gateway had the following Static IP configuration in “/etc/network/interfaces”:

auto eth0
iface eth0 inet static
address 10.137.2.16
netmask 255.255.255.0
gateway 10.137.2.1
broadcast 10.137.2.255

When I make a “curl” call to another plain Debian Wheezy HVM (IP: 10.137.2.19) running an Apache webserver, it does not work from the Whonix-Gateway, even though it works from other HVMs with iptables adjusted in the firewallvm.

curl.whonix-orig 10.137.2.19

However, the interesting part happens when I do the same “curl” call to the Whonix-Workstation HVM (IP: 192.168.0.11) from the statically configured Whonix-Gateway eth0 (IP: 10.137.2.16): it actually goes through and successfully downloads the test Apache webserver page “It works!”.

curl.whonix-orig 192.168.0.11

So this test suggests that Static networking on the Whonix-Gateway (eth0) is at least partially working, since it was able to fetch a webpage from another HVM on port 80.

During this test, the Whonix-Gateway whonix_firewall was disabled, and there was no eth1 adapter present on the system.

But for whatever reason, the Whonix-Gateway is unable to reach the internet or other 10.137.2.X IP addresses. Yet with Static (10.137.2.X) networking it did successfully access an Apache webserver on the Whonix-Workstation HVM (192.168.0.11).

[quote=“Patrick, post:56, topic:374”]If not, you can isolate the offending package by starting with a plain Debian VM and installing the Whonix network-related packages one by one, or perhaps by starting with a Whonix-Gateway VM and uninstalling network-related packages one by one. As an alternative to uninstalling, you could also look at what a package actually does and manually undo it to get the same effect.

…[/quote]

Thanks Patrick. I may have to get into this networking package isolation approach next.

Success!

Qubes + Whonix is now officially working on my system.

Full Tor traffic is flowing from Whonix-Workstation HVM to Whonix-Gateway HVM to the internet, all inside of Qubes OS.

Currently using:

  • Qubes OS R2rc2
  • Whonix 8.2

I’ve now got Qubes + Whonix fully functional for the very first time.

I’m going to have to do some optimization work now to streamline the process a bit. Plus I should do some leak testing, etc.

Then I will be sure to do a full step by step instructional write-up for everyone.

Maybe such Qubes + Whonix instructions could be incorporated into the Whonix Wiki somewhere?

And from there, we can work on further fixes, streamlining, and expanding out to other versions and approaches for Qubes + Whonix.

Thanks for everyone’s help so far! Especially Patrick!

As of today, Qubes + Whonix is finally a reality. :D

Congratulations!

[quote=“WhonixQubes, post:58, topic:374”]Then I will be sure to do a full step by step instructional write-up for everyone.

Maybe such Qubes + Whonix instructions could be incorporated into the Whonix Wiki somewhere?[/quote]
I’d appreciate and hope for that VERY MUCH!

We already have:

So let’s call that page

Unless, you have a better suggestion, of course.

Thanks! You’ve been a BIG help, Patrick. Very much appreciated.

[quote=“Patrick, post:59, topic:374”][quote author=WhonixQubes link=topic=392.msg3584#msg3584 date=1408740179]
Then I will be sure to do a full step by step instructional write-up for everyone.

Maybe such Qubes + Whonix instructions could be incorporated into the Whonix Wiki somewhere?
[/quote]
I’d appreciate and hope for that VERY MUCH![/quote]

Will absolutely make sure to do this for the Whonix community.

I’d like to see and help make Qubes + Whonix development go even further. Hopefully others can start jumping on the Qubes + Whonix bandwagon. Getting step by step instructions out there will hopefully get more people using and contributing to Qubes + Whonix.

[quote=“Patrick, post:59, topic:374”]We already have:

So let’s call that page

Unless, you have a better suggestion, of course.[/quote]

Thanks Patrick!

Small suggestion…

Maybe a “cleaner”, more streamlined URL would simply be:

without the “_OS” in the URL.

Basis of reasoning:

  • Cleaner URL for referring people to.
  • Matches one word look of the other virtualization platform pages you mentioned.
  • Qubes developers and others refer to Qubes OS as just “Qubes” a lot anyway.
  • Joanna explicitly talked about Qubes without the “OS” suffix in her Odyssey Framework blog about moving Qubes OS beyond just being Qubes as a standalone OS. Quoted below:
There is also one more “limitation” of Qubes OS, particularly difficult to overcome... Namely that it is a standalone Operating System, not an application that could be installed inside the user's existing OS. While installing a new application that increases the system's security is a no-brainer for most people, switching to a new, exotic OS, is quite a different story...

So, what is Qubes then? Qubes (note how I’ve suddenly dropped the OS suffix) is several things:

Turns out this is not as difficult as we originally thought, and this is exactly the direction we’re taking right now with Qubes Odyssey!

So, we might imagine Qubes that is based on Hyper-V or even Virtual Box or VMWare Workstation. In the case of the last two Qubes would no longer be a standalone OS, but rather an “application” that one installs on top of an existing OS, such as Windows.

Ultimately I am fine with the page either way.

Thanks again!