Qubes + Whonix

That’s good news to hear! Thanks Patrick.

Based on that being the case, I’ll probably go for a Whonix HVM (Debian Wheezy) without Qubes Tools first, and then expand out from there with either Qubes Tools Debian Wheezy integration or Debian Jessie Whonix compatibility.

I need to get some additional downloads onto my Qubes OS computer to proceed, but I don’t want to subject them to the clearnet. So I am first going to set up a separate box with Whonix-Gateway on it and route my Qubes OS downloads through it.

I’m going to be offline for a little while and then back here next week to pick things up again.

Qubes OS R3 (The Invisible Things Lab's blog: Qubes OS R3 Alpha preview: Odyssey HAL in action!) talks about libvirt support. Has this become reality, and is it included in the latest recommended version of Qubes OS? Probably yes?

This would be very good news and simplify the Qubes OS endeavor a lot, I think.

There has been quite a lot of activity towards Whonix libvirt / KVM support lately.

And if Qubes OS uses libvirt as well, you might be able to use our libvirt XML files (the VM description files that configure the hardware, network interfaces, network configuration, and so forth) (https://github.com/Whonix/Whonix/tree/master/libvirt) for Qubes OS. A few changes may be required, maybe not. But if this works out, it would ease Qubes OS support a lot.

Some Qubes OS development notes I almost forgot about:

Hi everybody… I’m back online and back to work on the Qubes + Whonix project.

Getting my Whonix-Gateway box set up, running some downloads into Qubes, then I will be live testing the details of various Qubes solutions.

Excited to get this working soon!

Interesting, Patrick. I’m not sure whether any of the R2 releases include the libvirt support or not. I know the blog post was geared towards R3 with the “Odyssey Framework”, though. Still, the post is from over a year ago, so maybe this libvirt support has since been incorporated into R2?

[quote=“Patrick, post:22, topic:374”]This would be very good news and simplify the Qubes OS endeavor a lot, I think.

There has been quite a lot activity towards Whonix libvirt / KVM support lately.

And if Qubes OS uses libvirt as well, you might be able to use our libvirt xml files…

But if this works out, this would ease Qubes OS support a lot.[/quote]

Thank you, Patrick. I will be sure to keep libvirt / KVM on my radar as a potential approach.

Although I am less inclined to use KVM, due to the reasons the Qubes developers gave for choosing Xen over KVM.

Discussed in the Qubes Architecture Spec document:

http://files.qubes-os.org/files/doc/arch-spec-0.3.pdf

3.2. Xen vs. KVM security architecture comparison

Summary

We believe that the Xen hypervisor architecture better suits the needs of our project. Xen hypervisor is very small comparing to Linux kernel, which makes it substantially easier to audit for security problems. Xen allows to move most of the “world-facing” code out of Dom0, including the I/O emulator, networking code and many drivers, leaving very slim interface between other VMs and Dom0. Xenʼs support for driver domain is crucial in Qubes OS architecture.

KVM relies on the Linux kernel to provide isolation, e.g. for the I/O emulator process, which we believe is not as secure as Xenʼs isolation based on virtualization enforced by thin hypervisor. KVM also doesnʼt support driver domains.

Wow. Looks like a new official Qubes version was just released for download (Qubes R2rc2).

And an accompanying blog post that has a few pieces of info relevant to the Qubes + Whonix endeavor.

Regarding Debian Template:

Speaking of different Linux distros -- we have also recently built and released an experimental (“beta”) Debian template for Qubes AppVMs, a popular request expressed by our users for quite some time. It can be readily installed with just one command, as described in the wiki. It is supposed to behave as a first class Qubes AppVM with all the Qubes signature VM integration features, such as seamless GUI virtualization, secure clipboard, secure file copy, and other integration, all working out of the box. Special thanks to our community contributors for providing most of the patches required for porting of our agents and other scripts to Debian. This template is currently provided via our templates-community repo, but it nevertheless has been built and signed by ITL, and is also configured to fetch updates (for Qubes tools) from our server, but we look forward for somebody from the community to take over from us the maintenance (building, testing) of the updates for this template.

https://wiki.qubes-os.org/wiki/Templates/Debian

Known issues

Probably not working as netvm or proxyvm (untested as of today)

It seems that the current Qubes Debian Template is presently only suited for AppVMs.

So not sure if a Whonix-Gateway Qubes ProxyVM is workable yet with the Debian Template.

Regarding Qubes R3:

The R3 release (Odyssey-based), whose early code is planned to be released just after the "final" R2, so sometime in September, is all about bringing us closer to that "automatic transmission" version.

Now I will have to go get this new version of Qubes R2rc2 installed on my machine for our Qubes + Whonix work.

[quote=“z, post:15, topic:374”]An open question, most likely to Patrick, does Qubes provide new opportunities for connection chaining? Like could it make it easy to chain connections (i2p-Gateway, VPN-Gateway, etc.) with also much better leakproof firewalls between them.

Could Whonix Gateway become a template for Qubes where you could choose the connection type for each Whonix Gateway (proxy,vpn,i2p,tor…) and chain easily as you like? Or does Qubes doesn’t make things any easier in this context?[/quote]
I had this in mind as stackable gateways. This should work equally with any virtualizer. I don’t know, maybe Qubes OS GUI features make this easier, no idea.

I didn’t suggest using KVM to you. Libvirt is an abstraction layer. The same libvirt VM description XML file can describe how a VM should be configured (virtual hdds, virtual network cards and so forth) for VirtualBox, KVM, Xen, Qemu and perhaps even Qubes OS (Xen). Only one line changes. So when I suggest having a look at our libvirt files for KVM, I am not suggesting using KVM. You change “kvm” to “xen” and maybe(!) that’s all you need to configure the VM for Qubes OS.
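To make that one-word change concrete, here is a sketch using a stand-in XML fragment (the real files live at https://github.com/Whonix/Whonix/tree/master/libvirt; the file name and contents here are simplified assumptions, not the actual Whonix files):

```shell
# Create a minimal stand-in libvirt domain description:
cat > /tmp/whonix-gateway-demo.xml <<'EOF'
<domain type='kvm'>
  <name>Whonix-Gateway</name>
</domain>
EOF

# The hypervisor is selected by the domain "type" attribute alone;
# switching from KVM to Xen is a one-word edit:
sed -i "s/domain type='kvm'/domain type='xen'/" /tmp/whonix-gateway-demo.xml
grep "domain type" /tmp/whonix-gateway-demo.xml
```

Everything else in the file (disks, network interfaces, and so on) stays the same, which is exactly why libvirt could ease Qubes OS support.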

I'm not sure whether any of the R2 releases include the libvirt support or not. I know the blog post was geared towards R3 with the "Odyssey Framework", though. Although, the post was from over a year ago, so maybe this libvirt support has been since incorporated into R2?

Maybe not. They write in the blog post you linked:

The R3 release (Odyssey-based), whose early code is planned to be released just after the "final" R2

Maybe read it again in context. Looks like they’ll be using libvirt in R3. But maybe it doesn’t matter whether it is Odyssey-based or not. Since Qubes OS boots some kind of Fedora derivative, as I understand it, you would suppose you can install libvirt in dom0, if it is not already installed by default. And once libvirt is installed, change our libvirt XML files from kvm to xen. Maybe that’s all.

Ahh ok. I got you. Yes, I knew libvirt was a separate layer from KVM. I wasn’t sure whether the Whonix work had been made “too” KVM-specific or not. Your informative follow-up here checks that concern for me, and that is good news. Patrick, thank you for making me more aware of this option. Will look into it with a higher priority now.

Interesting notion on installing libvirt in R2 dom0 and using our current Whonix with it. I’m not very familiar with libvirt yet, but the basic concept you’re talking about makes sense to me. As I get into detailed testing this week, I will certainly consider this approach. Thank you! :slight_smile:

  • Just as a follow up.
  • By no means should it say anything more than it says. So just continue what you’re doing. No need to switch your approach.
  • Whonix’s Debian packages are now compatible with Debian jessie, as I was able to build a Whonix image with Debian testing (frozen snapshot.debian.org) sources using git tag 8.6.4.8 and the --testing-frozen-sources build script switch.
  • However, there is still no official support for Debian jessie.
  • Packages could all be built and were installable. But they are not tested yet.
  • Most should work. Maybe sclockad [component of sdwdate] won’t work due to dependencies. Needs testing.
  • Full image build is not yet bootable due to some newly introduced bug in grub in jessie. (“debian jessie based build does not boot [grub bug] - error: file ‘/grub/i386-pc/normal.mod’ not found” https://github.com/Whonix/Whonix/issues/263) - But this should not be of concern for Qubes + Whonix endeavor, because you start with a bootable image [and don’t need 486 compatibility kernel anyway].
  • Will do some more Debian jessie Whonix package compatibility testing (https://github.com/Whonix/Whonix/issues/263).

[quote=“Patrick, post:29, topic:374”]- Just as a follow up.

  • By no means should it say anything more than it says. So just continue what you’re doing. No need to switch your approach.
  • Whonix’s Debian packages are now compatible with Debian jessie…[/quote]

Thanks for the update, Patrick!

Good news to hear this Debian Jessie compatibility progress for Whonix.

When I get to testing this approach, I will be sure to make use of this Whonix version or newer.

Will try to build 8.6.4.8 using Debian stable frozen sources, then boot it and update to testing (jessie). That will be a good test for "Debian jessie package compatibility" (#243).

Succeeded with Whonix-Gateway as well as Whonix-Workstation.

Nevertheless, Debian testing isn’t well suited for Whonix end users. Whonix 7 was based on Debian testing. It was an okay choice at that time. And it was a nightmare. Packages were constantly updated and users kept asking what all those packages were about. VM builds and physical isolation builds from source code constantly broke due to changes or newly introduced bugs in Debian testing. While none of the changes were security related, they were annoying and support intensive. Keeping Whonix compatible with Debian testing is a good development goal, because in future Whonix must switch to that version when it becomes the new stable. However, for end users we’re much better off producing builds based on Debian stable. So if you plan to go that route, and want to support it for end users, I advise creating a Qubes OS template based on Debian stable first.

[quote=“Patrick”]Succeeded with Whonix-Gateway as well as Whonix-Workstation.[/quote]

Nice, Patrick. Thanks for the notice.

Yes, I do remember the Whonix 7 days, based on Debian testing. I was able to track and hold back the individual packages that broke things, so as to avoid any killer bugs. However, I agree that Debian testing is not very compatible with normal end users.

I would like to see a consistent, stable, turnkey implementation of Qubes + Whonix available.

My immediate intention is geared towards my personal need to get Qubes + Whonix working for my own projects ASAP. But beyond this immediate-term need, I would also personally want a more consistent, stable, turnkey implementation for myself and the community at large.

Currently, I’ve got my entire Qubes R2rc2 machine setup, fully updated, with Qubes Debian Template loaded as well. I’m now into the detailed implementation of various approaches:

Qubes + Whonix approaches I’m actively working on or towards pursuing now:

  • Qubes HVM + Whonix .qcow2 Images

  • Qubes HVM + Debian Wheezy Install + Whonix Physical Isolation Build

  • Qubes Debian Jessie Template + Whonix Jessie Code

  • Qubes HVM + Whonix Libvirt Files

Will post further updates as I learn relevant insights and make progress on these approaches.

Xen and Qubes OS require new hardware such as Intel i5 and i7 CPUs and new mainboards, all of which possess extensive hardware tracking capabilities, rendering Qubes OS and Whonix useless. For more details read the Whonix Forum.

So far, I’ve primarily been working on the approach of importing Whonix VM disks into Qubes.

I’ve worked with the Whonix 8.2 (.ova/.vmdk & .qcow2) and Whonix 8.6.2.8 (.ova/.vmdk & .libvirt/.qcow2) downloads.

After some trial and error, I’ve been able to successfully extract the images, convert them to raw .img, transfer them into Qubes dom0, and create new Qubes Standalone HVMs with these raw .img files.
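For reference, the conversion workflow can be sketched like this (all file names are hypothetical, and the Qubes-side commands are shown as comments, since the qvm-* tools exist only in dom0 and their exact flags varied between releases; check `qvm-create --help` on your install):

```shell
# Stand-in for the downloaded appliance: an .ova is a plain tar archive
# wrapping the .ovf description plus the .vmdk disk(s), so a tiny dummy
# archive is created here just to make the unpack step concrete.
mkdir -p /tmp/whonix-import && cd /tmp/whonix-import
dd if=/dev/zero of=Whonix-Gateway-disk1.vmdk bs=1024 count=1 2>/dev/null
tar -cf Whonix-Gateway.ova Whonix-Gateway-disk1.vmdk

# Step 1: unpack the appliance to get at the disk image.
tar -xf Whonix-Gateway.ova

# Step 2 (requires qemu-utils): convert the disk to a raw image.
# A .qcow2 download converts the same way, just with a .qcow2 input file.
#   qemu-img convert -O raw Whonix-Gateway-disk1.vmdk whonix-gw.img

# Step 3 (in Qubes dom0, after transferring whonix-gw.img there):
#   qvm-create --hvm --label red whonix-gw     # flags approximate
#   cp whonix-gw.img /var/lib/qubes/appvms/whonix-gw/root.img

ls Whonix-Gateway-disk1.vmdk
```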

But these Whonix VM images ultimately do not seem to work with Qubes HVMs, at least unmodified in raw format.

The HVM begins to boot up fine and the normal blue GNU GRUB screen shows up with the various kernel boot options.

After the “3.2.0-4-686-pae” option is selected on the GNU GRUB screen, a lot of startup boot text starts scrolling.

This startup process gets to a point of the following error text:

Gave up waiting for root device. Common problems:
 - Boot args (cat /proc/cmdline)
   - Check rootdelay= (did the system wait long enough?)
   - Check root= (did the system wait for the right device?)
 - Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/sda1 does not exist. Dropping to a shell!

Then it goes into a BusyBox shell prompt without booting the Debian/Whonix OS:

BusyBox v1.20.2 (Debian 1:1.20.0-7) built-in shell (ash)

(initramfs)

It seems that the issue is due to this lack of a “/dev/sda1” device when booting in a Qubes HVM.

ALERT! /dev/sda1 does not exist. Dropping to shell!

When doing an “ls /dev” in VirtualBox Whonix, “/dev/sda” and “/dev/sda1” are present. In Qubes they are not. Maybe Whonix is configured to explicitly look for this “/dev/sda1” device to boot from, but it is somehow unavailable in Qubes?

In Qubes, the “/dev/xvda”, “/dev/xvda1”, and “/dev/xvdb” devices are present instead.

Overview of VM block devices:
https://wiki.qubes-os.org/wiki/TemplateImplementation

In another test of mine, I converted a standard (non-Whonix) Debian Wheezy VM .vmdk disk to raw .img and successfully booted it up as fully operational in a Qubes Standalone HVM. That VM did have “/dev/sdaX” devices present in the OS, though.

Ultimately I’m not sure exactly why the Whonix VM download images are not fully booting up in Qubes, though it seems to be an issue with these disk devices inside of Whonix/Debian while using Qubes.

Maybe someone more familiar with Linux / Whonix / Qubes would know what the issue is and it can be easily resolved, like maybe by making Whonix code work with Qubes disk expectations?

Or maybe it’s not worth it and proceeding to the approach of Qubes HVM + Debian Wheezy Install + Whonix Physical Isolation Build is more workable?

Looks like others have experienced this “sda vs xvda” bug with Xen, which Qubes R2 is based upon.

Searching a phrase like this will show relevant discussions about this…

“xen xvda sda”

It looks as though the Whonix VMs are configured to boot from “/dev/sda1”, but Qubes’ Xen doesn’t present “sdaX” devices.

In the BusyBox shell prompt of the failed Whonix VM in a Qubes VM, I ran the following command as the error suggested:

cat /proc/cmdline

It came back with the following response:

BOOT_IMAGE=/boot/vmlinuz-3.2.0-4-686-pae root=/dev/sda1 ro vga=0x0317 apparmor=1 security=apparmor

So, again, the Whonix VM seems to be trying to boot from root device “/dev/sda1”, but Xen presents “xvda” devices.

Very interesting…

In the initial blue screen GNU GRUB boot options, if instead of booting the default “Linux 3.2.0-4-686-pae” kernel option, I instead select the “Linux 3.2.0-4-486” kernel option, the Whonix VM desktop does SUCCESSFULLY boot up.

Maybe this is due to not using the “pae” feature (Physical Address Extension), where maybe the PAE kernel looks for a non-virtual “sda” based disk, while without PAE it automatically accepts the “xvda” based disk that Xen offers to it.

But it seems that the 486 kernel option is not desirable.

I think I read that this kernel only utilizes one processor core. Also, I believe the Whonix guides recommend PAE in VirtualBox, maybe for additional reasons.

Will continue working on this “sda vs xvda” root boot device issue…

I was just able to successfully get the Whonix VM to boot in a Qubes Standalone HVM using the “Linux 3.2.0-4-686-pae” kernel option in the GNU GRUB boot loader.

How I did this…

At the initial blue screen GNU GRUB boot loader, at the bottom, it says:

Press enter to boot the selected OS, 'e' to edit the commands before booting or 'c' for a command-line.

So while the “Linux 3.2.0-4-686-pae” kernel option was highlighted, I pressed “e” (for edit) and the following configuration line was included:

linux /boot/vmlinuz-3.2.0-4-686-pae root=/dev/sda1 ro vga=0x0317 apparmor=1 security=apparmor

Then I just changed the root boot device configuration from:

root=/dev/sda1

to…

root=/dev/xvda1

and hit “Ctrl + x” to boot with this configuration.

The Whonix visual desktop then boots up in the Qubes HVM.

However, this configuration edit does not seem to persist after VM shutdown.

Not sure what the best long-term fix would be.

At least I’ve been able to figure out the issue and a workaround so far to make the Whonix VM images bootable in a Qubes VM.

Also, I don’t have the Whonix-Gateway and Whonix-Workstation working together yet (networking). I’ve just got the Whonix VMs booting up to their desktops so far.

Going to work on networking configuration for Whonix-Gateway and Whonix-Workstation next.

Encouraging results to see the Whonix desktops in Qubes at this point though!

I figured out how to change the root boot device from “/dev/sda1” to “/dev/xvda1” and have it persist beyond shutdown.

As a root user you simply edit the “/boot/grub/grub.cfg” file and change all instances of “/dev/sda1” to “/dev/xvda1”.

This will allow the Whonix VM to boot up properly in Xen/Qubes automatically each time.
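Sketched as a command, with the substitution demonstrated on a stand-in copy of grub.cfg (on the real VM the target is /boot/grub/grub.cfg, edited as root, with a backup kept first):

```shell
# Stand-in grub.cfg holding the kernel line these Whonix 8.x images ship:
cat > /tmp/grub.cfg <<'EOF'
linux /boot/vmlinuz-3.2.0-4-686-pae root=/dev/sda1 ro vga=0x0317 apparmor=1 security=apparmor
EOF

# Keep a backup, then rewrite every sda1 reference to xvda1:
cp /tmp/grub.cfg /tmp/grub.cfg.bak
sed -i 's|/dev/sda1|/dev/xvda1|g' /tmp/grub.cfg
grep 'root=' /tmp/grub.cfg
```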

The good news is, this issue will most likely be fixed in the next Whonix version.

[When I put stuff into [ … ] it is just for your curiosity. Not important for what you’re working on in this thread.]

Great that you succeeded in manually booting them using the grub boot menu. After doing this, there is no need to manually fix /boot/grub/grub.cfg. Running

sudo update-grub

should fix it as a manual workaround until new images are released.

The version you are using indeed does hardcode /dev/sda1.

[
Historically, this is because creating a Debian/grub bootable (raw) image is quite difficult.
The devs of grml-debootstrap (GitHub - grml/grml-debootstrap: wrapper around debootstrap), which Whonix uses during its build process, fortunately figured it out, although by hardcoding /dev/sda1.

Difficult:
https://github.com/grml/grml-debootstrap/blob/master/bootgrub.mksh

You’re lucky that HulaHoop and I worked hard to get the image booting whether your system has /dev/sda or /dev/vda [a kvm thing] by using UUIDs. (Whonix Forum) And the UUID method will most likely also work for /dev/xvda [a xen thing].

It has been fixed in Whonix:
https://github.com/Whonix/anon-shared-build-fix-grub/commit/e7e50d18f6269eb983cc36f469478478e10815b8

And as a bonus a git pull request has been submitted to grml-debootstrap:

]
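As a sketch of what the UUID method in the aside above amounts to: the root filesystem is referenced by its UUID instead of a device node. The UUID value below is made up; on a real system it would come from blkid:

```shell
# Hypothetical UUID; on a real system obtain it with:
#   blkid -s UUID -o value /dev/xvda1
UUID="26e1f2f1-93e7-4a0f-8b8a-0123456789ab"

# A kernel line built this way boots identically whether the disk shows up
# as /dev/sda1 (VirtualBox), /dev/vda1 (KVM) or /dev/xvda1 (Xen):
echo "linux /boot/vmlinuz-3.2.0-4-686-pae root=UUID=$UUID ro" | tee /tmp/uuid-line.txt
```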

If it would be helpful, I could upload newer testers-only images with this and more fixes.

Nice to hear it.

Got it. Nice convention.

[quote=“Patrick, post:39, topic:374”]Running

sudo update-grub

should fix it as a manual workaround until new images are released.[/quote]

Good to know. Thanks.

That confirms my thinking. Thanks.

[quote=“Patrick, post:39, topic:374”]You’re lucky that HulaHoop and I worked hard to get the image booting whether your system has /dev/sda or /dev/vda [a kvm thing] by using UUIDs. (Whonix Forum) And the UUID method will most likely also work for /dev/xvda [a xen thing].[/quote]

Awesome. Thank you guys for doing this work!

I would test out the new testers-only images if you felt like publishing them. Don’t feel obligated though, Patrick.

Thanks!