[Help Welcome] KVM Development - staying the course

Figured out this issue; it seems to be a Debian (or kernel) vs. my PC issue:

If TPM is activated in the BIOS:

The following message will appear at the beginning of OS boot (and quickly disappear):

kernel: x86/cpu: VMX (outside TXT) disabled by BIOS

To solve it, one needs to disable the TPM feature in the BIOS.

Dunno if this is reported upstream or not.


Can you test Whonix KVM with EFI please, @HulaHoop?

KVM EFI support might have improved considerably in the meantime. For example, Debian nowadays can easily be installed on EFI and even SecureBoot-enabled systems.

Therefore it would be good to test both EFI and SecureBoot.

Then we could make any changes to the Whonix libvirt KVM config files needed to support EFI and SecureBoot.
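
For reference, a minimal sketch of what EFI support in the domain XML might look like, assuming libvirt's firmware auto-selection (firmware='efi' needs libvirt >= 5.2, the <firmware><feature> selection a newer libvirt still, and OVMF from Debian's ovmf package); untested here:

<os firmware='efi'>
  <type arch='x86_64' machine='q35'>hvm</type>
  <!-- ask libvirt to pick a SecureBoot-capable OVMF build -->
  <firmware>
    <feature enabled='yes' name='secure-boot'/>
  </firmware>
</os>
<features>
  <!-- SMM is required by SecureBoot-capable firmware -->
  <smm state='on'/>
</features>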

It is yet to be decided if EFI (and maybe later SecureBoot) will become the new default for Whonix VMs as per:


If there is anything that needs help or requires testing, please do let me know.


Btw, please follow Whonix Developments for news. If there are major testers-wanted announcements, they will be posted in the news forums.


Could vagrant be a solution to the missing .ova appliance feature for libvirt / KVM? Vagrant supports .box files.

vagrant is available in Debian.

https://wiki.debian.org/Vagrant

Supports libvirt. Package in Debian: vagrant-libvirt.

Contains:

  • /usr/share/doc/vagrant-libvirt/examples/create_box.sh
    

/usr/share/doc/vagrant-libvirt/examples/create_box.sh IMAGE [BOX] [Vagrantfile.add]

Package a qcow2 image into a vagrant-libvirt reusable box

Needs…?

  • metadata.json
  • Vagrantfile
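
For illustration, a hedged sketch of how the packaging and import steps might look (file and box names are hypothetical; create_box.sh usage as documented above, vagrant box add is standard Vagrant):

# package an existing qcow2 image into a vagrant-libvirt box
/usr/share/doc/vagrant-libvirt/examples/create_box.sh Whonix-Gateway.qcow2 whonix-gateway.box
# make the box available to Vagrant
vagrant box add whonix-gateway ./whonix-gateway.box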

I am not sure whether Vagrant can be fed existing libvirt XML files or whether it needs its own format.

But there could be limitations.

Please review:
KVM: Difference between revisions - Whonix


Long standing KVM issue:

VM internal traffic is visible on the host to network sniffers such as Wireshark and tshark, as well as to iptables (and therefore, by extension, also to corridor).

This is an issue because corridor (or something similar, if invented, such as perhaps a Whonix-Host KVM firewall) cannot be used as an additional leak test or Tor whitelisting gateway.

Quoting myself from Whonix on Mac M1 (ARM) - Development Discussion - #35 by Patrick:

The hubport option might be much more secure; something similar might also apply to Whonix KVM.

This is very important and needs the most attention to get right in order to avoid IP leaks.

From the UTM config files. Relevant options:

Whonix-Gateway

-device virtio-net-pci,netdev=external
-device virtio-net-pci,netdev=internal
-netdev user,id=external,ipv6=off,net=10.0.2.0/24
-netdev socket,id=internal,listen=:8010

Whonix-Workstation

-device virtio-net-pci,netdev=internal
-netdev socket,id=internal,connect=127.0.0.1:8010

Doesn’t look crazy. Related documentation:
Documentation/Networking - QEMU

But it has the same issue that KVM has: VM internal traffic is visible on the host to network sniffers such as Wireshark and tshark.

This has led in the past to a failure to configure corridor on a Debian host with Whonix KVM.

references:

GitHub - rustybird/corridor: Tor traffic whitelisting gateway

testing on Debian host · Issue #28 · rustybird/corridor · GitHub

related:
Using corridor, a Tor traffic whitelisting gateway with Whonix ™

So it would be much better if KVM / QEMU (UTM) would hide this from the host operating system, i.e. encapsulate the internal networking better. ChatGPT says this is possible using the hubport option, but ChatGPT unfortunately sometimes talks nonsense. Could you look into it, please?
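
For reference, a hedged sketch of what a hubport-based internal link might look like, going by the QEMU invocation documentation (-netdev hubport,id=id,hubid=hubid[,netdev=nd]); untested, and all IDs are illustrative:

# Whonix-Gateway side: attach the internal NIC to emulated hub 0,
# and connect the hub to a socket listener for the second VM
-netdev socket,id=sock0,listen=:8010
-netdev hubport,id=hubport0,hubid=0,netdev=sock0
-netdev hubport,id=hubport1,hubid=0
-device virtio-net-pci,netdev=hubport1

Note the socket leg would still traverse localhost, so whether this actually hides the traffic from the host needs verification.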



Pending changes piling up.

KVM: Difference between revisions - Whonix


Missing documentation:
As mentioned in How to use Whonix-Gateway KVM for any other VM, operating system (Whonix-Custom-Workstation)? using Anonymize Other Operating Systems is undocumented for Whonix KVM.


@Patrick please check inbox

If the user is still resorting to the command line at any point in the import process, then no. Even if the number of commands is cut down, the overhead of maintaining another abstraction layer isn’t justified IMO unless the UX is significantly better.

I’m not sure what exactly the implications of the change are, but I’d reckon that piping guest traffic through localhost is a breakdown of the security guarantees of IP isolation and of having it on its own private subnet. Unless there’s a precedent of this being done on other virtualizer platforms that I’m not aware of (and having been successfully leak-tested, of course), I’d steer clear.

That is the issue now with Whonix KVM…

Whonix KVM at time of writing:

  • VM traffic visible on the host: yes
  • Is that a problem? yes
  • What is broken? corridor; and host VPN fail closed mechanisms.
  • Potential solution, in theory: hubport

Whonix VirtualBox / Qubes-Whonix at time of writing:

  • VM traffic visible on the host: no
  • Is that a problem? no
  • What is broken? Nothing.
  • Leak tested: yes

No, that’s not what I’m seeing in my VM QEMU log found in /var/log/libvirt/qemu. There’s no mention of my NIC listening on localhost, just the SPICE server.

What is UTM?

UTM is a full featured system emulator and virtual machine host for iOS and macOS. It is based off of QEMU. In short, it allows you to run Windows, Linux, and more on your Mac, iPhone, and iPad. Check out the links on the left for more information.

The way this macOS-based emulator chooses to manipulate QEMU in order to simulate the virtual environment is quite different from how KVM uses QEMU on Linux. I think this is a case of ChatGPT taking us for a ride.

The log is irrelevant. It’s not a listen port. It’s the virtual network interfaces created by KVM on the host operating system.

VM internal traffic is visible on the host to network sniffers such as Wireshark and tshark, as well as to iptables (and therefore, by extension, also to corridor).

…as we probably wondered years ago, and as you’ve seen years ago in tshark, though finding that message would be challenging.

This is why firewall rules on the host (for example by corridor or VPN fail closed firewalls) can break Whonix-Workstation traffic.

These, so far, are the facts, and they are unrelated to ChatGPT.

UTM is also irrelevant.

libvirt is a wrapper around QEMU. virsh domxml-to-native can translate an XML file to the actual QEMU command line that will be executed.
(Audit Output of virsh domxml-to-native)
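
For example (domain name illustrative):

# print the QEMU command line libvirt would execute for a defined domain
virsh domxml-to-native qemu-argv --domain Whonix-Gateway
# or translate a standalone domain XML file
virsh domxml-to-native qemu-argv Whonix-Gateway.xml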

So if QEMU supports hubport, it can be assumed that such an important option is also available in libvirt.
(In theory, very recently added or obscure QEMU options might not exist in libvirt.)
In the worst case, libvirt: QEMU command-line passthrough could be used.
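
For illustration, a hedged sketch of the passthrough mechanism (the hubport arguments are only an example and untested):

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <qemu:commandline>
    <qemu:arg value='-netdev'/>
    <qemu:arg value='hubport,id=internal,hubid=0'/>
  </qemu:commandline>
</domain>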

KVM is also just a QEMU command line, just with different command line options. From the perspective of the final, actually executed QEMU command line, the difference is just a few parameters.

The problem is this:

<bridge name='virbr2' stp='on' delay='0'/>

The bridge virbr2 will be visible as a network interface on the host operating system to tools such as sudo ifconfig. This is the whole crux of Whonix KVM.
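
For illustration, this can be confirmed on the host with standard iproute2 / tshark invocations (interface name as in the config above):

# the Whonix networks appear as ordinary bridge interfaces on the host
ip link show type bridge
# and VM-to-VM internal traffic can be captured on them
sudo tshark -i virbr2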

Avoiding this would make a lot of issues vanish.


So the easy one first… Currently:

<network>
  <name>Whonix-External</name>
  <forward mode='nat'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='10.0.2.2' netmask='255.255.255.0'/>
</network>

Is <bridge name='virbr1' stp='on' delay='0'/> strictly required? According to https://chat.openai.com/share/5d7c6ee9-a1ea-459f-9b50-19f2f03fe2a4 it is not.

Could you try without <bridge name='virbr1' stp='on' delay='0'/> please? Potential alternative:

<network>
  <name>Whonix-External</name>
  <forward mode='nat'/>
  <ip address='10.0.2.2' netmask='255.255.255.0'/>
</network>
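
One hedged way to test this (standard virsh / iproute2 commands; file name illustrative):

# load the modified network definition and restart the network
virsh net-define Whonix-External.xml
virsh net-destroy Whonix-External
virsh net-start Whonix-External
# check whether a virbrN bridge interface still appears on the host
ip link show type bridge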

If that does not work, any other options?


This one is maybe harder… Currently:

<network>
  <name>Whonix-Internal</name>
  <bridge name='virbr2' stp='on' delay='0'/>
</network>

Isolated mode perhaps? (If I understand the libvirt documentation correctly, a <network> without a <forward> element, such as the current definition above, is already an isolated network, yet it still creates the host bridge interface.)

I did not find out how to do this.

A guess based on ChatGPT… Would this work? Potential alternative:

<network>
  <name>Whonix-Internal</name>
  <ip address='10.152.152.0' netmask='255.255.255.0'/>
</network>

Increase the KVM Gateway RAM to 2 GB, because on first start in CLI mode there otherwise is not going to be an internet connection (something doesn’t run), and give it 4 CPUs (similar to VirtualBox).
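
If that change is made, a hedged sketch of the corresponding libvirt domain XML elements (standard domain schema):

<memory unit='GiB'>2</memory>
<vcpu placement='static'>4</vcpu>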


Did some digging on the advantages of the libvirt+QEMU networking implementation, which is a combination of virtual bridges and ip/nftables routing rules, versus using the emulated SLIRP stack that VirtualBox inherited from QEMU and that UTM uses too. Another problem is that DNS and DHCP are bundled with SLIRP, and there’s no clear way for me to disable either.

SLIRP is an emulated device, meaning that while the packets are hidden inside it as far as the host is concerned, it has performance bottlenecks and has had some serious security issues in the past.

CVE-2019-6778, which is an overflow in QEMU’s SLIRP implementation

With that said, I don’t think the security and performance sacrifices are justified in order to enable some corner use cases. I can allude to it as a potential solution in the docs, but I don’t think I will go further than that to encourage or allow its usage.


There’s a modern implementation that does something similar called passt, from https://passt.top. It cannot support all features of an IP stack, and its integration with host sockets and syscalls makes me uncomfortable in considering it as an alternative.

Please tell me where I can move posts related to this discussion so I can easily reference them in the future should this be asked about again.

Did you try to remove the <bridge name...> interfaces?


As far as I can tell, it seems SLIRP is only used by VirtualBox NAT, not by the VirtualBox internal network.

Meaning, perhaps SLIRP is avoidable since we don’t want NAT for the internal network anyhow.

Apparently, VirtualBox uses a virtual network switch for the virtual internal network. Not SLIRP.

It seems there is a missing feature in KVM. It would be good if we could pinpoint the missing KVM feature through a feature request or bug report, or, absent that, ask on the KVM mailing list about this. Let’s draft a question for the KVM mailing list.

title:

KVM static internal networking without host bridge interface (virbr)

text:

How to set up an internal network between two KVM network interfaces while using static networking (avoiding dnsmasq) and while avoiding a host bridge interface (virbr)?

Currently I am using this for the network.

<network>
  <name>Internal</name>
  <bridge name='virbr2' stp='on' delay='0'/>
</network>

And then for the VM.

<interface type='network'>
  <source network='Internal'/>
  <model type='virtio'/>
  <driver name='qemu'/>
</interface>

* I would like to avoid the host `virbr2` interface. This is because ideally packet sniffers on the host such as tshark / wireshark would be unable to see the packets flowing over an internal network between two VMs.
* SLIRP should be avoided due to past security issues. [1]
* dnsmasq on the host operating system or inside the VMs should also be avoided in favor of static IP addresses.

By comparison, this is possible. [2]

Is that possible with KVM too? Could you please show an example configuration file demonstrating how to accomplish that?

[1] CVE-2019-6778
[2] VirtualBox has this capability. VirtualBox can have an internal network using static networking. No virbr bridge interfaces can be seen on the host operating system. And VM-to-VM internal traffic is not visible to packet analyzers on the host operating system either.

(No bold is used. Not sure why the forum uses bold for some words.)

Dev/KVM - Whonix seems right for developer documentation.

Mentions on Whonix for KVM

@HulaHoop, would you like me to build and upload KVM builds? It’s easy from my side, if that’s OK with you.
