[Help Welcome] KVM Development - staying the course

Did some digging on the advantages of the libvirt+QEMU networking implementation, which is a combination of virtual bridges and ip/nftables routing rules, versus the emulated SLIRP stack that VBox inherited from QEMU and that UTM uses too. Another problem is that DNS and DHCP are bundled with SLIRP and there’s no clear way for me to disable either.
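For reference, a stock libvirt NAT network definition looks roughly like this (the name, bridge, and addresses are libvirt's usual defaults and may vary per system):

<network>
  <name>default</name>
  <!-- NAT handled through host ip/nftables rules rather than an emulated SLIRP stack -->
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <!-- this <dhcp> element is what makes libvirt spawn dnsmasq for DHCP/DNS -->
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>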

SLIRP is an emulated device, meaning that while the packets are hidden inside it as far as the host is concerned, it has performance bottlenecks and has had some serious security issues in the past.

One example is CVE-2019-6778, an overflow in QEMU’s SLIRP implementation.

With that said, I don’t think the security and performance sacrifices are justified in order to enable some corner use cases. I can allude to it as a potential solution in the docs, but I don’t think I will go further than that to encourage or allow its usage.


There’s a modern implementation that does something similar called passt, from https://passt.top - it cannot support all features of an IP stack, and its integration with host sockets and syscalls makes me uncomfortable about considering it as an alternative.
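For completeness: newer libvirt releases (9.0 and later, if I remember correctly) can use passt as the backend of a user-mode interface. A sketch of how that would be wired up in the domain XML, untested here:

    <interface type='user'>
      <!-- passt backend; requires a libvirt build with passt support -->
      <!-- shown only to illustrate the wiring, not an endorsement -->
      <backend type='passt'/>
      <model type='virtio'/>
    </interface>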

Please tell me where I can move posts related to this discussion so I can easily reference them in the future should this be asked about again.

Did you try to remove the <bridge name...> interfaces?


As far as I can tell from looking it up, SLIRP is only used by VirtualBox NAT, not by the VirtualBox internal network.

Meaning, perhaps SLIRP is avoidable since we don’t want NAT for the internal network anyhow.

Apparently, VirtualBox uses a virtual network switch for the virtual internal network. Not SLIRP.

It seems there is a missing feature in KVM. It would be good if we could pinpoint a missing KVM feature through a feature request, a bug report, or, absent those, by asking on the KVM mailing list about this. Let’s draft a question for the KVM mailing list.

title:

KVM static internal networking without host bridge interface (virbr)

text:

How to set up an internal network between two KVM network interfaces while using static networking (avoiding dnsmasq) and while avoiding a host bridge interface (virbr)?

Currently I am using this for the network.

<network>
  <name>Internal</name>
  <bridge name='virbr2' stp='on' delay='0'/>
</network>

And then for the VM.

    <interface type='network'>
      <source network='Internal'/>
      <model type='virtio'/>
      <driver name='qemu'/>
    </interface>

* I would like to avoid the host `virbr2` interface. This is because, ideally, packet sniffers on the host such as tshark / wireshark would be unable to see packets flowing over an internal network between two VMs.
* SLIRP should be avoided due to past security issues. [1]
* dnsmasq on the host operating system or inside the VMs should also be avoided in favor of static IP addresses.

By comparison, this is possible. [2]

Is that possible with KVM too? Could you please show an example configuration file on how to accomplish that?

[1] CVE-2019-6778
[2] VirtualBox has this capability. VirtualBox can have an internal network using static networking. No virbr bridge interfaces can be seen on the host operating system. And VM to VM internal traffic is not visible to packet analyzers on the host operating system either.

(No bold used. Not sure why the forum uses bold for some words.)

Dev/KVM - Whonix seems right for developer documentation.

Mentions on Whonix for KVM

@HulaHoop would you like me to build and upload KVM builds? It’s easy from my side, if that’s ok with you.


No, because removing virbr0 means bidding farewell to the default NAT network, and likewise for virbr1 (Whonix external) and virbr2 (internal network). I don’t want to gut my setup and then struggle to restore these settings again. If @nurmagoz is adventurous enough, he could try experimenting. But before doing anything, let’s delve into the supposed differences between virtualizers and what is essentially the problem at hand here.

Bridges and switches are functionally the same thing; distinguishing between them is just playing semantic games at this point.

Oracle is just implementing/configuring this using their custom driver:

https://docs.oracle.com/en/virtualization/virtualbox/6.0/user/network_internal.html

… the Oracle VM VirtualBox support driver will automatically wire the cards and act as a network switch. The Oracle VM VirtualBox support driver implements a complete Ethernet switch and supports both broadcast/multicast frames and promiscuous mode.

Believe me, there are no missing features in KVM compared to any general hypervisor these days. It absolutely buries the competition as far as functionality and feature set are concerned.


The problem corridor users are running into is that corridor fails to distinguish between virtual bridge traffic and host traffic, right?

Wouldn’t common sense dictate that it somehow gets coded to ignore certain interfaces’ traffic?

Conceptually, as far as I can see, VPNs on the Linux host are running layer 2 virtual devices which conflict with KVM devices because they are on the same footing.

A cursory search shows that the KVM virtual bridges need to be configured to allow tunneling their traffic through a host VPN so they cooperate.

Thanks for the offer, but I’m working on it. It’s not out of laziness.

It’s a missing KVM feature or we don’t know how to do the same with KVM.

  • VirtualBox: corridor and host VPN fail-closed mechanisms work out of the box.
    • Much simpler, no need to rely on any upstreams to change their code to suit KVM.
  • KVM: broken
    • With KVM, it would be required to explain this issue to all upstreams and they would need to add code specifically handling this. Unrealistic.

If this is not possible with KVM, the comparative feature reduction seems clear enough that it can be called a missing feature.

Currently there is no main/dedicated maintainer for Whonix KVM.

The upstream build script and configuration files are fully functional. Despite this reality, the setup and instructions are still kept up to date by the community.

The original maintainer @HulaHoop has taken on a lesser role as a KVM contributor.


Yes, and libvirt regenerates it upon saving the config file.

I also tried different network types, like connecting directly to the physical device and a new routing mode called “open” that doesn’t apply any traffic or routing rules locally via iptables; both of these left the VM’s internet connection inoperable.
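For anyone who wants to retry the “open” routing mode, the network definition looks roughly like this (the name, bridge, and addressing below are illustrative placeholders, not my exact config):

<network>
  <name>internal-open</name>
  <!-- "open" mode: libvirt adds no iptables/firewall rules for this network -->
  <forward mode='open'/>
  <bridge name='virbr3' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'/>
</network>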


Attempted running libvirt without dnsmasq again, this time following online advice that suggested disabling the “default” network and unselecting its autostart-at-boot option. Result: libvirt still crashes, and I cannot connect to the machines after restarting the process to apply these changes.

dnsmasq is currently an indispensable dependency.


I’m just publishing the findings in order to save users/testers time trying out things that won’t work. Discussions or proposals to upstream are a different matter.


Has anyone been able to diagnose exactly why corridor refuses to start on Linux hosts? That was the problem when I tried it years ago. Without knowing exactly why, we can’t know what component is responsible for the conflict or which project to contact and make specific technical suggestions to.


Merged.

Untested!

Going back to /dev/random is a downgrade. It needlessly limits guest entropy while the underlying problems have been solved by upstream developments to the RNG subsystem and/or by having jitter-entropy installed on the host.


Both /dev/random and /dev/urandom are only fixed, i.e. always secure, as of some specific kernel version (which I don’t have handy). Until everyone is on such a kernel, /dev/random is blocking but always the secure choice. /dev/urandom is riskier with older kernels. (I am not sure whether “old” here includes the Debian bookworm kernel version.)

virtio, as far as I understand, means passing through /dev/random from the host to the VM. I.e., the VM’s /dev/random will be based on virtio, which uses the host’s /dev/random.
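Concretely, the device being discussed is the virtio-rng device in the domain XML; whether its backend points at the host’s /dev/random or /dev/urandom is exactly the choice in question (a sketch, not copied from the actual Whonix VM definition):

    <rng model='virtio'>
      <!-- host entropy source fed to the guest's virtio-rng device;
           /dev/random vs /dev/urandom is the choice under discussion -->
      <backend model='random'>/dev/random</backend>
    </rng>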

Since VMs come with jitter-entropy installed by default, even a slow /dev/random would not be an issue. So in practice, I cannot imagine any slow-randomness issues. (Slow isn’t insecure, at least.)

Even with /dev/urandom the risk should be low because even with ancient kernels by the time the VM gets started, /dev/urandom should be properly seeded already and not spew out predictable randomness. Yet, /dev/random seems the more secure choice.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Beginning from version 17.2.0.1, KVM image builds for Kicksecure / Whonix may also be created by Patrick, project founder and leader.

Key fingerprint = 916B 8D99 C38E AF5E 8ADC  7A2A 8D66 066A 2EEA CCDA
-----BEGIN PGP SIGNATURE-----

iQKTBAEBCgB9FiEEZvRiRskAcH/xDcHk6yfS+M7kGswFAmaYtqFfFIAAAAAALgAo
aXNzdWVyLWZwckBub3RhdGlvbnMub3BlbnBncC5maWZ0aGhvcnNlbWFuLm5ldDY2
RjQ2MjQ2QzkwMDcwN0ZGMTBEQzFFNEVCMjdEMkY4Q0VFNDFBQ0MACgkQ6yfS+M7k
GsxgNQ//RRwys7dxxVcfiXq2YlpYd7A/bCT+vZEgah/aC4pTT5UMDjDCYYFu0TZb
YSWIaV1YLDyc+wnqzcrnpc3P1z/D2oP9ErL2GBUPBBYnhVtL2vgTtP4DgzCGZMyB
2kVOZYkGANHi1jlOuR2D26w8IQv4/tvoIN6G/9ySSrJILB8gsA1jiaqMzge3m1BC
d/7hg/fyn25iqMDnBYb0cSM99IY3Innsfd2w8wOEdsu4WXXBdl+2PprAfvwu0Qeg
fv4G6LOmv1B8847QXdk71oxS9qpMzjaNq6g8uLx9fbjBobfk+9jbhdarzIKvf1WL
mUGu7dHww7vH+ovc9678/+g7ia9e6c+49gciQqCyuOX1NqqMqw7sRhumrzu4to+v
zDYv2bCNzmWAANsWSZtv53OcYl1EcitWb+j1738/OTvaWVyMmVrdm+G9TPk2x0a1
zaZigmXcZVXqA6J4Tn3wf8SqFPaNXOcXzmNk89SwbFL6zzODbUid31zw45xjN2ej
Ykt4VvgYuKSCnX98OJRb5LfqbaKsg+44u2d6X2J7S7/jnlbRTrYtx2CtaSbYYKcj
Kkz258iq3a4Fvog6sHtFagIrlXwhNdAeiwEF/h6pi99qMIhWguN7sSYlhIL2JZ2O
Swme97TNt90yvgUy7b/PSJSRXEAH3CJDVfgWHa0iXsf2ZE4uWw0=
=CdMM
-----END PGP SIGNATURE-----

KVM static internal networking without host bridge interface (virbr) - Users - Libvirt List Archives

Fixed, as in the quality of their entropy is now equal, but the blocking behavior, which can impact the performance of crypto apps, is still very much prevalent AFAIK. The guest kernel version is a moot point since the entropy is being directly injected from the host. By using urandom, you make sure that it is very hard for a rogue app in the guest to DoS the RNG interface.

Technically, passthrough means giving direct, unfettered control of hardware/devices on the host, which isn’t accurate here. Virtio allows faster communication by not emulating access and by securely implementing a more direct communication mechanism between guest and host. In the example you brought up, the guest’s random and urandom are both being fed from the host’s random.

Ancient kernels on the host? Unlikely, and they would have bigger security problems. The guest? Immaterial, since the source of the entropy is the improved RNG in more recent kernels, and jitter-rng acts as a last line of defense in case someone is so inebriated as to run both an old host and an old guest. Also, the switch from random to urandom was done by upstream for any default new VM template.

Quote 3D Graphics Acceleration:

Not yet functional as of Debian buster but this has been fixed upstream. Future enhancements for performance and security are planned. Will revisit in Bullseye.

This needs an update.


By passthrough I did not mean hardware access. Just in the conventional
sense “passed through from host to VM” (even if moderated through virtio).

HulaHoop via Whonix Forum:

FIxed, as in the quality of their entropy is now equal but the blocking behavior which can impact the performance of crypto apps is still very much prevalent AFAIK. The guest kernel version is a moot point since the entropy is being directly injected from the host.

The /dev/random page (which was updated) still says:

/dev/random is suitable for applications that need high quality
randomness, and can afford indeterminate delays.

And.

When read during early boot time, /dev/urandom may return data prior
to the entropy pool being initialized. If this is of concern in your
application, use getrandom(2) or /dev/random instead.

So /dev/urandom is still more risky than /dev/random.

By using urandom you make sure that it is very hard for a rogue app
in the guest to DoS the RNG interface.

The threat model being, the VM running malware exhausting the host’s
/dev/random?

Is this possible nowadays?

I think, unfortunately, there might be many ways for a VM to DoS the host
operating system. Reference:

That topic could certainly use stress testing.

The compromise here might be the VM having lower quality entropy versus
protecting the host from /dev/random overuse.

Can virtio moderate this?

Ancient kernels on the host? Unlikely and they would have bigger security problems.

The guest? Immaterial since the source of the entropy is the improved RNG in more recent kernels and jitter-rng acts as a last line of defense in case someone is so inebriated as to run both an old host and guest.

The more good sources of entropy, the better. As good as jitter-rng might
be, /dev/random might still be higher quality than /dev/urandom. So it is
best to have two high quality sources of entropy and not only one.

Also, the switch from random to urandom was done by upstream for any default new VM template.

It might be the case that upstream is not diving as deep into the
entropy topic.

Maybe a useful addition: according to Thomas Hühn, who wrote “Myths about /dev/urandom”:

“Good news: the separation between /dev/urandom and /dev/random is practically gone.”

So this needs to be checked against the latest kernel updates.