Proxmox - A dedicated KVM platform for Whonix?

apparmor module is loaded.
5 profiles are loaded.
5 profiles are in enforce mode.
/usr/bin/lxc-start
/usr/sbin/mysqld
lxc-container-default
lxc-container-default-with-mounting
lxc-container-default-with-nesting
0 profiles are in complain mode.
3 processes have profiles defined.
3 processes are in enforce mode.
/usr/bin/lxc-start (2938)
/usr/sbin/mysqld (1813)
/usr/sbin/mysqld (5403)
0 processes are in complain mode.
0 processes are unconfined but have a profile defined.

Nope, not confined.
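For reference, a quick way to check a single process (a sketch, assuming one guest is running and that its process is named kvm, as it is on Proxmox; elsewhere it may be qemu-system-x86_64):

cat /proc/$(pidof kvm)/attr/current

If that prints "unconfined", no apparmor profile is applied to it.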

OK thanks, I’ll report this to Proxmox to see whether they want to add this in a future release.

I expect they might say that it's overkill for KVM, because KVM is already its own container and apparmor isn't as necessary as it is for LXC. The exception is a bug in KVM that lets the user escalate privileges and break out of the machine to inspect the host's filesystem or memory.

Is this accurate, or is there an additional reason to have apparmor?


Each layer of defense counts. Without apparmor the host is more vulnerable.

Well, that's how I have it, and it's working for me. But I wouldn't swear it's the best configuration. The only difference is that my interfaces file has the lines "network 10.152.128.0" and "broadcast 10.152.191.255". Try adding those.
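For illustration, a full static stanza with those two lines added might look like this (a sketch only; the address is an example workstation IP, so adjust to your own setup):

auto eth0
iface eth0 inet static
address 10.152.152.11
netmask 255.255.192.0
network 10.152.128.0
broadcast 10.152.191.255
gateway 10.152.152.10

With netmask 255.255.192.0, network 10.152.128.0 and broadcast 10.152.191.255 are exactly what that subnet works out to, so the values are consistent with the 10.152.152.x addresses used elsewhere in this thread.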

I have no idea what you are doing here… But…

	post-up echo 1 > /proc/sys/net/ipv4/ip_forward

I discourage using IP forwarding. The beauty of Whonix is that we don't need IP forwarding and therefore have a much lower risk of leaks.
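On the gateway it should stay off; a quick way to verify:

sysctl net.ipv4.ip_forward

That should print net.ipv4.ip_forward = 0.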

So I can tell you what I have in case it helps. On my Whonix gateway I have this:

auto lo
iface lo inet loopback

# eth0: external interface, address via DHCP from the Proxmox host
auto eth0
iface eth0 inet dhcp

# eth1: internal interface toward the workstations
auto eth1
iface eth1 inet static
address 10.152.152.10
netmask 255.255.192.0

I already showed you what I have on my Whonix workstations.

My DHCP server is running on the Proxmox host, which has a direct connection to the Internet (more or less), and its network is a bit more complicated, but here is part of its interfaces file:

auto lo
iface lo inet loopback

# eth0: address via DHCP
auto eth0
iface eth0 inet dhcp

# eth1: physical port, enslaved to vmbr1 below
iface eth1 inet manual

# vmbr0: host-only bridge (no physical port attached)
auto vmbr0
iface vmbr0 inet static
address 10.152.152.10
netmask 255.255.255.0
bridge_ports none
bridge_stp on
bridge_fd 0

# vmbr1: bridge to the LAN via eth1
auto vmbr1
iface vmbr1 inet static
address 192.168.1.2
netmask 255.255.255.0
bridge_ports eth1
bridge_stp off
bridge_fd 0

Do you have other (non-Whonix) Proxmox VMs that can access the Internet? If not, that would be the first thing to get straight before adding Whonix into the equation.
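A plain Debian VM on the same bridge makes a good test case; if, say,

ping -c 3 deb.debian.org

works from inside it, the bridging and DHCP side of things is fine and you can move on to the Whonix-specific parts.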


Is there any progress?

Today I will follow these steps and try it! :slight_smile:


@terminus, do you have any update on this?

I’m no longer using Proxmox, sorry, so I can no longer help to debug.

Are you using another platform like Proxmox to run Whonix?

The only other one I've used is VirtualBox, which is officially supported.


Just to say, it's working on my Proxmox server :slight_smile:


I am trying to get it running there too. Could you please tell me where to turn off the time syncing? While running the system check in the workstation, I got "kvm-clock tsc hpet acpi_pm detected".

Unrelated to timesync.

How to turn off this check is already explained in the very message that mentions it. That, however, doesn't fix the original reason for the check.

How to fix the root issue? Users won’t be able to fix this. No developers are working on this.
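For completeness, if I remember the mechanism correctly, disabling the check is a one-line drop-in config, roughly:

echo 'whonixcheck_skip_functions+=" check_pvclock "' | sudo tee /etc/whonix.d/50_user.conf

Treat the file name and the check_pvclock function name here as assumptions; the warning message itself names the exact function to skip.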

sdwdate Disable Autostart

Does that mean my Whonix Workstation will have a weakness? Or is it safe to just turn the check off?

The warning is there on purpose; it is correct, not a false positive.


So, after a year full of events, I'm here again :slight_smile:
And this time with a solution for PVClock, in case someone needs it:

  1. Edit or create /etc/pve/virtual-guest/cpu-models.conf and add a new CPU type there. This simple one will work:
cpu-model: pvclockoff
        flags -kvmclock;-tsc
  2. Then use this CPU in the VM's CPU options (or via qm; see the example after this list).
  3. ACPI can be turned off in the plain VM options.
  4. I also set the option "Use local time for RTC" to No.
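If you prefer the command line, the custom model is referenced with a custom- prefix when assigning it to a VM (VM ID 100 is just an example):

qm set 100 --cpu custom-pvclockoff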

Adding flags to qm.conf doesn't work; you need to create a custom CPU type instead.
Documentation used:
https://pve.proxmox.com/wiki/Manual:_cpu-models.conf
You can find the possible flags via the console: qemu-system-x86_64 -cpu help

I'm open to critique. I'm not a pro in virtualization, so I could easily have overlooked something.