
CPU usage fluctuations while Whonix-Workstation is running

Whonix™ version and platform: Whonix-Workstation 15.0.1.3.4/15.0.1.3.9, Qubes 4.0.3
Affected component(s) or functionality: Whonix-Workstation
Steps to reproduce the behavior: Have one or more Whonix-Workstation AppVMs running while not actively using any program other than the bare system
Expected behavior: No noticeable increase in CPU usage over running the bare Qubes and its system VMs
Actual behavior: Minor but persistent fluctuations; CPU usage constantly rises and falls (about 1-2% reported in the Qubes widget)
Context: Noticed because the fluctuations constantly cause the fan to spin faster and then slower again, which is very annoying. Hopefully any increase in power usage is negligible, but it is bad if present at all. The issue does not seem to be present with AppVMs based on other templates.

I am not sure if it is appropriate to report this as a bug. I would appreciate any help or suggestions on how to diagnose the issue. Initially I thought it might be a problem with Qubes itself, so I tested with several AppVMs based on Debian 10 minimal; the issue is not present no matter how many of those VMs are running. The issue is also not present when only Whonix-Gateway is running, so I have to conclude it is somehow related to a package in Whonix-Workstation or to the Whonix configuration. Attempting to monitor CPU usage from within the VM did not give me any useful information; please suggest a better method if possible.
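For a rough in-VM measurement that needs no extra packages, here is a minimal sketch (assuming a Linux guest with procfs mounted) that samples /proc/stat twice, one second apart, and computes the overall CPU busy percentage from the counter deltas:

```shell
# First line of /proc/stat: "cpu user nice system idle iowait irq softirq ..."
# Sample 1: total = user+nice+system+idle (rough approximation)
read -r _ u n s i _ < /proc/stat
t1=$((u + n + s + i)); i1=$i

sleep 1

# Sample 2, then busy% = (non-idle delta) / (total delta)
read -r _ u n s i _ < /proc/stat
t2=$((u + n + s + i)); i2=$i
busy=$(( 100 * ( (t2 - t1) - (i2 - i1) ) / (t2 - t1) ))
echo "CPU busy over 1s: ${busy}%"
```

Running this in a loop and logging the values would at least quantify the fluctuation pattern, even if it does not name the responsible process.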

What did you do to monitor the usage within the VM? Whonix-WS runs whonixcheck and sdwdate, and there are other processes and exchanges with Whonix-GW. I would guess that is what is happening to you.

Comparing Whonix-WS to Debian-minimal might not be a fair comparison. :slight_smile:


Sorry for the late reply.

I temporarily installed the gnome-system-monitor package in an AppVM based on the Whonix-WS template and used that; I couldn’t think of much else.

It is unlikely that either whonixcheck or sdwdate is causing it, as they only perform tasks on VM startup or after a period of time, whereas I am seeing CPU usage go up and come down again every few seconds for the entire time the AppVM is running.

Some additional information:
I tried uninstalling all not strictly needed Whonix (and Qubes-Whonix) packages to see which of them might be causing it, but even that did not resolve the problem. Now that I think about it, past versions of Qubes-Whonix did not have this problem; I believe I first started noticing it in the versions that added the functionality where the Workstation informs Whonix-Gateway’s sdwdate-gui of completed time synchronization.

An unrelated oddity: setting a Whonix-Workstation AppVM’s NetVM to none makes it inform sdwdate-gui that time synchronization is broken when started (because there is no connection). Starting such a VM also force-starts Whonix-Gateway if it is not already running, despite it not being the Workstation’s NetVM. Likewise, if Whonix-Gateway is shut down while such a Workstation AppVM is running, shutting down the Whonix-Workstation AppVM force-starts the Gateway again.

So, which package or configuration is responsible for this behavior? Probably not anything in Whonix-Workstation, because somehow Whonix-Gateway is aware of Workstations that do not use it as their NetVM.

No worries. :slight_smile:

You can use top in the CLI to view processes without installing anything. Run it with sudo to see processes from all users.
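For the record, top can also run non-interactively, which makes it easier to capture a snapshot or log the fluctuations over time for later comparison (a sketch; the batch-mode flags here are from the procps-ng top shipped on Debian-based systems):

```shell
# -b: batch mode (plain text output), -n 1: exit after one iteration.
# Batch mode sorts by %CPU by default, so the top consumers come first.
snap=$(top -b -n 1 | head -n 15)
printf '%s\n' "$snap"

# To watch the fluctuation itself, log only the summary line, e.g. one
# sample every 2 seconds for a minute (commented out here):
# top -b -d 2 -n 30 | grep '^%Cpu'
```

Prefixing the first command with sudo would include processes from all users, as noted above.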

I checked, and those were the only processes I saw using any CPU. I am not sure otherwise.

I noticed that too. Check that your Whonix-WS AppVM setting for “Default DispVM template” is set to ‘none’. However, something else starts sys-whonix anyway; I do not know what it is.

Do you have an extra hard drive around that you could use to test whether this issue persists on a fresh install?
Also, do you see any abnormal behavior from any processes when running an lsof command? It can be messy in its raw form, but it is also helpful to me for spotting processes that are not functioning as they should, and for tracking down the parent processes attached to them to see if they are causing an issue.
I’m sure there is a better command for a cleaner readout, but I’ve used that method for a long time and have never been let down.
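As a rough sketch of the parent-tracking part without needing lsof installed (it may be absent in a minimal template), you can walk a suspect process's parent chain straight out of procfs, assuming a Linux guest:

```shell
# Walk a process's parent chain via /proc to see what spawned it.
# $pid starts as this shell's own PID here; substitute a suspect PID.
pid=$$
while [ "$pid" -gt 1 ]; do
    comm=$(cat "/proc/$pid/comm")
    printf '%s\t%s\n' "$pid" "$comm"
    # In /proc/<pid>/stat the PPID is the second field after the
    # parenthesised command name, so strip up to the closing paren first.
    pid=$(sed 's/.*) //' "/proc/$pid/stat" | awk '{print $2}')
done
```

Each line prints a PID and its command name, ending at PID 1, which should make it easy to see which parent keeps respawning a noisy child.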
