
Whonix-ws-16 fails to update due to timing issue

Whonix-ws-16 failed to update using the “Qubes update” utility with the following error:

Updating whonix-ws-16

Error on updating whonix-ws-16: Command '['sudo', 'qubesctl', '--skip-dom0', '--targets=whonix-ws-16', '--show-output', 'state.sls', 'update.qubes-vm']' returned non-zero exit status 20
whonix-ws-16:
  ----------
            ID: update
      Function: pkg.uptodate
        Result: False
       Comment: E: Release file for tor+https://fasttrack.debian.net/debian/dists/bullseye-fasttrack/InRelease is not valid yet (invalid for another 49s). Updates for this repository will not be applied.
       Started: 08:29:10.411910
      Duration: 5375.832 ms
       Changes:   
  ----------
            ID: notify-updates
      Function: cmd.run
          Name: /usr/lib/qubes/upgrades-status-notify
        Result: False
       Comment: Command "/usr/lib/qubes/upgrades-status-notify" run
       Started: 08:29:15.795184
      Duration: 4028.142 ms
       Changes:   
                ----------
                pid:
                    1806
                retcode:
                    100
                stderr:
                stdout:
  
  Summary for whonix-ws-16
  ------------
  Succeeded: 0 (changed=1)
  Failed:    2
  ------------
  Total states run:     2
  Total run time:   9.404 s
It then updated just fine when done via the terminal (a whonix-ws-16 terminal):
user@host:~$ sudo apt update && sudo apt upgrade
Hit:1 tor+https://deb.debian.org/debian bullseye InRelease                     
Hit:2 https://deb.qubes-os.org/r4.0/vm bullseye InRelease                      
Hit:3 tor+https://deb.whonix.org bullseye InRelease                 
Get:4 tor+https://fasttrack.debian.net/debian bullseye-fasttrack InRelease [12.9 kB]
Hit:5 tor+https://deb.debian.org/debian bullseye-updates InRelease
Hit:6 tor+https://deb.debian.org/debian-security bullseye-security InRelease
Hit:7 tor+https://deb.debian.org/debian bullseye-backports InRelease
Fetched 12.9 kB in 2s (6,506 B/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
6 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  qubes-core-agent qubes-core-agent-nautilus qubes-core-agent-networking
  qubes-core-agent-passwordless-root qubes-core-agent-qrexec
  qubes-core-agent-thunar
6 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 269 kB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 https://deb.qubes-os.org/r4.0/vm bullseye/main amd64 qubes-core-agent-networking amd64 4.0.63-1+deb11u1 [37.0 kB]
Get:2 https://deb.qubes-os.org/r4.0/vm bullseye/main amd64 qubes-core-agent-qrexec amd64 4.0.63-1+deb11u1 [48.3 kB]
Get:3 https://deb.qubes-os.org/r4.0/vm bullseye/main amd64 qubes-core-agent-thunar amd64 4.0.63-1+deb11u1 [30.0 kB]
Get:4 https://deb.qubes-os.org/r4.0/vm bullseye/main amd64 qubes-core-agent-passwordless-root amd64 4.0.63-1+deb11u1 [33.0 kB]
Get:5 https://deb.qubes-os.org/r4.0/vm bullseye/main amd64 qubes-core-agent-nautilus amd64 4.0.63-1+deb11u1 [28.6 kB]
Get:6 https://deb.qubes-os.org/r4.0/vm bullseye/main amd64 qubes-core-agent amd64 4.0.63-1+deb11u1 [91.7 kB]
Fetched 269 kB in 2s (144 kB/s)        
(Reading database ... 95353 files and directories currently installed.)
Preparing to unpack .../0-qubes-core-agent-networking_4.0.63-1+deb11u1_amd64.deb ...
Unpacking qubes-core-agent-networking (4.0.63-1+deb11u1) over (4.0.62-1+deb11u1) ...
Preparing to unpack .../1-qubes-core-agent-qrexec_4.0.63-1+deb11u1_amd64.deb ...
Unpacking qubes-core-agent-qrexec (4.0.63-1+deb11u1) over (4.0.62-1+deb11u1) ...
Preparing to unpack .../2-qubes-core-agent-thunar_4.0.63-1+deb11u1_amd64.deb ...
Unpacking qubes-core-agent-thunar (4.0.63-1+deb11u1) over (4.0.62-1+deb11u1) ...
Preparing to unpack .../3-qubes-core-agent-passwordless-root_4.0.63-1+deb11u1_amd64.deb ...
Unpacking qubes-core-agent-passwordless-root (4.0.63-1+deb11u1) over (4.0.62-1+deb11u1) ...
Preparing to unpack .../4-qubes-core-agent-nautilus_4.0.63-1+deb11u1_amd64.deb ...
Unpacking qubes-core-agent-nautilus (4.0.63-1+deb11u1) over (4.0.62-1+deb11u1) ...
Preparing to unpack .../5-qubes-core-agent_4.0.63-1+deb11u1_amd64.deb ...
Unpacking qubes-core-agent (4.0.63-1+deb11u1) over (4.0.62-1+deb11u1) ...
Setting up qubes-core-agent-qrexec (4.0.63-1+deb11u1) ...
Setting up qubes-core-agent-passwordless-root (4.0.63-1+deb11u1) ...
Setting up qubes-core-agent (4.0.63-1+deb11u1) ...
Installing new version of config file /etc/qubes-rpc/qubes.Restore ...
Removed /etc/systemd/system/sysinit.target.wants/haveged.service.
Removed /etc/systemd/system/multi-user.target.wants/haveged.service.
Created symlink /etc/systemd/system/sysinit.target.wants/haveged.service → /lib/systemd/system/haveged.service.
Leaving 'diversion of /etc/init/plymouth-shutdown.conf to /etc/init/plymouth-shutdown.conf.qubes-disabled by qubes-core-agent'
Leaving 'diversion of /etc/init/prefdm.conf to /etc/init/prefdm.conf.qubes-disabled by qubes-core-agent'
Leaving 'diversion of /etc/init/splash-manager.conf to /etc/init/splash-manager.conf.qubes-disabled by qubes-core-agent'
Leaving 'diversion of /etc/init/start-ttys.conf to /etc/init/start-ttys.conf.qubes-disabled by qubes-core-agent'
Leaving 'diversion of /etc/init/tty.conf to /etc/init/tty.conf.qubes-disabled by qubes-core-agent'
Leaving 'diversion of /etc/init/serial.conf to /etc/init/serial.conf.qubes-orig by qubes-core-agent'
qubes-sync-time.service is a disabled or a static unit, not starting it.
qubes-update-check.service is a disabled or a static unit, not starting it.
Removing obsolete conffile /etc/systemd/system/haveged.service ...
Setting up qubes-core-agent-nautilus (4.0.63-1+deb11u1) ...
Setting up qubes-core-agent-thunar (4.0.63-1+deb11u1) ...
Processing triggers for mailcap (3.69) ...
Processing triggers for desktop-file-utils (0.26-1) ...
Processing triggers for libglib2.0-0:amd64 (2.66.8-1) ...
Processing triggers for man-db (2.9.4-2) ...
Setting up qubes-core-agent-networking (4.0.63-1+deb11u1) ...
qubes-network.service is a disabled or a static unit, not starting it.
user@host:~$ 

If this message is transient, it can be safely ignored: simply try again and the update will probably work.


@Patrick: In my case the problem persists unless I change the time to the current CET value instead of the UTC value of the Whonix templates. Please see my report to the Qubes-OS community.

Could you please provide your opinion on whether this should be discussed here? TIA!

I haven’t experienced that bug and cannot reproduce it. But I must admit I use Standard Update Steps more than the Qubes Update tool.

To debug this issue, which I don’t know yet whether it is caused by Qubes or Whonix: try Standard Update Steps. Does that work?

Does the Qubes Update tool do anything special related to time? Any idea? @marmarek


Note on:

Quote Qubes OS “How to update” warning [archive]:

Updating with direct commands such as qubes-dom0-update, dnf update, and apt update is not recommended, since these bypass built-in Qubes OS update security measures. Instead, we strongly recommend using the Qubes Update tool or its command-line equivalents, […]. (By contrast, installing [archive] packages using direct package manager commands is fine.)

That generally makes sense, but as far as I know there are currently no such cases. Follow Qubes development to find out when that changes.

Also, despite that warning, using Standard Update Steps still seems better to me than not updating at all due to that issue.

Could also be an issue with Debian’s fasttrack repository. To further debug this issue, Enable Debian Fasttrack Repository in Debian Template.

I hit this issue very often, and not only for the fasttrack repo. It’s enough to be unlucky enough to update just after Debian updates the repo (which AFAIR happens every 6h). It is very annoying, especially for openQA tests, where such a failure makes the whole 3h+ job fail. Recently (because of fasttrack?) it started to happen more often, to the point I got annoyed enough to apply a workaround by setting the clock 5 minutes forward in Whonix. This also breaks split-gpg tests (where the generated test key is “not yet valid”), and we have a similar workaround there too.

Maybe you can adjust sdwdate to shift the clock only forward and not backward in templates? It does reduce its properties, but maybe for templates (which have limited network connectivity anyway) it isn’t an issue?


That’s already the case.

sdwdate isn’t running in Qubes templates.

sudo systemctl status sdwdate

● sdwdate.service - Secure Distributed Web Date
     Loaded: loaded (/lib/systemd/system/sdwdate.service; enabled; vendor preset: enabled)
    Drop-In: /usr/lib/systemd/system/sdwdate.service.d
             └─20_arch_syscall_whitelist.conf, 40_qubes.conf
     Active: inactive (dead)
  Condition: start condition failed at Sat 2021-11-20 11:26:08 UTC; 3h 39min ago
             └─ ConditionPathExists=!/run/qubes/this-is-templatevm was not met
       Docs: https://www.whonix.org/wiki/sdwdate

Nov 20 11:26:08 host systemd[1]: Condition check resulted in Secure Distributed Web Date being skipped.

Generally, sdwdate as implemented now only sets time forward, never backwards.
It is even quite careful about it → sdwdate Time Replay Protection.
The idea is that forward should always be safe (or at least safer) but backwards could produce a mess […].
How/if time/sdwdate should be handled / could be improved in Qubes templates is another interesting question for a different forum thread.

Since sdwdate currently isn’t running inside Qubes templates, I don’t understand how Whonix/sdwdate could cause such an issue.

Non-Qubes-Whonix users haven’t been reporting any APT “sources not valid yet” issues for a very long time. (Cannot recall any.)

If this happens, in dom0, please run:

qvm-run --pass-io whonix-gw-16 "date --utc" && date --utc

Replace whonix-gw-16 with the template that is currently experiencing issues. That will show the time in dom0 and the time in the Qubes TemplateVM. Then of course it would be interesting to compare that time with other Qubes Templates.

The time difference should be just 1 or 2 seconds. It is for me. And even that minor delay might come from command execution delay or from Boot Clock Randomization - Whonix. That minor difference isn’t the issue here.

Perhaps your dom0 clock is slow or fast?

Ok, then I need to investigate it in more detail, because I had assumed sdwdate was the reason. Anyway, I have hit this issue in Whonix templates only and not in plain Debian templates, so it is somehow related.


One difference comes to mind…

Non-Whonix Templates are using qubes-sync-time.service / qubes-sync-time.timer.

Whonix templates are not using it.

sudo systemctl status qubes-sync-time.service
● qubes-sync-time.service - Update time from ClockVM
     Loaded: loaded (/lib/systemd/system/qubes-sync-time.service; static)
    Drop-In: /usr/lib/systemd/system/qubes-sync-time.service.d
             └─40_qubes-whonix.conf
     Active: inactive (dead)
TriggeredBy: ● qubes-sync-time.timer
  Condition: start condition failed at Sat 2021-11-20 11:26:05 UTC; 3h 59min ago
             └─ ConditionPathExists=!/usr/lib/qubes-whonix was not met

Nov 20 11:26:05 host systemd[1]: Condition check resulted in Update time from ClockVM being skipped.

When Templates (or any Qubes VM) are started, they initially get their clock from dom0, which is probably a Xen virtualizer feature? That may be why the clock is correct for me.

Maybe the issue is happening with long-running Qubes-Whonix Templates? Maybe the issue would happen generally with long-running Qubes templates that are not using qubes-sync-time? That would also explain why I am not experiencing it, since I don’t leave Templates running for long.

What’s the purpose of qubes-sync-time? I couldn’t find any documentation/rationale for it. If there is, please let me know. Eager to read.

Is it because otherwise Xen DomU’s would start to clock drift more and more?

Generally, yes, that’s the reason. In practice it happens when you pause a VM, or in some cases (depending on the domU kernel config) under high load. The service is triggered after system suspend and on similar occasions (in addition to a timer).

As for the time on domU boot, it indeed may be the case. Xen does provide some initial clock to domU, but I’m not sure if it’s synchronized properly in all the cases. It is a different clock than dom0 uses.

In the case of tests, there is no long system uptime involved. The system is started just before performing updates. And since other templates do not have this issue, I think it’s pretty clear that at least the dom0 clock is correct.

[user@dom0 ~]$ qvm-run -ap whonix-ws-16 'date --utc' && date --utc
Sat 20 Nov 2021 03:42:51 PM UTC
Sat 20 Nov 2021 03:43:24 PM UTC
[user@dom0 ~]$ qvm-run -ap debian-11 'date --utc' && date --utc
Sat 20 Nov 2021 03:44:58 PM UTC
Sat 20 Nov 2021 03:44:59 PM UTC

This is after fresh system start, and also templates are just started here.


I have disabled (systemctl mask) the qubes-sync-time service in debian-11, and still got the correct time there (after a template restart, of course).


Also suspend/resume time fix is disabled in Qubes-Whonix templates.
(sdwdate /usr/libexec/sdwdate/suspend-post)

Are you using suspend/resume? That might result in the time in Qubes-Whonix Templates becoming stale.

As a workaround, does it help to shutdown and restart Qubes-Whonix Templates?

Maybe Qubes-Whonix Templates should use qubes.GetRandomizedTime?


Reminds me of ⚓ T387 Qubes-Whonix-Gateway as ClockVM. I still haven’t gotten around to implementing it since it’s kind of complex, not pretty, and daunting to implement. More realistic might be using sdwdate inside the ClockVM (sys-net) or using a Kicksecure-based sys-net, which comes with sdwdate pre-installed.

All the above checks are just after startup, no suspend nor long uptime is involved.


Looks good enough.
Please re-run in case this issue happens again. Or could you please add it to the Qubes Q/A scripts? It seems quite useful anyhow to have this information handy: checking/comparing dom0/VM time before proceeding with any updates.
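As a sketch of what such a pre-update check could look like, the following hypothetical helper (the name and tolerance are mine, not from any existing Qubes script) compares two epoch-second readings, as could be obtained via `qvm-run --pass-io <vm> 'date --utc +%s'` and `date --utc +%s` in dom0:

```python
# Hypothetical pre-update clock check. Assumes two epoch-second readings,
# e.g. from `qvm-run --pass-io <vm> 'date --utc +%s'` (VM) and
# `date --utc +%s` (dom0).

def clock_skew_ok(vm_epoch: int, dom0_epoch: int, tolerance_s: int = 5) -> bool:
    """Return True if the VM clock is within tolerance_s of the dom0 clock."""
    return abs(vm_epoch - dom0_epoch) <= tolerance_s

# The whonix-ws-16 reading later in this thread was ~33 seconds behind dom0:
print(clock_skew_ok(1637422971, 1637423004))  # 33 s skew -> False
print(clock_skew_ok(1637423098, 1637423099))  # 1 s skew  -> True
```

A Q/A script could abort (or at least warn) before running updates when this check fails.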

Yet another cause could be Debian indeed setting a wrong valid-from field.

When this issue is happening, could you please check this link/file?
https://fasttrack.debian.net/debian/dists/bullseye-fasttrack/Release

At time of writing the interesting fields are:

Date: Sat, 20 Nov 2021 15:30:10 UTC
Valid-Until: Sat, 27 Nov 2021 15:30:10 UTC

VM:

date --utc

Sat 20 Nov 2021 04:00:03 PM UTC

dom0:

date --utc

Sat Nov 20 16:00:29 UTC 2021

(Few seconds delay due to thinking, typing.)
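For reference, the “not valid yet” condition boils down to the Release file’s Date field lying in the future relative to the local clock. A minimal illustration in Python (my own sketch, not APT’s actual code):

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def release_not_valid_yet(date_field: str, now: datetime) -> bool:
    """True if the Release file's Date is still in the future of `now`,
    i.e. the condition behind APT's "not valid yet" error."""
    return parsedate_to_datetime(date_field) > now

# A clock ~40 seconds behind the Release Date above triggers the error:
slow_clock = datetime(2021, 11, 20, 15, 29, 30, tzinfo=timezone.utc)
print(release_not_valid_yet("Sat, 20 Nov 2021 15:30:10 UTC", slow_clock))  # True
```

So even a sub-minute backwards skew is enough if the update happens to run right after the repository metadata was regenerated.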

The different time zones might be an issue. dom0 / Templates use the local (“normal”) timezone, but Qubes-Whonix templates are in UTC.

For Debian 11 / bullseye, a change was required in sdwdate (introduced since the first Debian 11 based Whonix version 16) to set, in Python:

os.environ["LC_TIME"] = "C"
os.environ["TZ"] = "UTC"
time.tzset()

This Python change may need to be made in Qubes source code as well. For example, the Python-based Qubes dom0 file /etc/qubes-rpc/qubes.GetDate might need it.

Furthermore, to improve robustness (and perhaps even fix this bug), any invocation of date in Qubes shell/bash scripts should use the --utc flag, i.e. date --utc.

Internally, Qubes should always handle time programmatically in UTC. The time shown to the user when manually running date or looking at the systray can remain in the user’s local timezone, no problem.
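To illustrate why the TZ=UTC environment fix above helps: once the process timezone is forced to UTC (Unix-only, via time.tzset), localtime and gmtime agree, so code that mixes the two can no longer disagree by the local UTC offset. A small sketch:

```python
import os
import time

# Force C locale and UTC timezone, mirroring the sdwdate change quoted above.
# time.tzset() is Unix-only.
os.environ["LC_TIME"] = "C"
os.environ["TZ"] = "UTC"
time.tzset()

EPOCH = 1637422210  # Sat, 20 Nov 2021 15:30:10 UTC (the Release "Date" above)

# With TZ=UTC, local time and UTC are identical, so mixing the two
# (a common source of timezone bugs) becomes harmless.
assert time.localtime(EPOCH)[:6] == time.gmtime(EPOCH)[:6]
print(time.strftime("%a, %d %b %Y %H:%M:%S", time.gmtime(EPOCH)))
```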

The Whonix template is ~half a minute in the past. If it’s close enough that 5 minutes makes a difference (date -s +5min is enough to fix it), then I can very well believe that 30s in the past may be problematic at times too. Something clearly is making Whonix’s clock be in the past, and I’d say we should avoid that regardless of whether it’s 30s or 5min.


added to wiki just now…

Qubes Time Synchronization Features:

Xen DomU’s get their initial time from Xen (Qubes dom0) at VM start.

  • qvm-sync-clock.service
  • qvm-sync-clock.timer

qvm-sync-clock gets time from Qubes ClockVM.

It is used because otherwise Xen DomU’s would start to clock drift more and more. See this answer by marmarek [archive].

qvm-sync-clock is unwanted in sys-whonix and anon-whonix because sdwdate runs there.

qvm-sync-clock is disabled in Qubes-Whonix ™ Templates until version Qubes-Whonix ™ 16.0.3.7. To be re-considered for later versions. Qubes-Whonix ™ get their time from dom0 at VM startup, which is then randomized using Boot Clock Randomization.

Future: qvm-sync-clock should be equally safe to run inside Qubes-Whonix ™ Templates, if passed through clock-random-manual-cli.


Now working on making timesync in Qubes-Whonix Template similar to Non-Qubes Templates.


Time to revise:

New plan:
As for Qubes-Whonix Templates, make it similar to /etc/qubes-rpc/qubes.SetDateTime.
