bug: restart of sdwdate in Whonix-Gateway causes 100% CPU use of Tor Browser in Whonix-Workstation

I suspect it is this:

  • Tor Browser is running on one machine and talking to Tor on another machine.
  • When the time is changed using date on the machine running Tor, Tor Browser starts to use 100% CPU.

Still not “really” a Whonix issue. Also not “really” an sdwdate issue. It still needs to be reproduced as a general bug, without mentioning Whonix / sdwdate, and then reported against the Tor Project Trac component Tor Browser.
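
A rough sketch of the generic reproduction (untested as written; addresses are placeholders, and pointing Tor Browser at a remote Tor via Tor Launcher’s environment variables is an assumption of mine):

# machine A: runs Tor, ports reachable from machine B, e.g. in torrc:
#   SocksPort 10.0.0.1:9050
#   ControlPort 10.0.0.1:9051
# machine B: launch Tor Browser against machine A instead of a bundled local Tor:
TOR_SKIP_LAUNCH=1 \
TOR_CONTROL_HOST=10.0.0.1 TOR_CONTROL_PORT=9051 \
TOR_SOCKS_HOST=10.0.0.1 TOR_SOCKS_PORT=9050 \
./tor-browser_en-US/Browser/start-tor-browser &
# machine A: jump the clock forward, then watch Tor Browser's CPU on machine B:
sudo date --set "@$(( $(date +%s) + 60 ))"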

Help welcome.

Originally reported here with lots of discussion:
High CPU load in whonix-ws-14 AppVM & DispVM - related to notifications? · Issue #4969 · QubesOS/qubes-issues · GitHub

Tor Browser under Whonix appears to consume high amounts of CPU after resuming from suspend. I’m on Qubes 4.0 with Whonix 15 and Tor Browser 9.5.1 from Tor-Browser-Downloader. This has been going on for about a year, since maybe TB 9.1 or something. It doesn’t happen every time; it’s much more likely when there are more than a couple of tabs open, or when the browser has been open for a long time. It happens often even with JS disabled. It doesn’t affect all instances at the same time; if I resume with two Tor Browsers open in two VMs, sometimes only one of them will have high CPU usage. Sometimes it is only about 50% (as if single threaded), but usually it is 100%; all VMs have 2 vCPUs assigned. Closing and reopening the browser brings it back to normal CPU usage, but the new identity button does not. Usually neither the browser nor other apps seem noticeably slow when this happens, unless I cap the CPU usage with Xen. It does make the fan run at full speed after a while though.

This does not happen with Firefox from the Fedora 30/31/32 templates, nor Firefox ESR from the Debian 10 template!

As I recall, it also did not seem to affect a vanilla Tor Browser bundle in a Debian 10 VM last time I tested it, although I haven’t tested this very much because it’s not a normal use-case for me.

In other words, it appears to be Whonix-specific, from what I can tell at this time.

If I attach with strace, this is what I get:
strace: Process 2058 attached
strace: [ Process PID=2058 runs in x32 mode. ]
strace: [ Process PID=2058 runs in 64 bit mode. ]
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=310089525}) = 0
write(22, "M", 1) = 1
futex(0x7b7821789ce8, FUTEX_WAKE_PRIVATE, 1) = 1
gettimeofday({tv_sec=1595780096, tv_usec=356508}, NULL) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=311125577}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=311168045}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=311196979}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=311236352}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=311263983}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=311305851}) = 0
recvmsg(4, {msg_namelen=0}, 0) = -1 EAGAIN (Resource temporarily unavailable)
poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN}, {fd=33, events=POLLIN}, {fd=58, events=POLLIN}, {fd=93, events=POLLIN}], 5, 0) = 1 ([{fd=33, revents=POLLIN}])
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=311561976}) = 0
read(33, "\372", 1) = 1
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=311637326}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=311667442}) = 0
recvmsg(4, {msg_namelen=0}, 0) = -1 EAGAIN (Resource temporarily unavailable)
poll([{fd=4, events=POLLIN}, {fd=5, events=POLLIN}, {fd=33, events=POLLIN}, {fd=58, events=POLLIN}, {fd=93, events=POLLIN}], 5, 0) = 0 (Timeout)
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=311782775}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=311813563}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=311838449}) = 0
gettimeofday({tv_sec=1595780096, tv_usec=357368}, NULL) = 0
ioctl(67, FIONREAD, [0]) = 0
ioctl(67, FIONREAD, [0]) = 0
recvfrom(67, “”, 0, 0, NULL, NULL) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=313367252}) = 0
write(22, "M", 1) = 1
futex(0x7b7821789ce8, FUTEX_WAKE_PRIVATE, 1) = 1
gettimeofday({tv_sec=1595780096, tv_usec=359566}, NULL) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=314408674}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=314687501}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=314899254}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=315282810}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=315847167}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=74149, tv_nsec=316099525}) = 0
And it just repeats, as fast as the terminal can print.
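
(To quantify that churn, something like strace’s summary mode works; the PID is the one from above. Run it for a few seconds, then interrupt:)

$ sudo strace -c -p 2058   # -c prints per-syscall counts and times on detach (Ctrl-C)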

This is the /proc/2058/fd directory:
0 -> /dev/null
1 -> /dev/null
2 -> /dev/null
3 -> anon_inode:inotify
4 -> socket:[22530]
5 -> anon_inode:[eventfd]
6 -> socket:[194424]
7 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/.parentlock
8 -> anon_inode:[eventpoll]
9 -> pipe:[22534]
10 -> pipe:[22534]
11 -> pipe:[22535]
12 -> pipe:[22535]
13 -> /home/user/.tb/tor-browser/Browser/omni.ja
14 -> /home/user/.tb/tor-browser/Browser/browser/omni.ja
15 -> pipe:[22541]
16 -> pipe:[22541]
17 -> pipe:[21811]
18 -> pipe:[21811]
19 -> socket:[22542]
20 -> /dev/urandom
21 -> pipe:[22544]
22 -> pipe:[22544]
23 -> anon_inode:[eventpoll]
24 -> pipe:[21812]
25 -> pipe:[21812]
26 -> anon_inode:[eventpoll]
27 -> pipe:[22545]
28 -> pipe:[22545]
29 -> anon_inode:[eventfd]
30 -> socket:[21827]
31 -> anon_inode:[eventfd]
32 -> /dev/shm/org.chromium.6SweMh (deleted)
33 -> pipe:[21830]
34 -> pipe:[21830]
35 -> /dev/shm/org.chromium.cmk7u8 (deleted)
36 -> /dev/shm/org.chromium.WdgMgF (deleted)
37 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/startupCache/startupCache.8.little
38 -> socket:[196242]
39 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/cookies.sqlite
40 -> /dev/shm/org.chromium.Slusad (deleted)
41 -> /dev/shm/org.chromium.SOT9qZ (deleted)
42 -> /dev/shm/org.chromium.WbMSHL (deleted)
43 -> /dev/shm/org.chromium.2BpDYx (deleted)
44 -> /dev/shm/org.chromium.ytFpfk (deleted)
45 -> /dev/shm/org.chromium.EbCdw6 (deleted)
46 -> /dev/shm/org.chromium.GpGaNS (deleted)
47 -> /dev/shm/org.chromium.CwDe4E (deleted)
48 -> /dev/shm/org.chromium.4v4klr (deleted)
49 -> /dev/shm/org.chromium.y69sCd (deleted)
50 -> /dev/shm/org.chromium.cCBCTZ (deleted)
51 -> /dev/shm/org.chromium.A5qPaM (deleted)
52 -> /dev/shm/org.chromium.0dI4ry (deleted)
53 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/places.sqlite
54 -> /dev/shm/org.chromium.mxFe3j (deleted)
55 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/favicons.sqlite
56 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/favicons.sqlite-wal
57 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/places.sqlite-wal
58 -> socket:[21867]
59 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/webappsstore.sqlite
60 -> socket:[196247]
61 -> /dev/shm/org.chromium.GEXWcH (deleted)
62 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/places.sqlite
63 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/places.sqlite-wal
64 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/favicons.sqlite
65 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/favicons.sqlite-wal
66 -> socket:[196249]
67 -> socket:[22275]
68 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/formhistory.sqlite
69 -> socket:[22240]
70 -> socket:[22650]
71 -> socket:[22651]
72 -> socket:[21991]
73 -> socket:[21993]
74 -> socket:[22654]
75 -> socket:[21995]
76 -> pipe:[21868]
77 -> socket:[22737]
78 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/content-prefs.sqlite
79 -> socket:[22739]
80 -> socket:[196251]
81 -> socket:[22741]
82 -> socket:[22749]
83 -> socket:[196056]
84 -> socket:[22747]
85 -> socket:[22748]
86 -> socket:[21987]
87 -> socket:[22751]
88 -> socket:[196661]
89 -> socket:[196998]
90 -> pipe:[22762]
91 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/webappsstore.sqlite-wal
92 -> pipe:[196058]
93 -> socket:[25379]
94 -> socket:[196662]
95 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/storage-sync.sqlite
96 -> socket:[196652]
97 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/extensions/{73a6fe31-595d-460b-a920-fcc0f8843232}.xpi
98 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/places.sqlite
99 -> socket:[196668]
100 -> socket:[22795]
101 -> socket:[196654]
102 -> socket:[196295]
103 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/places.sqlite
104 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/places.sqlite-wal
105 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/favicons.sqlite
106 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/favicons.sqlite-wal
107 -> socket:[201186]
108 -> socket:[196656]
109 -> socket:[23776]
110 -> socket:[196663]
111 -> socket:[196665]
112 -> socket:[196991]
113 -> socket:[196254]
114 -> socket:[197000]
115 -> pipe:[23237]
116 -> pipe:[196256]
117 -> socket:[197002]
118 -> socket:[197126]
119 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/favicons.sqlite
120 -> socket:[196278]
121 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/webappsstore.sqlite
122 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/webappsstore.sqlite-wal
123 -> socket:[197958]
124 -> socket:[196280]
125 -> socket:[197217]
126 -> socket:[196282]
127 -> socket:[23714]
128 -> socket:[196290]
129 -> socket:[197219]
130 -> pipe:[150985]
131 -> socket:[196288]
132 -> socket:[196289]
133 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/places.sqlite
134 -> socket:[196578]
135 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/places.sqlite-wal
136 -> socket:[196292]
137 -> socket:[197221]
138 -> pipe:[196580]
139 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/favicons.sqlite
140 -> socket:[197191]
141 -> socket:[151017]
142 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/favicons.sqlite-wal
143 -> socket:[197223]
144 -> socket:[197193]
145 -> /dev/shm/org.chromium.cmk7u8 (deleted)
146 -> socket:[194446]
147 -> socket:[197195]
148 -> /home/user/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default/cookies.sqlite-wal
149 -> socket:[197203]
151 -> socket:[197201]
152 -> socket:[197202]
153 -> pipe:[151356]
154 -> socket:[152581]
155 -> socket:[201181]
156 -> socket:[197205]
157 -> socket:[202288]
158 -> socket:[201188]
159 -> socket:[201175]
160 -> socket:[222114]
161 -> socket:[197211]
162 -> socket:[201190]
163 -> /home/user/.tb/tor-browser/Browser/.cache/event-sound-cache.tdb.b08dfa6083e7567a1921a715000001fb.x86_64-pc-linux-gnu
164 -> pipe:[201867]
165 -> socket:[202257]
166 -> socket:[212847]
167 -> socket:[202259]
169 -> pipe:[152692]
170 -> socket:[202261]
171 -> socket:[202269]
173 -> socket:[202267]
174 -> socket:[152811]
175 -> socket:[202268]
177 -> socket:[202271]

ss -x -p | grep 2058
u_str ESTAB 0 0 * 196254 * 196255 users:(("firefox.real",pid=2058,fd=113))
u_str ESTAB 0 0 * 196280 * 196281 users:(("firefox.real",pid=2058,fd=124))
u_str ESTAB 0 0 * 196056 * 196057 users:(("firefox.real",pid=2058,fd=83))
u_str ESTAB 0 0 * 196278 * 196279 users:(("firefox.real",pid=2058,fd=120))
u_str ESTAB 0 0 * 197191 * 197192 users:(("firefox.real",pid=2058,fd=140))
u_str ESTAB 0 0 * 196292 * 196293 users:(("firefox.real",pid=2058,fd=136))
u_seq ESTAB 0 0 * 151017 * 0 users:(("firefox.real",pid=2058,fd=141))
u_str ESTAB 0 0 * 197193 * 197194 users:(("firefox.real",pid=2058,fd=144))
u_seq ESTAB 0 0 * 152581 * 0 users:(("firefox.real",pid=2058,fd=154))
u_seq ESTAB 0 0 * 196289 * 196288 users:(("file:// Content",pid=2252,fd=24),("firefox.real",pid=2058,fd=132))
u_seq ESTAB 0 0 * 22650 * 22651 users:(("Web Content",pid=3079,fd=5),("Web Content",pid=2302,fd=5),("WebExtensions",pid=2277,fd=5),("file:// Content",pid=2252,fd=5),("Web Content",pid=2220,fd=5),("firefox.real",pid=2058,fd=70))
u_str ESTAB 0 0 * 194424 * 194425 users:(("firefox.real",pid=2058,fd=6))
u_str ESTAB 0 0 * 196656 * 196657 users:(("firefox.real",pid=2058,fd=108))
u_str ESTAB 0 0 * 22240 * 22239 users:(("firefox.real",pid=2058,fd=69))
u_str ESTAB 0 0 * 196282 * 196283 users:(("firefox.real",pid=2058,fd=126))
u_seq ESTAB 0 0 * 196288 * 196289 users:(("firefox.real",pid=2058,fd=131))
u_str ESTAB 0 0 * 196652 * 196653 users:(("firefox.real",pid=2058,fd=96))
u_str ESTAB 0 0 * 196295 * 196294 users:(("firefox.real",pid=2058,fd=102))
u_str ESTAB 0 0 * 25379 * 25380 users:(("firefox.real",pid=2058,fd=93))
u_str ESTAB 0 0 * 194446 * 194447 users:(("firefox.real",pid=2058,fd=146))
u_str ESTAB 0 0 * 196654 * 196655 users:(("firefox.real",pid=2058,fd=101))
u_str ESTAB 0 0 * 196578 * 196579 users:(("firefox.real",pid=2058,fd=134))
u_seq ESTAB 0 0 * 22795 * 0 users:(("firefox.real",pid=2058,fd=100))
u_str ESTAB 0 0 * 22275 * 0 users:(("firefox.real",pid=2058,fd=67))
u_str ESTAB 0 0 * 201175 * 201176 users:(("firefox.real",pid=2058,fd=159))
u_str ESTAB 0 0 * 196290 * 196291 users:(("firefox.real",pid=2058,fd=128))
u_str ESTAB 0 0 * 21867 * 22643 users:(("firefox.real",pid=2058,fd=58))
u_str ESTAB 0 0 * 196663 * 196664 users:(("firefox.real",pid=2058,fd=110))
u_seq ESTAB 0 0 * 197202 * 197201 users:(("Web Content",pid=2302,fd=22),("firefox.real",pid=2058,fd=152))
u_str ESTAB 0 0 * 21827 * 21828 users:(("firefox.real",pid=2058,fd=30))
u_str ESTAB 0 0 * 197195 * 197196 users:(("firefox.real",pid=2058,fd=147))
u_str ESTAB 0 0 * 196668 * 196667 users:(("firefox.real",pid=2058,fd=99))
u_str ESTAB 0 0 * 202288 * 202287 users:(("firefox.real",pid=2058,fd=157))
u_seq ESTAB 0 0 * 197201 * 197202 users:(("firefox.real",pid=2058,fd=151))
u_str ESTAB 0 0 * 202271 * 202272 users:(("firefox.real",pid=2058,fd=177))
u_seq ESTAB 0 0 * 152811 * 0 users:(("firefox.real",pid=2058,fd=174))
u_seq ESTAB 0 0 * 196662 * 196661 users:(("Web Content",pid=2220,fd=26),("firefox.real",pid=2058,fd=94))
u_seq ESTAB 0 0 * 23714 * 0 users:(("firefox.real",pid=2058,fd=127))
u_str ESTAB 0 0 * 233752 * 233753 users:(("firefox.real",pid=2058,fd=160))
u_seq ESTAB 0 0 * 196661 * 196662 users:(("firefox.real",pid=2058,fd=88))
u_str ESTAB 0 0 * 197203 * 197204 users:(("firefox.real",pid=2058,fd=149))
u_str ESTAB 0 0 * 196665 * 196666 users:(("firefox.real",pid=2058,fd=111))
u_str ESTAB 0 0 * 197205 * 197206 users:(("firefox.real",pid=2058,fd=156))
u_str ESTAB 0 0 * 197217 * 197216 users:(("firefox.real",pid=2058,fd=125))
u_str ESTAB 0 0 * 202257 * 202258 users:(("firefox.real",pid=2058,fd=165))
u_str ESTAB 0 0 * 22654 * 22655 users:(("firefox.real",pid=2058,fd=74))
u_str ESTAB 0 0 * 22749 * 22750 users:(("firefox.real",pid=2058,fd=82))
u_seq ESTAB 0 0 * 202267 * 202268 users:(("firefox.real",pid=2058,fd=173))
u_str ESTAB 0 0 * 22739 * 22740 users:(("firefox.real",pid=2058,fd=79))
u_str ESTAB 0 0 * 22737 * 22738 users:(("firefox.real",pid=2058,fd=77))
u_seq ESTAB 0 0 * 202268 * 202267 users:(("Web Content",pid=3079,fd=22),("firefox.real",pid=2058,fd=175))
u_seq ESTAB 0 0 * 22747 * 22748 users:(("firefox.real",pid=2058,fd=84))
u_str ESTAB 0 0 * 22741 * 22742 users:(("firefox.real",pid=2058,fd=81))
u_seq ESTAB 0 0 * 22748 * 22747 users:(("WebExtensions",pid=2277,fd=22),("firefox.real",pid=2058,fd=85))
u_str ESTAB 0 0 * 22751 * 22752 users:(("firefox.real",pid=2058,fd=87))
u_str ESTAB 0 0 * 202259 * 202260 users:(("firefox.real",pid=2058,fd=167))
u_str ESTAB 0 0 * 202261 * 202262 users:(("firefox.real",pid=2058,fd=170))
u_str ESTAB 0 0 * 202269 * 202270 users:(("firefox.real",pid=2058,fd=171))
u_seq ESTAB 0 0 * 22651 * 22650 users:(("firefox.real",pid=2058,fd=71))
u_str ESTAB 0 0 * 22530 * 22531 users:(("firefox.real",pid=2058,fd=4))

This is in the main process (firefox.real). Child processes (WebExtensions, Web Content, file:// Content, etc.) continue to have normal CPU usage from what I can tell.

It appears to be clock-related, but sdwdate shows the VM is synced, and I don’t see anything abnormal. Whonixcheck doesn’t show any warnings or errors.

I was wondering if anyone else has had this problem, or if there are any simple ways to debug this further? Any ideas are appreciated.

Edit: I forgot to mention, firefox’s internal task manager (about:performance) doesn’t show any high-impact processes, either.

Thanks for the info. Not sure how I missed that. I asked about it on qubes-users a long time ago but no one seemed to know what I was talking about.

I installed vanilla Tor Browser and sdwdate in a Debian 10 VM, and so far I am unable to reproduce it there. On the other hand, not surprisingly, I am able to reproduce it with a vanilla Tor Browser in a whonix-ws VM.

In the meantime, is it actually safe to disable sdwdate in sys-whonix as suggested here? High CPU load in whonix-ws-14 AppVM & DispVM - related to notifications? · Issue #4969 · QubesOS/qubes-issues · GitHub

More debugging (nothing conclusive – don’t get your hopes up):

I just tried stracing a freshly started Tor Browser, and the output looks largely the same, except that every so often it will poll with an indefinite timeout. Something about sdwdate causes firefox to always poll with a zero timeout (see strace in previous post). I.e., the process is not being woken up externally, it’s just never going to sleep.

On the other hand, looking at the individual threads (‘H’ in top), it almost seems like some kind of livelock between the main thread and socket thread:

28192 user      20   0 2637920 204384 117264 R  43.7  10.0   2:27.87`- firefox.real            
28198 user      20   0 2637920 204384 117264 S   0.0  10.0   0:00.12`- Gecko_IOThread      
28199 user      20   0 2637920 204384 117264 S   0.0  10.0   0:00.01`- JS Watchdog         
28200 user      20   0 2637920 204384 117264 S   0.0  10.0   0:00.16`- JS Helper           
28201 user      20   0 2637920 204384 117264 S   0.0  10.0   0:00.27`- JS Helper           
28202 user      20   0 2637920 204384 117264 S   0.0  10.0   0:00.01`- Link Monitor        
28203 user      20   0 2637920 204384 117264 R  25.0  10.0   1:27.33`- Socket Thread    

$ sudo strace -p 28203 # "Socket Thread"
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=532364377}) = 0
poll([{fd=20, events=POLLIN|POLLPRI}, {fd=44, events=POLLIN|POLLPRI}], 2, -1) = 1 ([{fd=44, revents=POLLIN|POLLHUP}])
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=532940175}) = 0
write(36, "\372", 1)                    = 1
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=533548644}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=533617441}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=533651895}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=533697640}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=533730541}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=533763793}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=533794069}) = 0
poll([{fd=20, events=POLLIN|POLLPRI}, {fd=44, events=POLLIN|POLLPRI}], 2, -1) = 2 ([{fd=20, revents=POLLIN}, {fd=44, revents=POLLIN|POLLHUP}])
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=533871943}) = 0
write(36, "\372", 1)                    = 1
read(20, "M", 2048)                     = 1
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=534128569}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=534153766}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=534180235}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=534204550}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=534229797}) = 0
poll([{fd=20, events=POLLIN|POLLPRI}, {fd=44, events=POLLIN|POLLPRI}], 2, -1) = 1 ([{fd=44, revents=POLLIN|POLLHUP}])
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=534299175}) = 0
write(36, "\372", 1)                    = 1
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=534469161}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=534496612}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=534519885}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=534544877}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=534568715}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=534594263}) = 0
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=534617456}) = 0
poll([{fd=20, events=POLLIN|POLLPRI}, {fd=44, events=POLLIN|POLLPRI}], 2, -1) = 2 ([{fd=20, revents=POLLIN}, {fd=44, revents=POLLIN|POLLHUP}])
clock_gettime(CLOCK_MONOTONIC, {tv_sec=140672, tv_nsec=534679220}) = 0
write(36, "\372", 1)                    = 1
read(20, "M", 2048)                     = 1

So the socket thread is polling with an indefinite timeout, but it’s being woken up constantly: every poll() returns immediately with POLLHUP on fd 44 (the deleted shared-memory segment below, whose other end has hung up), so the indefinite timeout never actually gets to block.

l-wx------ 1 user user 64 Jul 29 23:38 /proc/28303/fd/20 -> 'pipe:[395530]'
lr-x------ 1 user user 64 Jul 29 23:38 /proc/28303/fd/44 -> '/dev/shm/org.chromium.W5Sctg (deleted)'

Other tidbits:

user@host:~$ tor-browser_en-US/Browser/start-tor-browser --debug --verbose
Fontconfig warning: "/home/user/tor-browser_en-US/Browser/TorBrowser/Data/fontconfig/fonts.conf", line 85: unknown element "blank"
Fontconfig warning: "/home/user/tor-browser_en-US/Browser/TorBrowser/Data/fontconfig/fonts.conf", line 85: unknown element "blank"
Fontconfig warning: "/home/user/tor-browser_en-US/Browser/TorBrowser/Data/fontconfig/fonts.conf", line 85: unknown element "blank"
Fontconfig warning: "/home/user/tor-browser_en-US/Browser/TorBrowser/Data/fontconfig/fonts.conf", line 85: unknown element "blank"
[Parent 12501, Gecko_IOThread] WARNING: pipe error (47): Connection reset by peer: file /var/tmp/build/firefox-e79ce602ddff/ipc/chromium/src/chrome/common/ipc_channel_posix.cc, line 358
[Parent 12501, Gecko_IOThread] WARNING: pipe error (96): Connection reset by peer: file /var/tmp/build/firefox-e79ce602ddff/ipc/chromium/src/chrome/common/ipc_channel_posix.cc, line 358


user@host:~$ sudo ltrace -p $(pgrep firefox.real)
pthread_mutex_lock(0x7160b93559b0, 1, 0x7160b9585a70, 0x7160b938a3a0)                                    = 0
pthread_mutex_unlock(0x7160b93559b0, 0, 0x7160a1162000, 0x7160b93f3de0)                                  = 0
pthread_mutex_lock(0x7160b93559b0, 1, 0x7160b9585a70, 0x7160b938a3a0)                                    = 0
pthread_mutex_unlock(0x7160b93559b0, 0, 0x7160a1162000, 0x7160b93f3de0)                                  = 0
pthread_mutex_lock(0x7160b93559b0, 1, 0x7ffdefabe6f0, 0x7160b61c3f20)                                    = 0
pthread_mutex_unlock(0x7160b93559b0, 1, 110, 109)                                                        = 0
clock_gettime(1, 0x7ffdefabe6a0, 0x7160b93559b0, 0)                                                      = 0

Hopefully someone can make some sense of it, because I haven’t a clue. I guess the next step would be to attach to firefox with gdb, but I’m having a nervous breakdown just thinking about it.

For reproduction, Tor Browser needs to be connected to Tor running in another VM.

Seems you made great progress already. So far you’re the only chance of getting this fixed for real.

I’ve read over the qubes thread and sdwdate documentation several times now, and to be perfectly honest with you, I still don’t fully grasp the situation. The whole time synchronization scheme on Qubes-Whonix is utterly confusing to me.

It was, via the transproxy, with tor-launcher disabled. I verified with the onion circuits app.

I’m guessing by “one machine” you mean a whonix-ws-15 based AppVM such as anon-whonix, and by “another machine” you mean a whonix-gw-15 based AppVM such as sys-whonix?

As I understand, when sdwdate in sys-whonix is restarted, whether manually or by the suspend-post hook, it calls date to set the time in the sys-whonix VM. Is that correct? And you’re saying that the call to date, whether it’s called by sdwdate or anything else, is what causes Tor Browser in other VMs to act up?

What I don’t understand is: what does sdwdate in sys-whonix have to do with sdwdate in anon-whonix or any other VM? I was under the impression that sdwdate in each VM operates independently of any other VMs, except for the Tor SOCKSPort obviously. From what I can tell, sdwdate in workstation VMs continues to work even when sdwdate is not running in sys-whonix. Do the sdwdate processes interact across different VMs in some way that I don’t know about?

So then, why are Whonix VMs the only ones affected? If this is not related to the workstation’s sdwdate, then it should affect Tor Browsers equally in any VM of any template, no?

Here’s a summary of my findings so far:
Tor Browser (AnonDist) in whonix-ws-15 with sdwdate: bad
Tor Browser (AnonDist) in whonix-ws-15 without sdwdate: bad
Tor Browser in whonix-ws-15 with sdwdate: bad
Tor Browser in whonix-ws-15 without sdwdate: bad
Tor Browser in fedora-{30,31,32} without sdwdate: ok
Tor Browser in debian-10 without sdwdate: ok
Tor Browser in debian-10 with sdwdate: ok
Tor Browser with TOR_SKIP_LAUNCH=1 TOR_TRANSPROXY=1 in debian-10 with sdwdate: ok
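
(For the last case, “with TOR_SKIP_LAUNCH=1 TOR_TRANSPROXY=1” means launching roughly like this; the path is from my debian-10 test VM:)

TOR_SKIP_LAUNCH=1 TOR_TRANSPROXY=1 ~/tor-browser_en-US/Browser/start-tor-browser &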

When I ran sdwdate in debian-10 (which is not exactly trivial by the way), it appeared to sync fine and didn’t log any errors, but the VM doesn’t show up in the sys-whonix tray menu (sdwdate-gui-qubes). Based on your comment about date, it doesn’t sound like it would matter whether sdwdate-gui is registered with sdwdate-gui-qubes or not. However, that’s the only remaining difference between the debian-10 test case and whonix-ws-15 that I can think of at this time.

I just started reading the long thread about how sdwdate-gui-qubes came to be. I’ll follow up on that if I find out anything useful. In the meantime, feel free to shed some light on any of the above.

ak88 via Whonix Forum:

I’m guessing by “one machine” you mean a whonix-ws-15 based AppVM such as anon-whonix, and by “another machine” you mean a whonix-gw-15 based AppVM such as sys-whonix?

Yes.

I must use “one machine” and “another machine” because it has to be
reproduced with as few Whonix specifics as possible.

As I understand, when sdwdate in sys-whonix is restarted, whether manually or by the suspend-post hook, it calls date to set the time in the sys-whonix VM. Is that correct? And you’re saying that the call to date, whether it’s called by sdwdate or anything else, is what causes Tor Browser in other VMs to act up?

Yes.

I was under the impression that sdwdate in each VM operates independently of any other VMs, except for the Tor SOCKSPort obviously.

Correct.

From what I can tell, sdwdate in workstation VMs continues to work even when sdwdate is not running in sys-whonix.

Expected.

Do the sdwdate processes interact across different VMs in some way that I don’t know about?

No.

So then, why are Whonix VMs the only ones affected?

It goes back to the “one VM”, “other VM” thing.

What I don’t understand is: what does sdwdate in sys-whonix have to do
with sdwdate in anon-whonix or any other VM?

Because Tor is running on Whonix-Gateway (“other VM”). And Tor Browser
(running in “one VM”) is talking to Tor which is running in the “other
VM”. Tor, kernel, networking or anything changing time in the “other VM”
seems to confuse Tor Browser.

I managed to strip out sdwdate from reproduction already. sdwdate is
unrelated. Changing the time by using date on Whonix-Gateway is the
triggering condition.

Didn’t reproduce it in a “one VM”, “other VM” setup with split Tor
(similar to gateway / workstation) yet.

When I ran sdwdate in debian-10 (which is not exactly trivial by the way),

Indeed.

but the VM doesn’t show up in the sys-whonix tray menu (sdwdate-gui-qubes).

That’s undocumented. Non-obvious for sure. Requires Qubes dom0 qrexec
configuration.

Based on your comment about date, it doesn’t sound like it would matter whether sdwdate-gui is registered with sdwdate-gui-qubes or not.

No, anything sdwdate-gui related is completely unrelated.

Thanks, I have a much better understanding now.

Oh, I guess I could have skipped over all that sdwdate stuff then. Meh, I’ll call it a learning exercise. I thought it was still just a hunch; I didn’t realize you actually reproduced it yet. My mistake.

I see. So, in my testing so far, date was just being invoked by sdwdate instead of by the user. I’ll cut sdwdate out of the picture from now on though.

So at this point I guess we’re trying to figure out what’s causing the tight loop in Tor Browser, and why it only happens on Whonix and not Debian?

I’ve already exhausted all the high-level methods I can think of. One approach is straight-up debugging of course, which will probably involve rebuilding the whole Tor Browser with debugging symbols and what not. And I can only imagine what the Firefox codebase looks like down in the weeds. I’m not looking forward to touching that hot mess.

Do you think an alternative approach would be to incrementally turn Debian into Whonix by installing packages from the Whonix repo, and try to narrow down when and where the problem first appears?

ak88 via Whonix Forum:

Thanks, I have a much better understanding now.

Oh, I guess I could have skipped over all that sdwdate stuff then. Meh, I’ll call it a learning exercise. I thought it was still just a hunch; I didn’t realize you actually reproduced it yet. My mistake.

I might not have expressed that clearly.

I see. So, in my testing so far, date was just being invoked by sdwdate instead of by the user. I’ll cut sdwdate out of the picture from now on though.

So at this point I guess we’re trying to figure out what’s causing the tight loop in Tor Browser, and why it only happens on Whonix and not Debian?

I guess it would happen in Debian equally (one VM Tor Browser, another
VM running Tor).

I’ve already exhausted all the high-level methods I can think of. One approach is straight-up debugging of course, which will probably involve rebuilding the whole Tor Browser with debugging symbols and what not.

That would be great!

Do you think an alternative approach would be to incrementally turn Debian into Whonix by installing packages from the Whonix repo, and try to narrow down when and where the problem first appears?

That would be an option.

However, I guess there isn’t much Whonix-specific here.

If you could reproduce the “one VM Tor Browser, another VM running Tor” setup, that might already suffice to trigger this Tor Browser bug by running date in the other VM (the VM running Tor).

Well, I haven’t been able to reproduce it on Debian, even after verifying that it was in fact proxying through sys-whonix and not the local bundled Tor client. It was via the transparent proxy though. I guess it could be SOCKS-specific, so maybe I’ll try it again using the SOCKSPort.

It turns out the Tor Project actually distributes a binary version with debugging symbols enabled. Mad props to them; they’ve really thought of everything. I just extracted it in a Whonix DispVM minutes ago, so I’ll be trying that shortly.

https://dist.torproject.org/torbrowser/<version>/tor-browser-linux64-<version>-debug.tar.xz

Again, I haven’t been able to reproduce it anywhere but Whonix. Even when Debian and Whonix-WS are connected to the same gateway, I can trigger the Whonix one just about every time, but never the Debian one. (I still have to try with SOCKS, though.) It doesn’t necessarily mean it’s Whonix’s fault, but it does seem Whonix-specific to me so far.

Excellent problem-solving and debugging skills. :)

Okay, I definitely took the scenic route, but I think I can rule out Whonix Workstation at least. (Gateway is up for debate.)

I can reproduce this in a vanilla Tor Browser using the local bundled Tor, just by killing the Tor process. Even after restarting and reconnecting Tor, the browser uses 100% CPU. Apparently whenever the control socket is closed by the other end, Tor Browser / Tor Launcher doesn’t know how to handle it and starts eating CPU. It also happens even if you just kill systemd-socket-proxyd locally in anon-whonix. You can observe this using tor-prompt from the python-stem package:

.tb/tor-browser_en-US/Browser/start-tor-browser &
tor-prompt -i 10.152.152.10:9051 # or 127.0.0.1:9151
>>> help
510 Command filtered
# in sys-whonix: suspend/resume, restart sdwdate, kill Tor, or just kill the VM
>>> help
(y/n) control port has closed. Do you want to continue this interpreter?
# Tor Browser now eating CPU

When sdwdate restarts in sys-whonix, the control connection is broken, though I’m not sure whether it’s Tor, onion-grater, systemd-socket-proxyd, or something else in the chain that breaks it.

tor-prompt -i 10.152.152.10:9051
>>> help
510 Command filtered
# restart sdwdate via gui
>>> help
(y/n) control port has closed. Do you want to continue this interpreter?

I was able to reproduce it in Debian by changing about:config extensions.torlauncher.control_host=10.152.152.10 and extensions.torlauncher.control_port=9051. Tor Launcher doesn’t give any indication of whether it’s connected to the control port or not until you hit “new identity,” so at first I didn’t realize it wasn’t connected to the control port.
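
(Equivalently, the same prefs can be dropped into the profile’s user.js before starting the browser; a sketch, assuming the vanilla bundle’s default profile path:)

cat >> ~/tor-browser_en-US/Browser/TorBrowser/Data/Browser/profile.default/user.js <<'EOF'
user_pref("extensions.torlauncher.control_host", "10.152.152.10");
user_pref("extensions.torlauncher.control_port", 9051);
EOF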

So we definitely have an upstream bug with Tor Browser Bundle.

However, I still have not been able to trigger it by using date on sys-whonix. This part might be Whonix-specific, because when sdwdate restarts in sys-whonix it causes a bunch of other services to restart, including onion-grater, which breaks the control connection, which would cause the aforementioned behavior. That would also explain why it only happens when sdwdate is restarted, and not every time sdwdate does a background sync while running normally (assuming that is the case). But again, I haven’t been able to trigger it with date myself, so I don’t know. Do you have to change the time by a certain minimum amount to trigger it? What commands did you use?
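
(A way to see the exact moment the control connection drops during these tests; a sketch, run in anon-whonix, assuming the default setup where systemd-socket-proxyd forwards local 9151 to the gateway’s 9051:)

watch -n1 'ss -tnp | grep -E "9051|9151"'   # the ESTAB line vanishing marks the break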

sys-whonix log after restarting sdwdate:

Aug 04 20:58:45 host systemd[1]: Stopped Secure Distributed Web Date.
Aug 04 20:58:45 host audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=sdwdate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 04 20:58:45 host systemd[1]: Starting Secure Distributed Web Date...
Aug 04 20:58:46 host audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=sdwdate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 04 20:58:46 host systemd[1]: Started Secure Distributed Web Date.
Aug 04 20:58:46 host sdwdate[4141]: 2020-08-04 20:58:46 - sdwdate - INFO - sdwdate started. PID: 4141
Aug 04 20:58:46 host sdwdate[4141]: 2020-08-04 20:58:46 - sdwdate - INFO - create temp_dir: /tmp/tmp.bxXOtakBrq
Aug 04 20:58:46 host sdwdate[4141]: 2020-08-04 20:58:46 - sdwdate - INFO - Tor socks host: 127.0.0.1 Tor socks port: 9108
Aug 04 20:58:46 host sdwdate[4141]: 2020-08-04 20:58:46 - sdwdate - INFO - Running sdwdate main loop. iteration: 1 / 10000
Aug 04 20:58:46 host sdwdate[4141]: 2020-08-04 20:58:46 - sdwdate - INFO - Prerequisite check: The clock is sane.
Aug 04 20:58:46 host sdwdate[4141]: Within build timestamp Fri 11 Oct 2019 12:55:17 AM UTC and expiration timestamp Tue 17 May 2033 10:00:00 AM UTC.
Aug 04 20:58:46 host sdwdate[4141]: Clock within consensus parameters consensus/valid-after 2020-08-04 19:00:00 and consensus/valid-until 2020-08-04 22:00:00.
Aug 04 20:58:46 host sdwdate[4141]: 2020-08-04 20:58:46 - sdwdate - INFO - Prerequisite check: Tor fully bootstrapped.
Aug 04 20:58:46 host sdwdate[4141]: 2020-08-04 20:58:46 - sdwdate - INFO - Start fetching remote times.
Aug 04 20:58:46 host sdwdate[4141]: 2020-08-04 20:58:46 - sdwdate - INFO - Initial time fetching in progress...
Aug 04 20:58:46 host sdwdate[4141]: 2020-08-04 20:58:46 - sdwdate - INFO - Running sdwdate fetch loop. iteration: 1
Aug 04 20:58:46 host sdwdate[4141]: 2020-08-04 20:58:46 - sdwdate - INFO - Requested urls ['mprt35sjunnxfa76.onion', 'gnjtzu5c2lv4zasv.onion', '2h3xkc7wmxthijqb.onion']
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - Returned urls "['mprt35sjunnxfa76.onion', 'gnjtzu5c2lv4zasv.onion', '2h3xkc7wmxthijqb.onion']"
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - remote 0: mprt35sjunnxfa76.onion
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * comment: https://informant.taz.de        https://web.archive.org/web/20170329061908/https://informant.taz.de
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * remote_unixtime: 1596574777
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * consensus/valid-after: 2020-08-04 19:00:00
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * remote_time          : 2020-08-04 20:59:37
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * consensus/valid-until: 2020-08-04 22:00:00
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * time_diff: 30 second(s)
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * timesanitycheck: sane
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * time_consensus_sanity_check: sane
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * remote_status: True
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - remote 1: gnjtzu5c2lv4zasv.onion
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * comment: https://pgp.key-server.io        https://web.archive.org/web/20170421212020/https://pgp.key-server.io
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * remote_unixtime: 1596574775
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * consensus/valid-after: 2020-08-04 19:00:00
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * remote_time          : 2020-08-04 20:59:35
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * consensus/valid-until: 2020-08-04 22:00:00
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * time_diff: 28 second(s)
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * timesanitycheck: sane
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * time_consensus_sanity_check: sane
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * remote_status: True
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - remote 2: 2h3xkc7wmxthijqb.onion
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * comment: https://www.privacyfoundation.ch/de/kontakt.html        https://web.archive.org/web/20151210044252/http://www.privacyfoundation.ch/de/kontakt.html
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * remote_unixtime: 1596574792
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * consensus/valid-after: 2020-08-04 19:00:00
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * remote_time          : 2020-08-04 20:59:52
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * consensus/valid-until: 2020-08-04 22:00:00
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * time_diff: 45 second(s)
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * timesanitycheck: sane
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * time_consensus_sanity_check: sane
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - * remote_status: True
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - Pool 1: mprt35sjunnxfa76.onion, web unixtime: 1596574777, web time: Tue Aug 04 20:59:37 UTC 2020, diff: 30 seconds
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - Pool 2: gnjtzu5c2lv4zasv.onion, web unixtime: 1596574775, web time: Tue Aug 04 20:59:35 UTC 2020, diff: 28 seconds
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - Pool 3: 2h3xkc7wmxthijqb.onion, web unixtime: 1596574792, web time: Tue Aug 04 20:59:52 UTC 2020, diff: 45 seconds
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - End fetching remote times.
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - Pool differences, sorted: [28, 30, 45]
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - Median time difference: +30.000000000
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - randomize             : -0.129371830
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - New time difference   : +29.870628170
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - Old unixttime: 1596574747.067768335
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - New unixtime : 1596574776.938396454
Aug 04 20:59:07 host sdwdate[4141]: 2020-08-04 20:59:07 - sdwdate - INFO - Instantly setting the time by using command: /bin/date --set "@1596574776.938396454"
Aug 04 20:59:36 host sdwdate[4141]: 2020-08-04 20:59:36 - sdwdate - INFO - /bin/date output: Tue 04 Aug 2020 08:59:36 PM UTC
Aug 04 20:59:36 host sdwdate[4141]:  
Aug 04 20:59:36 host firewall-restarter[679]: + '[' /run/sdwdate/first_success = /run/sdwdate/first_success ']'
Aug 04 20:59:36 host firewall-restarter[679]: + systemctl --no-pager --no-block status whonix-firewall
Aug 04 20:59:36 host sdwdate[4141]: 2020-08-04 20:59:36 - sdwdate - INFO - Success. Sleeping for 105.16666666666667 minutes.
Aug 04 20:59:36 host sdwdate[4141]: 2020-08-04 20:59:36 - sdwdate - INFO - Running command: sleep 6310.216409500
Aug 04 20:59:36 host firewall-restarter[679]: ● whonix-firewall.service - Whonix firewall loader
Aug 04 20:59:36 host firewall-restarter[679]:    Loaded: loaded (/lib/systemd/system/whonix-firewall.service; enabled; vendor preset: enabled)
Aug 04 20:59:36 host firewall-restarter[679]:    Active: active (exited) since Tue 2020-08-04 20:49:01 UTC; 10min ago
Aug 04 20:59:36 host firewall-restarter[679]:      Docs: https://www.whonix.org/wiki/Whonix_Firewall
Aug 04 20:59:36 host firewall-restarter[679]:   Process: 3819 ExecStart=/usr/lib/whonix-firewall/enable-firewall (code=exited, status=0/SUCCESS)
Aug 04 20:59:36 host firewall-restarter[679]:  Main PID: 3819 (code=exited, status=0/SUCCESS)
Aug 04 20:59:36 host firewall-restarter[679]: Aug 04 20:49:00 host systemd[1]: Starting Whonix firewall loader...
Aug 04 20:59:36 host firewall-restarter[679]: Aug 04 20:49:00 host enable-firewall[3819]: 2020-08-04 20:49:00 - /usr/bin/whonix-gateway-firewall - OK: Loading Whonix firewall...
Aug 04 20:59:36 host firewall-restarter[679]: Aug 04 20:49:00 host enable-firewall[3819]: 2020-08-04 20:49:00 - /usr/bin/whonix-gateway-firewall - OK: Skipping firewall mode detection since already set to 'full'.
Aug 04 20:59:36 host firewall-restarter[679]: Aug 04 20:49:00 host enable-firewall[3819]: 2020-08-04 20:49:00 - /usr/bin/whonix-gateway-firewall - OK: (Full torified network access allowed.)
Aug 04 20:59:36 host firewall-restarter[679]: Aug 04 20:49:01 host enable-firewall[3819]: 2020-08-04 20:49:01 - /usr/bin/whonix-gateway-firewall - OK: Whonix firewall loaded.
Aug 04 20:59:36 host firewall-restarter[679]: Aug 04 20:49:01 host systemd[1]: Started Whonix firewall loader.
Aug 04 20:59:36 host firewall-restarter[679]: + systemctl --no-pager --no-block restart whonix-firewall
Aug 04 20:59:36 host systemd[1]: qubes-whonix-network.service: Succeeded.
Aug 04 20:59:36 host systemd[1]: Stopped sets up Qubes-Whonix-Gateway external and internal network interface.
Aug 04 20:59:36 host audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=qubes-whonix-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 04 20:59:36 host systemd[1]: Stopping sets up Qubes-Whonix-Gateway external and internal network interface...
Aug 04 20:59:36 host systemd[1]: Stopping Tor control port filter proxy...
Aug 04 20:59:36 host firewall-restarter[679]: + read file_name
Aug 04 20:59:36 host systemd[1]: onion-grater.service: Main process exited, code=killed, status=15/TERM
Aug 04 20:59:36 host systemd[1]: onion-grater.service: Succeeded.
Aug 04 20:59:36 host systemd[1]: Stopped Tor control port filter proxy.
Aug 04 20:59:36 host audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=onion-grater comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 04 20:59:36 host systemd[1]: whonix-firewall.service: Succeeded.
Aug 04 20:59:36 host audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=whonix-firewall comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 04 20:59:36 host systemd[1]: Stopped Whonix firewall loader.
Aug 04 20:59:36 host systemd[1]: Stopping Whonix firewall loader...
Aug 04 20:59:36 host systemd[1]: Starting Whonix firewall loader...
Aug 04 20:59:36 host enable-firewall[4206]: 2020-08-04 20:59:36 - /usr/bin/whonix-gateway-firewall - OK: Loading Whonix firewall...
Aug 04 20:59:36 host enable-firewall[4206]: 2020-08-04 20:59:36 - /usr/bin/whonix-gateway-firewall - OK: Skipping firewall mode detection since already set to 'full'.
Aug 04 20:59:37 host enable-firewall[4206]: 2020-08-04 20:59:37 - /usr/bin/whonix-gateway-firewall - OK: (Full torified network access allowed.)
Aug 04 20:59:37 host enable-firewall[4206]: 2020-08-04 20:59:37 - /usr/bin/whonix-gateway-firewall - OK: Whonix firewall loaded.
Aug 04 20:59:37 host systemd[1]: Started Whonix firewall loader.
Aug 04 20:59:37 host audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=whonix-firewall comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 04 20:59:37 host systemd[1]: Starting Tor control port filter proxy...
Aug 04 20:59:37 host systemd[1]: Starting sets up Qubes-Whonix-Gateway external and internal network interface...
Aug 04 20:59:37 host network-proxy-setup[4361]: + source /usr/lib/qubes-whonix/utility_functions.sh
Aug 04 20:59:37 host network-proxy-setup[4361]: ++ systemctl -q is-active qubes-updates-proxy-forwarder.socket
Aug 04 20:59:37 host network-proxy-setup[4361]: ++ PROXY_SERVER=http://10.137.255.254:8082/
Aug 04 20:59:37 host network-proxy-setup[4361]: ++ PROXY_META='<meta name="application-name" content="tor proxy"/>'
Aug 04 20:59:37 host network-proxy-setup[4361]: ++ qubesdb-read /qubes-vm-type
Aug 04 20:59:37 host network-proxy-setup[4361]: + qubes_vm_type=ProxyVM
Aug 04 20:59:37 host network-proxy-setup[4361]: + '[' ProxyVM = TemplateVM ']'
Aug 04 20:59:37 host network-proxy-setup[4361]: + '[' '!' -e /usr/share/anon-gw-base-files/gateway ']'
Aug 04 20:59:37 host network-proxy-setup[4361]: + /sbin/ethtool -K eth0 sg off
Aug 04 20:59:37 host network-proxy-setup[4361]: + INTERFACE=eth1
Aug 04 20:59:37 host network-proxy-setup[4361]: + ip link show eth1
Aug 04 20:59:37 host network-proxy-setup[4361]: 3: eth1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
Aug 04 20:59:37 host network-proxy-setup[4361]: + exit 0
Aug 04 20:59:37 host systemd[1]: Started sets up Qubes-Whonix-Gateway external and internal network interface.
Aug 04 20:59:37 host audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=qubes-whonix-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 04 20:59:37 host onion-grater[4366]: Tor control port filter started, listening on 10.137.0.69:9051
Aug 04 20:59:37 host onion-grater[4366]: Serving Thread started
Aug 04 20:59:38 host systemd[1]: Started Tor control port filter proxy.
Aug 04 20:59:38 host audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=onion-grater comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 04 21:00:48 host qubes.StartApp+qubes-run-terminal-dom0[4085]: 
Aug 04 21:00:48 host qubes.StartApp+qubes-run-terminal-dom0[4085]: (xfce4-terminal:4095): Gdk-CRITICAL **: 21:00:48.868: gdk_window_get_window_type: assertion 'GDK_IS_WINDOW (window)' failed
Aug 04 21:01:09 host sudo[4127]: pam_unix(sudo:session): session closed for user root
Aug 04 21:01:09 host audit[4127]: USER_END pid=4127 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_close grantors=pam_permit,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Aug 04 21:01:09 host audit[4127]: CRED_DISP pid=4127 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_tally2,pam_wheel,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Aug 04 21:01:19 host audit[4378]: USER_ACCT pid=4378 uid=1000 auid=4294967295 ses=4294967295 msg='op=PAM:accounting grantors=pam_tally2,pam_permit acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Aug 04 21:01:19 host sudo[4378]:     user : TTY=pts/0 ; PWD=/home/user ; USER=root ; COMMAND=/usr/bin/journalctl -e
Aug 04 21:01:19 host sudo[4378]: pam_unix(sudo:session): session opened for user root by (uid=0)
Aug 04 21:01:19 host audit[4378]: USER_CMD pid=4378 uid=1000 auid=4294967295 ses=4294967295 msg='cwd="/home/user" cmd=6A6F75726E616C63746C202D65 terminal=pts/0 res=success'
Aug 04 21:01:19 host audit[4378]: CRED_REFR pid=4378 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_tally2,pam_wheel,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
Aug 04 21:01:19 host audit[4378]: USER_START pid=4378 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_open grantors=pam_permit,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'

Great work,

Would one of the testing scenarios mentioned with sdwdate affect, or be partially required for, remote desktop access from another VM?

thanks,

Perhaps it can be triggered by restarting onion-grater only?

Can’t reproduce this.

If I sudo systemctl restart sdwdate in sys-whonix I don’t see onion-grater restarting. It shouldn’t. I wouldn’t know why.

Good thinking. I just tried it and yes, it does.
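
(That is, with a control connection open from the workstation:)

# in sys-whonix
sudo systemctl restart onion-grater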

Hmm, you’re right, it doesn’t seem to happen with systemctl restart sdwdate:

Aug 05 23:12:06 host sudo[19359]:     user : TTY=pts/1 ; PWD=/home/user ; USER=root ; COMMAND=/usr/bin/systemctl restart sdwdate
Aug 05 23:12:06 host audit[19359]: USER_CMD pid=19359 uid=1000 auid=4294967295 ses=4294967295 msg='cwd="/home/user" cmd=73797374656D63746C20726573746172742073647764617465 terminal=pts/1 res=success'
Aug 05 23:12:06 host sudo[19359]: pam_unix(sudo:session): session opened for user root by (uid=0)
Aug 05 23:12:06 host audit[19359]: CRED_REFR pid=19359 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_tally2,pam_wheel,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/1 res=success'
Aug 05 23:12:06 host audit[19359]: USER_START pid=19359 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_open grantors=pam_permit,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/1 res=success'
Aug 05 23:12:06 host systemd[1]: Starting Secure Distributed Web Date...
Aug 05 23:12:07 host systemd[1]: Started Secure Distributed Web Date.
Aug 05 23:12:07 host sdwdate[19362]: 2020-08-05 23:12:07 - sdwdate - INFO - sdwdate started. PID: 19362
Aug 05 23:12:07 host audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=sdwdate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 05 23:12:07 host sdwdate[19362]: 2020-08-05 23:12:07 - sdwdate - INFO - create temp_dir: /tmp/tmp.0lO2YpDfPN
Aug 05 23:12:07 host audit[19359]: USER_END pid=19359 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_close grantors=pam_permit,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/1 res=success'
Aug 05 23:12:07 host audit[19359]: CRED_DISP pid=19359 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_tally2,pam_wheel,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/1 res=success'
Aug 05 23:12:07 host sudo[19359]: pam_unix(sudo:session): session closed for user root
Aug 05 23:12:07 host sdwdate[19362]: 2020-08-05 23:12:07 - sdwdate - INFO - Tor socks host: 127.0.0.1 Tor socks port: 9108
Aug 05 23:12:07 host sdwdate[19362]: 2020-08-05 23:12:07 - sdwdate - INFO - Running sdwdate main loop. iteration: 1 / 10000
Aug 05 23:12:07 host sdwdate[19362]: 2020-08-05 23:12:07 - sdwdate - INFO - Prerequisite check: The clock is sane.
Aug 05 23:12:07 host sdwdate[19362]: Within build timestamp Fri 11 Oct 2019 12:55:17 AM UTC and expiration timestamp Tue 17 May 2033 10:00:00 AM UTC.
Aug 05 23:12:07 host sdwdate[19362]: Clock within consensus parameters consensus/valid-after 2020-08-05 21:00:00 and consensus/valid-until 2020-08-06 00:00:00.
Aug 05 23:12:07 host sdwdate[19362]: 2020-08-05 23:12:07 - sdwdate - INFO - Prerequisite check: Tor fully bootstrapped.
Aug 05 23:12:07 host sdwdate[19362]: 2020-08-05 23:12:07 - sdwdate - INFO - Start fetching remote times.
Aug 05 23:12:07 host sdwdate[19362]: 2020-08-05 23:12:07 - sdwdate - INFO - Time fetching in progress...
Aug 05 23:12:07 host sdwdate[19362]: 2020-08-05 23:12:07 - sdwdate - INFO - Running sdwdate fetch loop. iteration: 1
Aug 05 23:12:07 host sdwdate[19362]: 2020-08-05 23:12:07 - sdwdate - INFO - Requested urls ['gmg7jl25ony5g7ws.onion', 'ak2uqfavwgmjrvtu.onion', 'tinhat233xymse34.onion']
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - Returned urls "['gmg7jl25ony5g7ws.onion', 'ak2uqfavwgmjrvtu.onion', 'tinhat233xymse34.onion']"
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - remote 0: gmg7jl25ony5g7ws.onion
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * comment: Gizmodo Media Group        https://specialprojectsdesk.com/securedrop/        https://web.archive.org/web/20170215221547/https://specialprojectsdesk.com/securedrop/
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * remote_unixtime: 1596669146
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * consensus/valid-after: 2020-08-05 21:00:00
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * remote_time          : 2020-08-05 23:12:26
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * consensus/valid-until: 2020-08-06 00:00:00
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * time_diff: 10 second(s)
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * timesanitycheck: sane
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * time_consensus_sanity_check: sane
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * remote_status: True
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - remote 1: ak2uqfavwgmjrvtu.onion
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * comment: MagyarLeaks        2013        Investigative Journalism        https://atlatszo.hu/magyarleaks/        Hungary        https://web.archive.org/web/20170128142348/https://atlatszo.hu/magyarleaks/
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * remote_unixtime: 1596669145
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * consensus/valid-after: 2020-08-05 21:00:00
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * remote_time          : 2020-08-05 23:12:25
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * consensus/valid-until: 2020-08-06 00:00:00
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * time_diff: 9 second(s)
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * timesanitycheck: sane
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * time_consensus_sanity_check: sane
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * remote_status: True
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - remote 2: tinhat233xymse34.onion
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * comment: https://thetinhat.com        https://web.archive.org/web/20170421233308/https://thetinhat.com
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * remote_unixtime: 1596669150
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * consensus/valid-after: 2020-08-05 21:00:00
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * remote_time          : 2020-08-05 23:12:30
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * consensus/valid-until: 2020-08-06 00:00:00
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * time_diff: 14 second(s)
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * timesanitycheck: sane
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * time_consensus_sanity_check: sane
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - * remote_status: True
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - Pool 1: gmg7jl25ony5g7ws.onion, web unixtime: 1596669146, web time: Wed Aug 05 23:12:26 UTC 2020, diff: 10 seconds
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - Pool 2: ak2uqfavwgmjrvtu.onion, web unixtime: 1596669145, web time: Wed Aug 05 23:12:25 UTC 2020, diff: 9 seconds
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - Pool 3: tinhat233xymse34.onion, web unixtime: 1596669150, web time: Wed Aug 05 23:12:30 UTC 2020, diff: 14 seconds
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - End fetching remote times.
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - Pool differences, sorted: [9, 10, 14]
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - Median time difference: +10.000000000
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - randomize             : -0.722118940
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - New time difference   : +9.277881060
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - Old unixttime: 1596669136.589626789
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - New unixtime : 1596669145.867507935
Aug 05 23:12:16 host sdwdate[19362]: 2020-08-05 23:12:16 - sdwdate - INFO - Instantly setting the time by using command: /bin/date --set "@1596669145.867507935"
Aug 05 23:12:25 host sdwdate[19362]: 2020-08-05 23:12:25 - sdwdate - INFO - /bin/date output: Wed 05 Aug 2020 11:12:25 PM UTC
Aug 05 23:12:25 host sdwdate[19362]:  
Aug 05 23:12:25 host sdwdate[19362]: 2020-08-05 23:12:25 - sdwdate - INFO - Success. Sleeping for 139.73333333333332 minutes.
Aug 05 23:12:25 host sdwdate[19362]: 2020-08-05 23:12:25 - sdwdate - INFO - Running command: sleep 8384.255885362

But I just tried it again with the GUI applet by choosing “sys-whonix → Restart sdwdate”, and it happens like before. Here’s the log (shown with journalctl -x for readability):

Aug 05 22:34:28 host systemd[1]: sdwdate.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- The unit sdwdate.service has successfully entered the 'dead' state.
Aug 05 22:34:28 host systemd[1]: Stopped Secure Distributed Web Date.
-- Subject: A stop job for unit sdwdate.service has finished
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A stop job for unit sdwdate.service has finished.
-- 
-- The job identifier is 2309 and the job result is done.
Aug 05 22:34:28 host audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=sdwdate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:28 host systemd[1]: Starting Secure Distributed Web Date...
-- Subject: A start job for unit sdwdate.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A start job for unit sdwdate.service has begun execution.
-- 
-- The job identifier is 2309.
Aug 05 22:34:28 host systemd[1]: Started Secure Distributed Web Date.
-- Subject: A start job for unit sdwdate.service has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A start job for unit sdwdate.service has finished successfully.
-- 
-- The job identifier is 2309.
Aug 05 22:34:28 host sdwdate[18010]: 2020-08-05 22:34:28 - sdwdate - INFO - sdwdate started. PID: 18010
Aug 05 22:34:28 host audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=sdwdate comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:28 host sdwdate[18010]: 2020-08-05 22:34:28 - sdwdate - INFO - create temp_dir: /tmp/tmp.5jSUnsnxFj
Aug 05 22:34:28 host sdwdate[18010]: 2020-08-05 22:34:28 - sdwdate - INFO - Tor socks host: 127.0.0.1 Tor socks port: 9108
Aug 05 22:34:28 host sdwdate[18010]: 2020-08-05 22:34:28 - sdwdate - INFO - Running sdwdate main loop. iteration: 1 / 10000
Aug 05 22:34:29 host sdwdate[18010]: 2020-08-05 22:34:29 - sdwdate - INFO - Prerequisite check: The clock is sane.
Aug 05 22:34:29 host sdwdate[18010]: Within build timestamp Fri 11 Oct 2019 12:55:17 AM UTC and expiration timestamp Tue 17 May 2033 10:00:00 AM UTC.
Aug 05 22:34:29 host sdwdate[18010]: Clock within consensus parameters consensus/valid-after 2020-08-05 21:00:00 and consensus/valid-until 2020-08-06 00:00:00.
Aug 05 22:34:29 host sdwdate[18010]: 2020-08-05 22:34:29 - sdwdate - INFO - Prerequisite check: Tor fully bootstrapped.
Aug 05 22:34:29 host sdwdate[18010]: 2020-08-05 22:34:29 - sdwdate - INFO - Start fetching remote times.
Aug 05 22:34:29 host sdwdate[18010]: 2020-08-05 22:34:29 - sdwdate - INFO - Initial time fetching in progress...
Aug 05 22:34:29 host sdwdate[18010]: 2020-08-05 22:34:29 - sdwdate - INFO - Running sdwdate fetch loop. iteration: 1
Aug 05 22:34:29 host sdwdate[18010]: 2020-08-05 22:34:29 - sdwdate - INFO - Requested urls ['33y6fjyhs3phzfjj.onion', 'bvhfrslkkdbncdhv.onion', 'yuxv6qujajqvmypv.onion']
Aug 05 22:34:49 host audit[18063]: USER_ACCT pid=18063 uid=1000 auid=4294967295 ses=4294967295 msg='op=PAM:accounting grantors=pam_tally2,pam_permit acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:49 host sudo[18063]:     user : TTY=unknown ; PWD=/ ; USER=whonixcheck ; COMMAND=/usr/lib/whonixcheck/whonixcheck --daemon --gui --cli
Aug 05 22:34:49 host sudo[18063]: pam_unix(sudo:session): session opened for user whonixcheck by (uid=0)
Aug 05 22:34:49 host audit[18063]: USER_CMD pid=18063 uid=1000 auid=4294967295 ses=4294967295 msg='cwd="/" cmd=2F7573722F6C69622F77686F6E6978636865636B2F77686F6E6978636865636B202D2D6461656D6F6E202D2D677569202D2D636C69 terminal=? res=success'
Aug 05 22:34:49 host audit[18063]: CRED_REFR pid=18063 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_tally2,pam_wheel,pam_permit acct="whonixcheck" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:49 host audit[18063]: USER_START pid=18063 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_open grantors=pam_permit,pam_unix acct="whonixcheck" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:49 host audit[18116]: USER_ACCT pid=18116 uid=110 auid=4294967295 ses=4294967295 msg='op=PAM:accounting grantors=pam_tally2,pam_permit acct="whonixcheck" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:49 host sudo[18116]: whonixcheck : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/usr/bin/systemd-detect-virt
Aug 05 22:34:49 host sudo[18116]: pam_unix(sudo:session): session opened for user root by (uid=0)
Aug 05 22:34:49 host audit[18116]: USER_CMD pid=18116 uid=110 auid=4294967295 ses=4294967295 msg='cwd="/" cmd="systemd-detect-virt" terminal=? res=success'
Aug 05 22:34:49 host audit[18116]: CRED_REFR pid=18116 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_tally2,pam_wheel,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:49 host audit[18116]: USER_START pid=18116 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_open grantors=pam_permit,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:49 host sudo[18116]: pam_unix(sudo:session): session closed for user root
Aug 05 22:34:49 host audit[18116]: USER_END pid=18116 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_close grantors=pam_permit,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:49 host audit[18116]: CRED_DISP pid=18116 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_tally2,pam_wheel,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:54 host sudo[18769]:     user : TTY=unknown ; PWD=/home/user ; USER=root ; ENV=msgdisptacher_username=whonixcheck msgdispatcher_identifier=whonixcheck msgdispatcher_appendix=messagex_done ; COMMAND=/usr/lib/msgcollector/msgdispatcher_delete_wrapper
Aug 05 22:34:54 host audit[18769]: USER_ACCT pid=18769 uid=1000 auid=4294967295 ses=4294967295 msg='op=PAM:accounting grantors=pam_tally2,pam_permit acct="user" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:54 host audit[18769]: USER_CMD pid=18769 uid=1000 auid=4294967295 ses=4294967295 msg='cwd="/home/user" cmd="/usr/lib/msgcollector/msgdispatcher_delete_wrapper" terminal=? res=success'
Aug 05 22:34:54 host audit[18769]: CRED_REFR pid=18769 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_tally2,pam_wheel,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:54 host sudo[18769]: pam_unix(sudo:session): session opened for user root by (uid=0)
Aug 05 22:34:54 host audit[18769]: USER_START pid=18769 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_open grantors=pam_permit,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:54 host sudo[18769]: pam_unix(sudo:session): session closed for user root
Aug 05 22:34:54 host audit[18769]: USER_END pid=18769 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_close grantors=pam_permit,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:54 host audit[18769]: CRED_DISP pid=18769 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_tally2,pam_wheel,pam_permit acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:54 host sudo[18063]: pam_unix(sudo:session): session closed for user whonixcheck
Aug 05 22:34:54 host audit[18063]: USER_END pid=18063 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:session_close grantors=pam_permit,pam_unix acct="whonixcheck" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:34:54 host audit[18063]: CRED_DISP pid=18063 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_tally2,pam_wheel,pam_permit acct="whonixcheck" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - Returned urls "['33y6fjyhs3phzfjj.onion', 'bvhfrslkkdbncdhv.onion', 'yuxv6qujajqvmypv.onion']"
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - remote 0: 33y6fjyhs3phzfjj.onion
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * comment: The Guardian        https://securedrop.theguardian.com        https://web.archive.org/web/20170408213324/https://securedrop.theguardian.com
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * remote_unixtime: 1596666884
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * consensus/valid-after: 2020-08-05 21:00:00
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * remote_time          : 2020-08-05 22:34:44
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * consensus/valid-until: 2020-08-06 00:00:00
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * time_diff: -35 second(s)
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * timesanitycheck: sane
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * time_consensus_sanity_check: sane
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * remote_status: True
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - remote 1: bvhfrslkkdbncdhv.onion
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * comment: Canal Etico                http://denuncias.bufetecarrasco.net
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * status: False
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * value: Timeout
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - remote 2: yuxv6qujajqvmypv.onion
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * comment: https://anonguide.cyberguerrilla.org        https://web.archive.org/web/20170421233304/https://anonguide.cyberguerrilla.org
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * status: False
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - * value: Timeout
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - Pool 1: 33y6fjyhs3phzfjj.onion, web unixtime: 1596666884, web time: Wed Aug 05 22:34:44 UTC 2020, diff: -35 seconds
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - Running sdwdate fetch loop. iteration: 2
Aug 05 22:35:19 host sdwdate[18010]: 2020-08-05 22:35:19 - sdwdate - INFO - Requested urls ['ltcpool5brio2gaj.onion', 'h2qkxasmmqdmyiov.onion']
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - Returned urls "['ltcpool5brio2gaj.onion', 'h2qkxasmmqdmyiov.onion']"
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - remote 0: ltcpool5brio2gaj.onion
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * comment: https://www.litecoinpool.org/help        https://web.archive.org/web/20161114095946/https://www.litecoinpool.org/help
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * remote_unixtime: 1596666950
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * consensus/valid-after: 2020-08-05 21:00:00
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * remote_time          : 2020-08-05 22:35:50
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * consensus/valid-until: 2020-08-06 00:00:00
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * time_diff: 11 second(s)
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * timesanitycheck: sane
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * time_consensus_sanity_check: sane
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * remote_status: True
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - remote 1: h2qkxasmmqdmyiov.onion
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * comment: mail.systemli.org        https://www.systemli.org/en/service/mail.html        https://web.archive.org/web/20161105061658/https://www.systemli.org/en/service/mail.html
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * remote_unixtime: 1596666937
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * consensus/valid-after: 2020-08-05 21:00:00
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * remote_time          : 2020-08-05 22:35:37
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * consensus/valid-until: 2020-08-06 00:00:00
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * time_diff: -2 second(s)
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * timesanitycheck: sane
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * time_consensus_sanity_check: sane
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - * remote_status: True
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - Pool 2: ltcpool5brio2gaj.onion, web unixtime: 1596666950, web time: Wed Aug 05 22:35:50 UTC 2020, diff: 11 seconds
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - Pool 3: h2qkxasmmqdmyiov.onion, web unixtime: 1596666937, web time: Wed Aug 05 22:35:37 UTC 2020, diff: -2 seconds
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - End fetching remote times.
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - Pool differences, sorted: [-35, -2, 11]
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - Median time difference: -2.000000000
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - randomize             : -0.983622351
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - New time difference   : -2.983622351
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - Old unixttime: 1596666939.935923100
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - New unixtime : 1596666936.952300787
Aug 05 22:35:39 host sdwdate[18010]: 2020-08-05 22:35:39 - sdwdate - INFO - Instantly setting the time by using command: /bin/date --set "@1596666936.952300787"
Aug 05 22:35:36 host sdwdate[18010]: 2020-08-05 22:35:36 - sdwdate - INFO - /bin/date output: Wed 05 Aug 2020 10:35:36 PM UTC
Aug 05 22:35:36 host sdwdate[18010]:  
Aug 05 22:35:36 host firewall-restarter[721]: + '[' /run/sdwdate/first_success = /run/sdwdate/first_success ']'
Aug 05 22:35:36 host firewall-restarter[721]: + systemctl --no-pager --no-block status whonix-firewall
Aug 05 22:35:36 host sdwdate[18010]: 2020-08-05 22:35:36 - sdwdate - INFO - Success. Sleeping for 157.9 minutes.
Aug 05 22:35:36 host sdwdate[18010]: 2020-08-05 22:35:36 - sdwdate - INFO - Running command: sleep 9474.600957450
Aug 05 22:35:36 host firewall-restarter[721]: ● whonix-firewall.service - Whonix firewall loader
Aug 05 22:35:36 host firewall-restarter[721]:    Loaded: loaded (/lib/systemd/system/whonix-firewall.service; enabled; vendor preset: enabled)
Aug 05 22:35:36 host firewall-restarter[721]:    Active: active (exited) since Wed 2020-08-05 22:29:06 UTC; 6min ago
Aug 05 22:35:36 host firewall-restarter[721]:      Docs: https://www.whonix.org/wiki/Whonix_Firewall
Aug 05 22:35:36 host firewall-restarter[721]:   Process: 17727 ExecStart=/usr/lib/whonix-firewall/enable-firewall (code=exited, status=0/SUCCESS)
Aug 05 22:35:36 host firewall-restarter[721]:  Main PID: 17727 (code=exited, status=0/SUCCESS)
Aug 05 22:35:36 host firewall-restarter[721]: Aug 05 22:29:06 host systemd[1]: Starting Whonix firewall loader...
Aug 05 22:35:36 host firewall-restarter[721]: Aug 05 22:29:06 host enable-firewall[17727]: 2020-08-05 22:29:06 - /usr/bin/whonix-gateway-firewall - OK: Loading Whonix firewall...
Aug 05 22:35:36 host firewall-restarter[721]: Aug 05 22:29:06 host enable-firewall[17727]: 2020-08-05 22:29:06 - /usr/bin/whonix-gateway-firewall - OK: Skipping firewall mode detection since already set to 'full'.
Aug 05 22:35:36 host firewall-restarter[721]: Aug 05 22:29:06 host enable-firewall[17727]: 2020-08-05 22:29:06 - /usr/bin/whonix-gateway-firewall - OK: (Full torified network access allowed.)
Aug 05 22:35:36 host firewall-restarter[721]: Aug 05 22:29:06 host enable-firewall[17727]: 2020-08-05 22:29:06 - /usr/bin/whonix-gateway-firewall - OK: Whonix firewall loaded.
Aug 05 22:35:36 host firewall-restarter[721]: Aug 05 22:29:06 host systemd[1]: Started Whonix firewall loader.
Aug 05 22:35:36 host firewall-restarter[721]: + systemctl --no-pager --no-block restart whonix-firewall
Aug 05 22:35:36 host audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=qubes-whonix-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 05 22:35:36 host systemd[1]: qubes-whonix-network.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- The unit qubes-whonix-network.service has successfully entered the 'dead' state.
Aug 05 22:35:36 host systemd[1]: Stopped sets up Qubes-Whonix-Gateway external and internal network interface.
-- Subject: A stop job for unit qubes-whonix-network.service has finished
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A stop job for unit qubes-whonix-network.service has finished.
-- 
-- The job identifier is 2437 and the job result is done.
Aug 05 22:35:36 host systemd[1]: Stopping sets up Qubes-Whonix-Gateway external and internal network interface...
-- Subject: A stop job for unit qubes-whonix-network.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A stop job for unit qubes-whonix-network.service has begun execution.
-- 
-- The job identifier is 2437.
Aug 05 22:35:36 host systemd[1]: Stopping Tor control port filter proxy...
-- Subject: A stop job for unit onion-grater.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A stop job for unit onion-grater.service has begun execution.
-- 
-- The job identifier is 2378.
Aug 05 22:35:36 host firewall-restarter[721]: + read file_name
Aug 05 22:35:36 host systemd[1]: onion-grater.service: Main process exited, code=killed, status=15/TERM
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- An ExecStart= process belonging to unit onion-grater.service has exited.
-- 
-- The process' exit code is 'killed' and its exit status is 15.
Aug 05 22:35:36 host systemd[1]: onion-grater.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- The unit onion-grater.service has successfully entered the 'dead' state.
Aug 05 22:35:36 host audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=onion-grater comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 05 22:35:36 host audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=whonix-firewall comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 05 22:35:36 host systemd[1]: Stopped Tor control port filter proxy.
-- Subject: A stop job for unit onion-grater.service has finished
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A stop job for unit onion-grater.service has finished.
-- 
-- The job identifier is 2378 and the job result is done.
Aug 05 22:35:36 host systemd[1]: whonix-firewall.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- The unit whonix-firewall.service has successfully entered the 'dead' state.
Aug 05 22:35:36 host systemd[1]: Stopped Whonix firewall loader.
-- Subject: A stop job for unit whonix-firewall.service has finished
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A stop job for unit whonix-firewall.service has finished.
-- 
-- The job identifier is 2373 and the job result is done.
Aug 05 22:35:36 host systemd[1]: Stopping Whonix firewall loader...
-- Subject: A stop job for unit whonix-firewall.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A stop job for unit whonix-firewall.service has begun execution.
-- 
-- The job identifier is 2373.
Aug 05 22:35:36 host systemd[1]: Starting Whonix firewall loader...
-- Subject: A start job for unit whonix-firewall.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A start job for unit whonix-firewall.service has begun execution.
-- 
-- The job identifier is 2373.
Aug 05 22:35:37 host enable-firewall[18854]: 2020-08-05 22:35:37 - /usr/bin/whonix-gateway-firewall - OK: Loading Whonix firewall...
Aug 05 22:35:37 host enable-firewall[18854]: 2020-08-05 22:35:37 - /usr/bin/whonix-gateway-firewall - OK: Skipping firewall mode detection since already set to 'full'.
Aug 05 22:35:37 host enable-firewall[18854]: 2020-08-05 22:35:37 - /usr/bin/whonix-gateway-firewall - OK: (Full torified network access allowed.)
Aug 05 22:35:37 host enable-firewall[18854]: 2020-08-05 22:35:37 - /usr/bin/whonix-gateway-firewall - OK: Whonix firewall loaded.
Aug 05 22:35:37 host audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=whonix-firewall comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 05 22:35:37 host systemd[1]: Started Whonix firewall loader.
-- Subject: A start job for unit whonix-firewall.service has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A start job for unit whonix-firewall.service has finished successfully.
-- 
-- The job identifier is 2373.
Aug 05 22:35:37 host systemd[1]: Starting sets up Qubes-Whonix-Gateway external and internal network interface...
-- Subject: A start job for unit qubes-whonix-network.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A start job for unit qubes-whonix-network.service has begun execution.
-- 
-- The job identifier is 2437.
Aug 05 22:35:37 host network-proxy-setup[19008]: + source /usr/lib/qubes-whonix/utility_functions.sh
Aug 05 22:35:37 host network-proxy-setup[19008]: ++ systemctl -q is-active qubes-updates-proxy-forwarder.socket
Aug 05 22:35:37 host systemd[1]: Starting Tor control port filter proxy...
-- Subject: A start job for unit onion-grater.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A start job for unit onion-grater.service has begun execution.
-- 
-- The job identifier is 2378.
Aug 05 22:35:37 host network-proxy-setup[19008]: ++ PROXY_SERVER=http://10.137.255.254:8082/
Aug 05 22:35:37 host network-proxy-setup[19008]: ++ PROXY_META='<meta name="application-name" content="tor proxy"/>'
Aug 05 22:35:37 host network-proxy-setup[19008]: ++ qubesdb-read /qubes-vm-type
Aug 05 22:35:37 host network-proxy-setup[19008]: + qubes_vm_type=ProxyVM
Aug 05 22:35:37 host network-proxy-setup[19008]: + '[' ProxyVM = TemplateVM ']'
Aug 05 22:35:37 host network-proxy-setup[19008]: + '[' '!' -e /usr/share/anon-gw-base-files/gateway ']'
Aug 05 22:35:37 host network-proxy-setup[19008]: + /sbin/ethtool -K eth0 sg off
Aug 05 22:35:37 host network-proxy-setup[19008]: + INTERFACE=eth1
Aug 05 22:35:37 host network-proxy-setup[19008]: + ip link show eth1
Aug 05 22:35:37 host network-proxy-setup[19008]: 5: eth1: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
Aug 05 22:35:37 host network-proxy-setup[19008]:     link/ether 7a:6d:47:53:f9:2c brd ff:ff:ff:ff:ff:ff
Aug 05 22:35:37 host network-proxy-setup[19008]: + exit 0
Aug 05 22:35:37 host systemd[1]: Started sets up Qubes-Whonix-Gateway external and internal network interface.
-- Subject: A start job for unit qubes-whonix-network.service has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A start job for unit qubes-whonix-network.service has finished successfully.
-- 
-- The job identifier is 2437.
Aug 05 22:35:37 host audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=qubes-whonix-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Aug 05 22:35:37 host onion-grater[19014]: Tor control port filter started, listening on 10.137.0.69:9051
Aug 05 22:35:37 host onion-grater[19014]: Serving Thread started
Aug 05 22:35:38 host systemd[1]: Started Tor control port filter proxy.
-- Subject: A start job for unit onion-grater.service has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
-- 
-- A start job for unit onion-grater.service has finished successfully.
-- 
-- The job identifier is 2378.

So that’s another branch to contend with.

Interestingly, the systemctl restart sdwdate method does not appear to trigger the bug in Tor Browser!

Note: It appears sclockadj is started when the gateway boots. If sclockadj is already running when you systemctl restart sdwdate, sdwdate will invoke sclockadj instead of date:

sdwdate - INFO - Gradually adjusting the time by running sclockadj using command: /usr/lib/sdwdate/sclockadj "41800313641"

But, when sdwdate is restarted via the GUI, it seems that sclockadj is killed at some point, and therefore it’s not running when sdwdate starts back up:

sdwdate - INFO - sclockadj not running, ok.

… which causes sdwdate to use date instead of sclockadj:

sdwdate - INFO - Instantly setting the time by using command: /bin/date --set "@1596671422.487272263"

(In other words: any time you restart sdwdate via the GUI, from then on it will use date instead of sclockadj.)
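In shell terms, the observed branching is roughly the following (a minimal sketch: the real check lives inside sdwdate’s own code, pgrep merely stands in for however it detects a running sclockadj, and the two literal values are copied from the log excerpts in this thread):

# example values; sdwdate computes these from the fetched remote times
new_unixtime="1596669145.867507935"
adjustment_nanoseconds="41800313641"

if pgrep -x sclockadj >/dev/null; then
    # sclockadj still running (e.g. started at gateway boot):
    # hand it the correction, so the clock is adjusted gradually
    /usr/lib/sdwdate/sclockadj "$adjustment_nanoseconds"
else
    # the "sclockadj not running, ok." case, e.g. after a GUI restart:
    # set the clock instantly
    /bin/date --set "@$new_unixtime"
fi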

So that’s yet another branch to contend with.

Originally I thought that might be important; however, from the couple of tests I did, it does not seem to matter whether sdwdate invokes date or sclockadj. In either case, systemctl restart sdwdate does not seem to trigger the bug for me the way it does from the GUI.

This would suggest it’s the restart of onion-grater that’s triggering the bug. However, that contradicts your results, where simply running date in sys-whonix was enough to trigger the bug; that shouldn’t have anything to do with onion-grater or sdwdate, right? I’m confused now.

So to recap:
What does trigger the bug:

  • Killing or restarting the Tor process (even when Tor is on the same machine as the browser, as in Tor Browser Bundle).
  • Killing or restarting systemd-socket-proxyd in anon-whonix
  • Killing or restarting onion-grater in sys-whonix
  • Restarting sdwdate via the GUI in sys-whonix
  • Killing the sys-whonix VM
  • Or, basically, anything that breaks the connection chain from the browser to the control port.

What does not trigger the bug:

  • systemctl restart sdwdate in sys-whonix, and sdwdate invokes sclockadj
  • systemctl restart sdwdate in sys-whonix, and sdwdate invokes date

@Patrick: Could you try this?

  1. In sys-whonix, or anon-whonix for that matter, use tor-prompt -i 10.137.x.x:9051 or similar to connect to onion-grater (see the sketch after this list). Leave it open.
  2. Trigger the bug in Tor Browser using date in another terminal.
  3. Switch back to the tor-prompt window and type a command to see if the connection is still alive.
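
A minimal sketch of steps 1 and 3 (the 10.137.0.69 address is just the one onion-grater reported in the log above; substitute whatever internal IP your sys-whonix uses, and GETINFO version is merely one harmless control command among many):

# step 1: connect to onion-grater’s filtered control port; leave it open
tor-prompt -i 10.137.0.69:9051

# step 3: at the interactive prompt, issue e.g.
#   GETINFO version
# If the connection survived, Tor’s version is printed; if it died
# during step 2, the prompt should error out or hang.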

I still can’t trigger the bug by just invoking date in sys-whonix, for example using date --set @$(( $(date +%s) + 30 )). Can you clarify how you did so in more detail?
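
Spelled out, the clock-jump test looks like this (a sketch; the 30-second offset is arbitrary, and the jump back at the end is only cleanup on my side, not something sdwdate does):

# run as root in sys-whonix
before="$(date +%s)"
date --set "@$(( before + 30 ))"        # instant forward jump, like sdwdate’s date branch
sleep 10                                # watch Tor Browser’s CPU usage meanwhile
date --set "@$(( $(date +%s) - 30 ))"   # crude jump back to roughly real time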


date vs onion-grater: date was used to reproduce this on Non-Qubes-Whonix. There might be two different bugs. Maybe it “fixed itself” in the meantime. I cannot reproduce this any longer. Let’s concentrate on the onion-grater-related issue then.


To also cut sdwdate-gui out of the issue, here is most likely what’s happening: sdwdate-gui is actually calling this:

/usr/lib/sdwdate/restart_fresh

sudo bash -x /usr/lib/sdwdate/restart_fresh
+ set -e
+ sudo --non-interactive rm --force /run/sdwdate/first_success
+ sudo --non-interactive -u sdwdate touch /run/sdwdate/clock_jump_do_once
+ sudo --non-interactive systemctl --no-pager --no-block restart sdwdate

If you grep the Whonix source code for first_success you’ll see /usr/lib/whonix-firewall/firewall-restarter. It restarts the whonix-firewall systemd service.

Maybe deleting /run/sdwdate/first_success is unnecessary and wrong. It seems imperfect anyhow.

sdwdate: The idea is that when you restart it from sdwdate-gui, it should instantly fix the clock. Otherwise, if it is restarted by the systemd unit, that might just be during an upgrade, and then clock jumps would be undesirable; hence, gradually adjust the clock using sclockadj. And after boot: set the clock using date (to account for possibly required bigger corrections). Using date / instant time fix: when necessary. Using sclockadj: whenever sensible.

One of these might be causing onion-grater to restart when the whonix-firewall systemd service restarts:

  • /lib/systemd/system/qubes-whonix-network.service
  • /lib/systemd/system/onion-grater.service.d/30_whonix_cpfpy.conf

Since my sys-whonix is based on the whonix-gw-15 TemplateVM, it’s easy to experiment: files deleted from the root file system will be back after a reboot. After doing this…

sudo rm /lib/systemd/system/onion-grater.service.d/30_whonix_cpfpy.conf
sudo systemctl daemon-reload 
sudo systemctl restart whonix-firewall

…onion-grater no longer restarts when restarting whonix-firewall.

Can we implement the functionality of /lib/systemd/system/onion-grater.service.d/30_whonix_cpfpy.conf any other way?

https://github.com/Whonix/whonix-firewall/blob/master/lib/systemd/system/onion-grater.service.d/30_whonix_cpfpy.conf

I guess I’ll just delete that file. It’s nice to have but not too important after all. Perhaps replace it with onion-grater retrying restarts indefinitely, every minute or so (not faster, so as not to spam the systemd log too much; even every minute is awful systemd log spam).

Great analysis! Probably correct.

Could you report this upstream at The Tor Project, please? I recommend not mentioning sdwdate / onion-grater since that would add confusion. The bug in Tor Browser here is spinning up to 100% CPU once Tor or the control port connection dies.

That explanation makes sense, but I’m not sure it exactly matches what I observed. Specifically, I did observe it invoking date after a systemctl restart, under certain conditions. However, as long as we’re in agreement that sclockadj vs. date is irrelevant, I’m not going to rack my brain over it.

  • Maybe make onion-grater exclusively socket-activated? That way it will never be requested to activate if the interface doesn’t exist.
  • Change to Requires=network.target After=network.target? Usually a firewall does not add or delete any interfaces, so I don’t think it makes sense for onion-grater to depend on it in this case.
  • And/or: add Restart=on-failure RestartSec=10s, or something like that? (Sketched below.)
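
A minimal sketch of that last option as a drop-in (the file name 40_restart_on_failure.conf is hypothetical, placed next to the existing 30_whonix_cpfpy.conf; the values are the ones floated above, not tested defaults):

# run in the TemplateVM to persist, or in sys-whonix for a throwaway test
sudo tee /lib/systemd/system/onion-grater.service.d/40_restart_on_failure.conf >/dev/null <<'EOF'
[Service]
# retry if onion-grater dies, e.g. because the internal interface
# was not up yet when it started
Restart=on-failure
RestartSec=10s
EOF
sudo systemctl daemon-reload
sudo systemctl restart onion-grater

Bumping RestartSec to 60s would match the “restart every 1 minute” idea from earlier, at the cost of slower recovery in exchange for less log spam.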

I intend to, probably today. I was just waiting until I collected as much information as possible.

As long as we can conceivably come up with a workaround, I won’t mention Whonix or sdwdate. On the other hand, if we’re relying on upstream for a fix, I think it’s in our best interest to demonstrate a real-world, recurring problem case. Otherwise they’ll likely say “well, normally the control connection should never die while the browser is still connected, so this is a low-priority bug.”


Good options to explore.

The Linux OOM killer kills processes more or less at random. If Linux is trying to rescue the system by killing the Tor process, it’s awful if the system crashes anyhow because Firefox then spins up to 100% CPU due to this bug.

Still waiting for my gitlab.torproject.org account to be approved. (The new walled-garden approach is doing them a big favor, isn’t it?) However, when I was poking around, believe it or not, I found this was already reported to the Tor Project over a year ago. Man, things are much easier to find when you’re not looking for them.

It’s at a much higher level than I thought. They found out it’s in JS code related to the circuit display feature in Tor Launcher. I overlooked that, because about:performance kept saying everything was fine when it wasn’t, but Tor Launcher apparently doesn’t show up there even when it’s running. According to the thread, disabling the circuit display feature in about:config gets rid of the bug. I gave it a try and it seems to work like a charm. It should be fine to disable that, since the circuit display isn’t supposed to work on Whonix anyway. Maybe we could distribute a fix the same way the security level confirmation dialog box works? Just brainstorming at the moment.

Set extensions.torbutton.display_circuit to false.
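
For users applying that workaround manually and wanting it to persist across browser restarts, a user.js line is one option (a sketch; the profile path assumes the default tb-updater install under ~/.tb and may differ on your system):

# append the pref to the Tor Browser profile’s user.js
profile=~/.tb/tor-browser/Browser/TorBrowser/Data/Browser/profile.default
echo 'user_pref("extensions.torbutton.display_circuit", false);' >> "$profile/user.js"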


That doesn’t exist anymore.
add Tor Browser first startup popup to ask whether security slider should be set to safest - #23 by Patrick

Two disadvantages of modifications inside the browser data folder by Whonix:

  • they break easily
  • most Tor Browser issues get blamed on them regardless

Therefore not sustainable.

Great find. It’s probably great if users manually apply that workaround, but I cannot make it the default, due to the above issue, unfortunately.

Btw, we actually want to make the circuit display work, but with most information redacted, for purposes of not confusing Tor Launcher / Tor Browser.

This might be breaking Tor onion v3 authentication from inside Tor Browser. Reference:
https://phabricator.whonix.org/T996

I understand completely.

It could also introduce fingerprintability. I doubt it would in this instance, but generally.

For AppVMs it’s easy and effective. It should be possible in DispVM templates too, though it would require extracting, and possibly running, Tor Browser in the DispVM template.

The bigger issue is discoverability. I added it to the wiki, but the majority of affected users may not know what to look for or where.

Fortunately, this workaround won’t even be necessary in the future if we can sync the clock without breaking the control connection, as discussed earlier. It would also be unnecessary if and when the upstream bug is fixed.

@Patrick BTW, thanks for your support. I probably wouldn’t have pursued the issue any further if this thread had gone unanswered.
