Tor connection dies randomly and needs manual restart

The Tor connection in Whonix-Gateway dies randomly and does not recover on its own. /var/run/tor/log says:

[NOTICE] Tried for 100 seconds to get a connection to [scrubbed]:[xxx]. Giving up.

Probably all of the selected guard nodes have become unreachable and Tor is not picking new ones. The only solution I have found is to restart Tor manually:

sudo systemctl restart tor@default

After restarting Tor, the connection is up again.

Is it safe to create a cron job that checks Tor connectivity and restarts the Tor daemon when it fails? Does Whonix have any built-in tool for this?
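I don't know of a built-in watchdog for this. A minimal sketch of what such a cron job could look like (everything here is an assumption, not Whonix functionality: the check URL, the default SocksPort 9050, and the presence of curl; whether automatic restarts have anonymity implications is exactly the open question):

```shell
#!/bin/sh
# Hypothetical Tor watchdog sketch -- run from root's crontab every few minutes.
# Assumes curl is installed and Tor's SocksPort is at 127.0.0.1:9050.

check_tor() {
    # Ask the Tor Project's check service through Tor's SOCKS port;
    # succeed only if the reply confirms we exited via Tor.
    curl --silent --max-time 30 --socks5-hostname 127.0.0.1:9050 \
        https://check.torproject.org/api/ip 2>/dev/null |
        grep -q '"IsTor":true'
}

if check_tor; then
    status=ok
else
    status=down
    systemctl restart tor@default || echo "restart failed" >&2
fi
echo "tor status: $status"
```

A crontab entry such as `*/5 * * * * /usr/local/bin/tor-watchdog.sh` would run it every five minutes.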

Alternatively, can Tor be configured to pick new guard nodes when the current ones have been unreachable for a long time? What are the security implications of that?
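For reference, Tor's torrc does expose guard behavior through options such as GuardLifetime and NumEntryGuards (see the tor(1) manual page). The fragment below is illustration only; rotating guards faster than the default exposes you to more potential guards over time and is generally discouraged:

```
# torrc fragment -- illustration only, NOT a recommendation.
# Shorter guard rotation weakens the guard design.
GuardLifetime 30 days
NumEntryGuards 1
```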

The second issue: /var/run/tor/log keeps growing, and when the tmpfs reaches 100%, the Tor connection breaks. The only fix is to restart Whonix-Gateway or (if still possible) stop Tor, delete the file, and start Tor again:

sudo systemctl stop tor@default
sudo rm /var/run/tor/log
sudo systemctl start tor@default

Most of the logged information is useless. Where can I change the logging level?
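For reference, the log destination and level come from Tor's configuration. The line below is standard torrc syntax; where exactly Whonix sets it (e.g. a drop-in under /usr/local/etc/torrc.d/) may vary by version, so check your gateway. Raising the level from notice to warn, or logging to syslog instead of the tmpfs file, would slow the growth:

```
# torrc fragment -- log only warnings and errors, and send them to syslog
# instead of the file on tmpfs (adjust to wherever your torrc lives)
Log warn syslog
```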

This could be a general Tor network issue. An active attack on specific Tor users is also conceivable, as part of a search for a specific user: if traffic keeps getting interrupted on one side, it is also interrupted at the destination. Correlate.


I recommend:

One issue = one forum thread, please.
Please create a separate forum thread for the second (log) issue.

This issue may occur when the host or guest system runs out of RAM and needs to swap. Whether this can actually cause the error "Tried for X seconds to get a connection to [scrubbed]" needs further examination. It looks as if the entry node is failing circuits, but in my experiment with multiple Whonix-Workstation machines running (some software needs a lot of RAM, so I increased the workstation's RAM to 1.5 GB in the VM settings, while Whonix-Gateway still had only 384 MB), the Tor connection was almost dead due to failing circuits.
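To check whether the gateway is actually memory-starved before blaming Tor, a quick look at /proc/meminfo (standard Linux, nothing Whonix-specific) shows available RAM and swap usage:

```shell
#!/bin/sh
# Print available RAM and used swap, both read from /proc/meminfo.
mem_available_mb=$(awk '/^MemAvailable:/ {print int($2/1024)}' /proc/meminfo)
swap_used_kb=$(awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t-f}' /proc/meminfo)
echo "available RAM: ${mem_available_mb} MB, swap in use: ${swap_used_kb} kB"
```

If available RAM is near zero and swap usage keeps climbing while circuits fail, the RAM theory becomes more plausible.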

If this is true, the solution is to add more RAM to the physical machine.


Problem still exists. Example errors:

[warn] Tried for 120 seconds to get a connection to [scrubbed]:0. Giving up. (waiting for circuit)
[warn] Tried for 120 seconds to get a connection to [scrubbed]:0. Giving up. (waiting for circuit)
[warn] Tried for 120 seconds to get a connection to [scrubbed]:0. Giving up. (waiting for circuit)

[warn] Guard [guard name] (guard ID) is failing an extremely large amount of circuits. This could indicate a route manipulation attack, extreme network overload, or a bug. Success counts are 71/243. Use counts are 56/56. 70 circuits completed, 0 were unusable, 0 collapsed, and 10 timed out. For reference, your timeout cutoff is 60 seconds.

[notice] Closed 1 streams for service [scrubbed].onion for reason resolve failed. Fetch status: No more HSDir available to query.

[notice] No circuits are opened. Relaxed timeout for circuit 1945 (a General-purpose client 3-hop circuit state waiting to see how other guards perform with channel state open) to 85410ms. However, it appears the circuit has timed out anyway.

[warn] Giving up on launching a rendezvous circuit to [scrubbed] for hidden service [scrubbed]

This usually happens when the host system is under heavier load than when idle. Has anyone else experienced this issue?

Stopping Tor and deleting its state file helps (another guard is picked), but this is a very bad solution for security reasons.
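For completeness, the sequence in question (the path is Tor's default state file location; as said, forcing a new guard this way is bad for anonymity and should not be done routinely):

```
sudo systemctl stop tor@default
sudo rm /var/lib/tor/state
sudo systemctl start tor@default
```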

Probably vanguards is closing circuits. Changing the vanguards configuration in /etc/tor/anon-vanguards.conf might help, but which of its configuration flags can be changed safely without lowering anonymity?

Also, which config options should be changed to examine this problem more closely? To solve this issue, more information is needed about what exactly is going on on the system.

Generic Bug Reproduction is probably required, specifically Tor Generic Bug Reproduction.

The same would probably happen if Tor were installed on the host operating system. Could you please test if that is the case?

If so, that would be unspecific to Whonix.
