Ah, that’s a pity that user-defined rules must be used.
I thought this would be a simple (?) fix server-side. It goes contrary to user expectations, i.e. most would think: “I’m connecting to the Whonix v2 or v3 onion, so why would it load resources from the clearnet address?”
Fair enough, and I appreciate the HTTPS rules trick. I have since used the same method on Protonmail, Bitmessage.ch, RiseUp, and a dozen other sites that have hidden service mirrors. So thanks for the tip.
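For anyone else following along, the trick being discussed can be expressed as an HTTPS Everywhere user ruleset roughly like the sketch below. The ruleset name and the clearnet hostname are illustrative assumptions; the onion address is the v3 forum address from this thread:

```xml
<!-- Hypothetical HTTPS Everywhere user ruleset: when browsing over Tor,
     rewrite requests for the clearnet forum to its v3 onion mirror. -->
<ruleset name="Whonix forum to onion">
  <target host="forums.whonix.org" />
  <rule from="^https?://forums\.whonix\.org/"
        to="http://forums.dds6qkxpwdeubwucdiaord2xgbbeyds25rbsgr73tbfpqpt4a6vjwsyd.onion/" />
</ruleset>
```

Added as a user rule in HTTPS Everywhere, this redirects every clearnet request to the onion, which sidesteps the "webapp embeds its primary domain" problem on the client side.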
Good to hear.
Patrick I am certain you’ll find a more elegant solution soon enough.
Is there any possibility of Namecoin playing a role in this issue? Or is it another issue entirely? Or a non-issue due to some shortcoming?
I doubt that. It’s not a DNS issue. The problem is that the webapps we are using, such as MediaWiki, WordPress, and Discourse, are configured with our primary https domain. These webapps don’t support multiple domains for the same website. We don’t have the resources to provide patches to these projects to add this feature.
Give relays some defenses against the recent network overload. We start with three defenses (default parameters in parentheses).
First: if a single client address makes too many connections (>100), hang up on further connections.
Second: if a single client address makes circuits too quickly (more than 3 per second, with an allowed burst of 90) while also having too many connections open (3), refuse new create cells for the next while (1-2 hours).
Third: if a client asks to establish a rendezvous point to you directly, ignore the request. These defenses can be manually controlled by new torrc options, but relays will also take guidance from consensus parameters, so there’s no need to configure anything manually. Implements ticket 24902.
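The three defenses quoted above map onto torrc options roughly as follows. This is a sketch only, with the defaults quoted in the changelog filled in; as the changelog notes, consensus parameters normally govern these values, so nothing needs to be set by hand:

```
# Sketch of the DoS-mitigation torrc options from ticket 24902.
# Values mirror the changelog defaults; consensus parameters override them.

# First defense: hang up if one client address opens too many connections.
DoSConnectionEnabled 1
DoSConnectionMaxConcurrentCount 100

# Second defense: refuse create cells from addresses that build circuits
# too fast (rate 3/sec, burst 90) while holding too many connections (3).
DoSCircuitCreationEnabled 1
DoSCircuitCreationMinConnections 3
DoSCircuitCreationRate 3
DoSCircuitCreationBurst 90
DoSCircuitCreationDefenseTimePeriod 3600

# Third defense: ignore direct rendezvous-establishment requests.
DoSRefuseSingleHopClientRendezvous 1
```

The exact defense duration is randomized around the configured period (hence "1-2 hours" in the changelog), so the 3600-second figure above is only the base value.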
Actually it has been merged into 0.2.9.x and onwards, see:
Why would the server successfully allow connections to, e.g., forums.kkkkkkkkkk63ava6.onion, but fail with forums.dds6qkxpwdeubwucdiaord2xgbbeyds25rbsgr73tbfpqpt4a6vjwsyd.onion at the same time (despite multiple attempts with new circuits)?
Firefox can’t establish a connection to the server at forums.dds6qkxpwdeubwucdiaord2xgbbeyds25rbsgr73tbfpqpt4a6vjwsyd.onion.
Like just now.
Misconfiguration? Server overload? Not running the latest Tor stable, and thus subject to an attack of some sort?
Interested users would like a stable answer from the server side, since it seems illogical that the v2 onion would work while the v3 would not at the same time, with the latest Tor client software running (Tor Browser 7.5.2, Tor 3.2.10). Particularly since it worked just yesterday…
We’re on latest stable. I looked at the logs, and according to this Tor Project ticket it looks like the entire network is under stress or load of some kind. Not much we can do on our end, I don’t think. I haven’t touched the configs before this started happening, unfortunately.
dds6qkxpwdeubwucdiaord2xgbbeyds25rbsgr73tbfpqpt4a6vjwsyd.onion is up and functional for me. I haven’t touched the machine since yesterday, and it came up on its own. I am inclined to believe it was simply congestion / stress of the whole Tor network. I believe hidden services form unique circuits, and it just so happened that the main v3 hidden service couldn’t find a reliable circuit, while the other hidden services could.
With both the v2 and v3 Whonix onions down for 3 days, there is some serious problem going on.
Tor Metrics shows the number of “users” on the network has halved since Tor v3.2.10 was released, meaning the DDoS defenses are working in general (Germany alone lost around a million “users” over several days).
However, I saw a Tor ticket referring to too many circuit connections being opened (both client- and server-side?), which might be being exploited somehow to cause the persistent takedown effect we are seeing.
As a test, I checked various other v3 onions to see if they connect:
If it were a network-wide attack, I would expect other random v3 onions to fail at least once. That’s not the case.
Therefore, since you haven’t changed the configuration, it is most likely:
a) A targeted attack, meaning there is a strong rationale for a Whonix mirror or a server with more capacity (?)
b) Some misconfiguration that wasn’t picked up previously but is now being exploited.
c) Something else.