make apt-get whonix.org onion updates more reliable
Disadvantages:
According to the excerpt from the Tor manual below, this can never be undone. So if we later wanted to keep the onion domain but hide the IP for anti-DDoS reasons, it wouldn't work and we'd have to generate a new onion address.
Less used than regular hidden services, so there might be bugs exclusive to this mode.
Experimental - Non Anonymous Hidden Services on a tor instance in HiddenServiceSingleHopMode make one-hop (direct) circuits between the onion service server, and the introduction and rendezvous points. (Onion service descriptors are still posted using 3-hop paths, to avoid onion service directories blocking the service.) This option makes every hidden service instance hosted by a tor instance a Single Onion Service. One-hop circuits make Single Onion servers easily locatable, but clients remain location-anonymous. However, the fact that a client is accessing a Single Onion rather than a Hidden Service may be statistically distinguishable.
WARNING: Once a hidden service directory has been used by a tor instance in HiddenServiceSingleHopMode, it can NEVER be used again for a hidden service. It is best practice to create a new hidden service directory, key, and address for each new Single Onion Service and Hidden Service. It is not possible to run Single Onion Services and Hidden Services from the same tor instance: they should be run on different servers with different IP addresses.
HiddenServiceSingleHopMode requires HiddenServiceNonAnonymousMode to be set to 1. Since a Single Onion service is non-anonymous, you can not configure a SOCKSPort on a tor instance that is running in HiddenServiceSingleHopMode. Can not be changed while tor is running. (Default: 0)
HiddenServiceNonAnonymousMode 0|1
Makes hidden services non-anonymous on this tor instance. Allows the non-anonymous HiddenServiceSingleHopMode. Enables direct connections in the server-side hidden service protocol. If you are using this option, you need to disable all client-side services on your Tor instance, including setting SOCKSPort to "0". Can not be changed while tor is running. (Default: 0)
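To make the excerpt more concrete, a minimal torrc sketch for a Single Onion Service might look like the following; the hidden service directory and port mapping are placeholders rather than our actual configuration, and per the warning above the directory must be one that has never been used for a regular hidden service:

    # Single Onion Service: server-side anonymity is given up for fewer hops.
    HiddenServiceNonAnonymousMode 1
    HiddenServiceSingleHopMode 1
    # Client-side functionality must be disabled on this tor instance.
    SOCKSPort 0
    # Placeholder path and port mapping; use a fresh directory that was
    # never used for a normal hidden service.
    HiddenServiceDir /var/lib/tor/single_onion/
    HiddenServicePort 80 127.0.0.1:80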
I would say keep it as it is now unless it gets to the point where the forum is flooded with support requests. I would guess requests will increase for a little while, but it should taper off.
Why does the fate of the user depend on the configuration of the server? That shouldn't be in the server's power.
Why is there more ddos risk in SingleHopMode?
What do you mean by intersection attack?
I am not arguing we should use SingleHopMode right now, but I would like to capture this issue in a ticket on trac.torproject.org.
It happens that such tickets are never created and therefore never get fixed upstream. "timestamp leaked in TLS client hello (#7277) · Issues · Legacy / Trac · GitLab" is an example that demonstrates this: I was merely reading another ticket and found a comment mentioning an issue that hadn't made it into its own ticket, so it would never have been addressed.
I think they meant malicious Tor nodes and not the destination server. Since it's a freshly introduced feature that hasn't been as well studied, it carries this major disclaimer.
Because you are weakening Tor's protection for your server by reducing the hops.
Distinguishing between different users based on differences in the way they use the network. In this case, malicious nodes could tell apart regular vs. single-hop user traffic and exploit that difference to unmask users.
Agreed. A ticket that tracks the problem and potential fixes would be useful.
Please note the data on the stage server is old; it was restored from a backup some time ago in order to test some software upgrades. We don't really need up-to-date data on the stage server (although we can easily restore nightly backups there).
Just posting it in case anyone wants to test the performance of http://uc7y4xbecvp75lgexvgrm2jb37w3riingoams4mmwh3mlztwn4ca2fad.onion under this Single Hop mode. Personally I don't notice much difference. Of course, it's not a fair test (the production server is a lot busier than the stage server, they aren't built to the same specification, and they are located in different geographies).
On the other hand, maybe the point of this mode isn't to load the site "faster" but to load it more reliably and consistently.
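For anyone who wants a quick, informal timing comparison over Tor, something like the following works; it assumes curl and torsocks are installed and a local Tor client is running, and it only measures the total fetch time of a single page:

    torsocks curl -s -o /dev/null -w 'total: %{time_total}s\n' \
        http://uc7y4xbecvp75lgexvgrm2jb37w3riingoams4mmwh3mlztwn4ca2fad.onion/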
I'd love to see some regular (very frequent) backup-of-backup feature implemented, so that if there is some catastrophic failure on the main machine(s) we're not FUBAR, i.e. a MediaWiki backup and whatever else is critical. The analogy is encrypted data: always store at least two copies so you're not stuffed if one goes down.
The real performance test would be single hop mode on the main site, with tons of people running apt-get and hitting Whonix packages for downloads, etc. Anyway, that single hop mode test is very snappy with the wiki.
But as you say, hard to know with a server receiving little traffic. Hopefully your network will know more.
Yes, that's what I meant to say (I probably wasn't clear). The stage server is also the receiver of our nightly backups; they just go to a dedicated encrypted disk and don't sync directly into the location that would serve the files. We then manually sync to the "serving" disk when we feel like testing new stuff.
But rest assured we have a nightly sync of all the apps and databases, even the Qubes mirror (a mirror of a mirror, yo dawg…).
Yes. In addition to the nightly sync, we have offsite incremental snapshots of the backup disk itself, so we can restore back in time some days or weeks, and to a totally new server if need be (it's not tied to the stage server, in case that gets hosed). It also means that if the production data is corrupted, and syncing to stage then corrupts the stage backup, we still have the previous day's snapshot, or days before that. Backups of our backups.
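The actual snapshot tooling isn't spelled out here, but as a rough sketch of the "incremental snapshots of the backup disk" idea, an rsync hard-link rotation would look roughly like this (all paths and the retention scheme are hypothetical, not the real infrastructure):

    #!/bin/sh
    # Illustrative daily snapshot of the backup disk using rsync hard links.
    SRC=/mnt/backup/                 # disk that receives the nightly sync (placeholder)
    DST=/mnt/offsite-snapshots       # dated snapshots accumulate here (placeholder)
    TODAY=$(date +%F)
    # Unchanged files are hard-linked against the previous snapshot, so each
    # dated directory is a full tree but only changed files use new space.
    rsync -a --delete --link-dest="$DST/latest" "$SRC" "$DST/$TODAY"
    ln -sfn "$DST/$TODAY" "$DST/latest"   # point 'latest' at the new snapshot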
All this is new since I came on board. Don't worry, I've been doing this sort of thing for a long time.