make apt-get whonix.org onion updates more reliable
According to the excerpt from the Tor manual below, this can never be undone. So if we later want to keep the onion domain and hide the IP for anti-DDoS reasons, it wouldn’t work and we’d have to generate a new onion.
Less used than regular hidden services, so there might be bugs exclusive to this mode.
Experimental - Non Anonymous Hidden Services on a tor instance in HiddenServiceSingleHopMode make one-hop (direct) circuits between the onion service server, and the introduction and rendezvous points. (Onion service descriptors are still posted using 3-hop paths, to avoid onion service directories blocking the service.) This option makes every hidden service instance hosted by a tor instance a Single Onion Service. One-hop circuits make Single Onion servers easily locatable, but clients remain location-anonymous. However, the fact that a client is accessing a Single Onion rather than a Hidden Service may be statistically distinguishable.
WARNING: Once a hidden service directory has been used by a tor instance in HiddenServiceSingleHopMode, it can NEVER be used again for a hidden service. It is best practice to create a new hidden service directory, key, and address for each new Single Onion Service and Hidden Service. It is not possible to run Single Onion Services and Hidden Services from the same tor instance: they should be run on different servers with different IP addresses.
HiddenServiceSingleHopMode requires HiddenServiceNonAnonymousMode to be set to 1. Since a Single Onion service is non-anonymous, you can not configure a SOCKSPort on a tor instance that is running in HiddenServiceSingleHopMode. Can not be changed while tor is running. (Default: 0)
Makes hidden services non-anonymous on this tor instance. Allows the non-anonymous HiddenServiceSingleHopMode. Enables direct connections in the server-side hidden service protocol. If you are using this option, you need to disable all client-side services on your Tor instance, including setting SOCKSPort to “0”. Can not be changed while tor is running. (Default: 0)
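Putting the two options from the manual excerpt together, a minimal torrc for a Single Onion Service might look like the sketch below. The directory path and port mapping are illustrative, not our actual server layout:

```
# Non-anonymous single-hop onion service (server side only).
# Both options must be set together and cannot be changed while tor is running.
HiddenServiceNonAnonymousMode 1
HiddenServiceSingleHopMode 1

# Client-side services must be disabled in this mode.
SOCKSPort 0

# Use a FRESH directory and key: per the warning above, a directory used
# in single-hop mode can never be reused for a regular hidden service.
HiddenServiceDir /var/lib/tor/single_onion_service/
HiddenServicePort 80 127.0.0.1:80
```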
I think they meant malicious Tor nodes and not the destination server. Since it’s a freshly introduced feature that hasn’t been as well studied, it carries this major disclaimer.
Because you are weakening Tor’s protection for your server by reducing the hops.
Distinguishing between different users because of differences in the way they use the network. In this case, malicious nodes could tell apart regular vs. single-hop traffic and exploit that difference to unmask users.
Agreed. A ticket that tracks the problem and potential fixes would be useful.
Please note the data on the stage server is old, it was restored from backup some time ago in order to test some software upgrades. We don’t really need up to date data on the stage server (although we can easily restore nightly backups there).
Just posting it in case anyone wants to test the performance of http://uc7y4xbecvp75lgexvgrm2jb37w3riingoams4mmwh3mlztwn4ca2fad.onion under this Single Hop mode. Personally I don’t notice much difference. Of course, it’s not a fair test (the production server is a lot busier than the stage server, they aren’t built to the same specification, and they are located in different geographic locations).
On the other hand, maybe this mode isn’t intended to make the site load faster, but to make it load more consistently.
I’d love to see some regular (very frequent) backup-of-backup feature implemented so that if there is some catastrophic failure on the main machine(s), we’re not FUBAR, e.g. the MediaWiki backup and whatever else is critical. Analogy is encrypted stuff: always store at least 2 copies so you’re not stuffed if one goes down.
The real performance test would be singlehopmode as the main site, with tons of people doing apt-get and hitting Whonix packages for downloads etc. Anyway, that single hop mode test is very snappy with the wiki.
But as you say, hard to know with a server receiving little traffic. Hopefully your network will know more.
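For anyone who wants to run the apt-get test themselves: assuming the repository is actually served from the onion address posted above (an assumption on my part, as is the suite name), apt can be pointed at it with the apt-transport-tor package, roughly like so:

```
# /etc/apt/sources.list.d/whonix-onion.list
# Requires the apt-transport-tor package to be installed.
# Onion address and suite name here are assumptions for illustration.
deb tor+http://uc7y4xbecvp75lgexvgrm2jb37w3riingoams4mmwh3mlztwn4ca2fad.onion/ bullseye main
```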
Yes, that’s what I meant to say (probably wasn’t clear). The stage server is also the receiver of our nightly backups, they just go to a dedicated encrypted disk, they don’t sync directly into the location that would serve the files. We then manually sync to the ‘serving’ disk when we feel like testing new stuff.
But rest assured we have a nightly sync of all the apps and databases, even the Qubes mirror (a mirror of a mirror, yo dawg…)
Yes. In addition to the nightly sync, we then have offsite incremental snapshots of the backup disk itself, so we can restore back in time some days/weeks, and to a totally new server if need be (it’s not tied to the stage server, if that gets hosed). Also means that if the production data is corrupted, and then syncing to stage corrupts the stage backup, we have the previous day’s snapshot, or days before that. Backups of our backups.
All this is new since I came on board. Don’t worry, I’ve been doing this sort of thing a long time.