Relatively recent news suggests that the odds of a successful attack against a hidden service become 1:1 given enough time.
Has anyone any experience, or indeed thoughts, about using an i2p connection to connect the whonix-gateway and whonix-workstation/server?
This would mean that even if the gateway was physically compromised, the only (!!) information lost would be the onion private keys - as it would, in theory, be mathematically impossible to resolve the i2p address to an IP address.
Where did you find that? I was unable to find anything on such a disastrous hole. Could you send a link please? Also, what kind of attack? A DoS/DDoS? More information would be necessary to say anything about this.
With apologies I think I have not been clear in my question.
I’ll tackle your first question last, if that’s OK.
No - but please correct me if I have missed this - I’m suggesting the physical isolation of a whonix-gateway from a whonix-workstation - or, more importantly, a whonix-server containing all the data - using an i2p SSH proxy.
No - I’m suggesting linking gateway (containing keys) with server/workstation containing data - using an i2p ssh/vpn connection. If the gateway is compromised, the server/workstation is physically elsewhere.
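To make the idea concrete, a minimal sketch of the kind of setup I mean (everything here is illustrative: the host alias is made up, and this assumes an i2p router running its SOCKS client tunnel on the default 127.0.0.1:4447, with an I2P server tunnel on the workstation side forwarding to its SSH daemon):

```
# ~/.ssh/config fragment on the gateway side (hypothetical example)
Host ws-over-i2p
    # the .b32.i2p destination of the workstation's I2P server tunnel
    # (replace with your own tunnel's address)
    HostName yourtunneladdress.b32.i2p
    Port 22
    # route the TCP connection through the local I2P SOCKS client tunnel
    ProxyCommand nc -X 5 -x 127.0.0.1:4447 %h %p
```

The point being that the hop between GW and WS would be an I2P destination rather than an IP address.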
Can we say yes for now, so you don’t flame me for such an inflammatory statement at the very start of the post? I will find the document and post the link … but can we agree that a determined opponent (a government) will, given enough time, uncover a hidden service server?
Tunneling the whole connection over I2P via SSH as you suggest would in this case only make the attack surface bigger, as opposed to leaving the connection between GW and WS local: if the WS and GW are in different (not locally connected) networks, they’d both need additional connections to I2P to access each other, effectively increasing the number of IPs exposed in case physical access is gained…
Like I said, not necessarily safer - it actually increases the attack surface. The less local control there is over what is used, the less safe your configuration is. If the GW and WS aren’t locally present in the same network, using them safely isn’t possible.
That wasn’t flaming; however, without knowing what the problem is, solutions can’t be found. Most of the time when someone carries something like this to the surface, it either is very old and already fixed, based on misinformation or poorly conducted studies, or is nothing for Whonix to fix. If the problem is as big as you say it is, this would be something the core Tor developers would have to fix in every iteration of Tor. Also, again: what kind of attack are we talking about? You can’t talk about a 1:1 probability and then about physical access, because those two things contradict each other.
Define uncover. Finding out where a hidden service hosted on a Whonix-based system is located has not been proven possible by anyone. Also, why did you then write about physical access? Deanonymizing/uncovering a hidden service’s real location/IP without physical access is not possible at the moment.
If you have physical access to the workstation, even if you separated the two as you proposed onto, let’s say, different servers in different countries, the real location of the GW could be found simply by following what is on the WS, as those two would need a fixed point of connection to work properly. Also, to give the WS access to the GW over I2P via SSH, the WS would need separate access to the internet as well, meaning you have now exposed two IPs while trying to prevent exactly that.
What could be made a case for, though, is getting a small server at a more or less trustable hoster, paying that hoster only via an anonymous currency and running Whonix (WS and GW) on it, while accessing that configuration via Whonix on your computer. Hosting a hidden service over this configuration would also hardly ever be able to expose your real IP in any scenario. However, that design could in turn have its own problems: mathematically speaking, and backed up by quite a lot of research, it has been proven that using more than the three relays used by Tor usually actually increases the chance of having a malicious one at the start and end, as explained here: https://www.torproject.org/docs/faq.html.en#Proxychains
The question seems to be: If my Whonix Gateway is compromised, would it be better to have an additional anonymity layer between me and the Gateway? The answer is obviously, of course! You are beginning with the assumption that the key component of your anonymity strategy has failed.
The questions that aren’t being asked are:
1.) Is a remote Whonix-Gateway easier to compromise than a local one? The attacker has the same methods available that he used to compromise your local machine, plus he may now have physical access to the Gateway as well, so probably yes.
2.) Is a remote compromised Whonix-Gateway harder to detect than a local one? Yes. It is much harder to implement anti-evil-maid measures on a machine not under your physical control.
3.) How damaging can a compromised remote Whonix-Gateway be to my anonymity? As you stated, the Gateway won’t have direct access to your IP. However, additional attacks become available since the attacker can see your non-encrypted traffic, manipulate your encrypted traffic, perform timing attacks, etc. Additionally, given #2 above, you most likely won’t know that it’s compromised.
4.) Will my i2p Gateway be harder to compromise than my Tor Gateway? Open question, difficult to answer since i2p probably hasn’t had the same amount of scrutiny as Tor.
Not so relevant to the OP, but it’s probably not a good idea to use workstation & server (hidden service) interchangeably. Hidden services have different characteristics from Tor clients that have made them susceptible to unique attacks in the past.
For example, services that are reachable through Tor hidden services and the public Internet are susceptible to correlation attacks and thus not perfectly hidden. Other pitfalls include misconfigured services (e.g. identifying information included by default in web server error responses), uptime and downtime statistics, intersection attacks, and user error.
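As a small illustration of the "misconfigured services" pitfall (my own example, not from any post above): a web server behind a hidden service should at minimum listen only on localhost and avoid advertising version banners. For nginx, something along these lines (paths and ports are illustrative):

```
# /etc/nginx/nginx.conf fragment (illustrative; adjust to your setup)
http {
    # don't advertise the nginx version in headers and error pages
    server_tokens off;

    server {
        # bind to localhost only, so the service is reachable solely
        # through the hidden service mapping, never via a public IP
        listen 127.0.0.1:8080;
        server_name _;
        root /var/www/hidden;
    }
}
```

None of this helps against correlation or intersection attacks, but it removes the cheapest mistakes.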
The CMU attack was fundamentally a “guard node” attack; guard nodes are the first hop of a Tor circuit and hence the only part of the network that can see the real IP address of a hidden service. Last July we fixed the attack vector that CMU was using (it was called the RELAY_EARLY confirmation attack) and since then we’ve been devising improved designs for guard node security.
If you’re interested in running a hidden service:
Do you think it’s safe to run an onion service?
It depends on your adversary. I think onion services provide adequate security against most real life adversaries.
However, if a serious and highly motivated adversary were after me, I would not rely solely on the security of onion services. If your adversary can wiretap the whole Western Internet, or has a million dollar budget, and you only depend on hidden services for your anonymity then you should probably up your game. You can add more layers of anonymity by buying the servers you host your hidden service on anonymously (e.g. with bitcoin) so that even if they deanonymize you, they can’t get your identity from the server. Also studying and actually understanding the Tor protocol and its threat model is essential practice if you are defending against motivated adversaries.
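For reference, the service-side configuration being discussed is just the standard two-line torrc mapping (the directory path and ports here are illustrative); all the hardening discussed above concerns what surrounds these two lines:

```
# torrc fragment: expose a local web server as <onion-address>:80
HiddenServiceDir /var/lib/tor/my_hidden_service/
HiddenServicePort 80 127.0.0.1:8080
```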
“Given enough time” is the entire basis for public-key cryptography. “Given enough time”, all encryption is moot! (Maybe. PQCrypto)
Thank you, I wish I had phrased it like that myself!
I think I read about something along these lines, thank you. Yes, this is the assumption; I was not clear.
Given the range of devices that can be used as a gateway and the range of places they could be deployed - but let’s assume simply a small PC housed in an anonymous datacentre. I’m no expert, but imagine a custom case containing a very sensitive rocker switch, ready to throw 12V through thin enamel-coated wire wrapped around matches next to the RAM, and restart the heavily encrypted machine. Or something like that. You would then fail over to a second gateway, previously prepared at a second physical location.
Obviously this would be pretty bad news - but I’m looking to keep the web-facing gateway apart from the data - if correctly administered, perhaps providing warning, time and the ability to fail over to another gateway.
Cannot argue with this. Interested in keeping that from becoming a self-fulfilling prophecy.
I won’t take your assumption of my admin skills personally … but perhaps it’s time to migrate my whonix setup away from Windows NT4 …
That was one of them, thank you. I realised too late I had stupidly paraphrased a document I didn’t immediately have access to.
True, but currently I think that time is at least a year or two. My concern is that it’s not just about the maths - this is assuming an adversary that is willing to wholesale switch off the internet access of physical areas of its own country in order to narrow down suspects - and that’s one of its more polite tactics.
Qubes - very impressive software, keenly followed it, we don’t have a desktop without it installed as the only OS. Is it the best for servers though?
[quote=“Ego, post:4, topic:2380”]
they’d both need additional connections to I2P to access each other, effectively increasing the number of IPs exposed in case physical access is gained
[/quote]
True of the GW, but the WS would still only have one connection. I’m not sure we’re increasing the number of exposed IPs, given the length of i2p public keys.
Are you sure that’s not possible?
Entropy
[quote=“entr0py, post:5, topic:2380”]
If your adversary can wiretap the whole Western Internet
[/quote]
I don’t think I’ve been clear - I’m referring to various NS … GC … sort of operations that lie, cheat and steal. And worse, but let’s keep it above the belt.
[quote=“Ego, post:4, topic:2380”]
You can’t talk about a 1:1 probability and then about physical access, because those two things contradict each other.
[/quote]
The 1:1 probability leads to physical access. I’m looking to limit the physical access to the GW.
This is sort of where I’m leaning: physical access to a locked cage within a more or less trustable datacentre.
So you are talking about the Silk Road attacks? In the first post you wrote:
Links to that news would be appreciated, as I and the people I know who keep an eye on Tor/hidden service exploits are unable to find it. Without it, hypothesizing whether something will make a hidden service safer or not is pointless.
I am aware of the fact that with Tor, some things which might seem intuitively right are actually wrong/unsafe. For example, a lot of people (literally daily) propose that Tor should start to use more than three relays - the more connections there are, the less likely it is to get attacked, right? Well, while intuition might suggest this, the reality is different (https://blog.torproject.org/category/tags/entry-guards).
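To make that intuition failure concrete, here is a toy back-of-the-envelope calculation (my own illustrative numbers, not from the linked post): if some fraction of relays is malicious, picking a fresh entry relay for every circuit almost guarantees you eventually start a circuit at a malicious relay, whereas sticking to a fixed entry guard caps your exposure at a single unlucky draw.

```python
# Toy model of the entry-guard argument (illustrative numbers only).
c = 0.10          # assumed fraction of malicious relays in the network
n_circuits = 100  # circuits built over some period of use

# Fresh random entry per circuit: probability that at least one circuit
# starts at a malicious relay grows towards 1 with continued use.
p_rotating = 1 - (1 - c) ** n_circuits

# Fixed entry guard: you are either unlucky once (probability c) or safe.
p_guard = c

print(f"rotating entries: {p_rotating:.4f}")
print(f"fixed guard:      {p_guard:.4f}")
```

With these numbers the rotating strategy is almost certain (>99.99%) to have handed a circuit to a malicious entry at some point, while the fixed guard keeps it at 10% - which is why "more/rotating relays" is not automatically safer.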
The same is very likely the case here, though I can’t say for sure unless you tell me where the piece of information that started all this came from. If you could do that, I’d be very thankful.
If, however, this is just about Silk Road, which you wrote about in your last post to define deanonymization, I have to disappoint, or rather reassure, you. Neither version of Silk Road was exposed because of the NSA’s or other government agencies’ capability to “crack”, as people liked to call it back then, the Tor network, but rather because the owners of these sites didn’t follow what at the time were already considered necessary security precautions. They were taken down by “inside operations” based not on holes in Tor but on their recruitment. The people running these platforms fell victim to simple social engineering, coupled with agencies working together and monitoring packages sent out by suspects in order to perform arrests. This was the case with both Silk Road and Silk Road 2.0.
1.) Using a more recent version of the Tor Browser Bundle, which had long been upgraded to a version without this issue.
2.) Having NoScript activated.
3.) Using Whonix, which prevents IP leaks altogether even if you aren’t using the most recent TBB (although you should still always update).
So in conclusion: more information on the weakness you mentioned would be nice (an article or scientific paper, maybe) to evaluate whether your ideas would really prevent this threat (or actually weaken security, as your proposal currently would, unless it protects against something specific); whether it is an issue better addressed by the core Tor team; whether it has already been patched; or whether, as in the case of Silk Road, it is something no software solution in the world could prevent. As far as I can tell, in the case of the Silk Road attacks (where the whole hidden service was hosted far away from the people controlling it and wasn’t actually exposed before any arrests took place), your idea wouldn’t have made a difference; it would mainly have increased load and the needed infrastructure while not providing any significant benefit.