Looking for mirror hosts! - Mirroring instructions updated

[html]

TLDR / Short

Want to mirror Whonix releases?

Updated instructions can be found here:

https://www.whonix.org/wiki/Hosting_a_Whonix_Mirror

Full Story

At the moment we’re still using SourceForge as the primary download mirror, because there is a problem with mirror.whonix.org and non-https downloads. That is, for better security, we asked for the whole of whonix.org to be added to the HSTS preload list before we had mirror.whonix.org in mind. Now some browsers rightly attempt to enforce https on mirror.whonix.org, which our mirrors do not support. Changing the whonix.org HSTS settings would take a long time to reach major browsers and operating systems (not sure whether Debian stable uses a hard-coded HSTS list).
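For context, HSTS enforcement is driven by the `Strict-Transport-Security` response header plus the browsers' hard-coded preload list. A minimal sketch of how such a header breaks down (Python; the header value is illustrative, not the real whonix.org policy):

```python
def parse_hsts(header: str) -> dict:
    """Split an HSTS header such as
    'max-age=31536000; includeSubDomains; preload' into its directives."""
    directives = {}
    for part in header.split(";"):
        part = part.strip()
        if not part:
            continue
        if "=" in part:
            key, _, value = part.partition("=")
            directives[key.strip().lower()] = value.strip()
        else:
            # valueless directives like includeSubDomains / preload
            directives[part.lower()] = True
    return directives

# Illustrative header value, not taken from whonix.org.
policy = parse_hsts("max-age=31536000; includeSubDomains; preload")
```

The `preload` directive is what qualifies a domain for the hard-coded list; once a domain is in it, changing the server-side header no longer helps until browsers ship an updated list, which is exactly the delay described above.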

Therefore soon mirror.whonix.de will become Whonix’s primary download mirror.

Our short/mid-term plan is to build a stable http mirror network by getting in touch with lots of mirror operators. Our long-term plan is getting sslmirror.whonix.org. About the latter idea, you can read more here:

[liberationtech] software download over SSL mirrors?

[/html]

I’m wondering if it is a worthwhile goal to implement SSL of downloads since it can give a false sense of security. It does not protect against a global adversary who can gain control of the server, and it might make people less likely to verify the signature of a download because it came via SSL.

Instead, manual signature verification of a download should (to my thinking) defeat even a malicious server, since, hopefully, the key has been added to the user’s keyring by another channel. Even if an adversary controlled the Whonix download server, all mirrors AND the website (which links to Patrick’s key), we might reasonably expect Patrick to occasionally verify that the downloadable key is correct, and to alert everyone if it wasn’t.

What about, instead, having an automated signature verification of some kind, so even the laziest of users will implement it?

Sorry for late answer. Missed this one.

[quote]I'm wondering if it is a worthwhile goal to implement SSL of downloads since it can give a false sense of security. It does not protect against a global adversary who can gain control of the server, and it might make people less likely to verify the signature of a download because it came via SSL.[/quote]

Yes, may seem like an oxymoron, but I think there is value in it. Even if it only defeats weaker adversaries, that’s something. For more discussion, see:
https://github.com/Whonix/Whonix/issues/177

[quote]the key has been added to the user's keyring by another channel,[/quote]
What channel could that be? It's difficult for the Whonix project to solve issues that most other projects also have to struggle with. There are a lot of gaps in the security ecosystem.
[quote]What about, instead, having an automated signature verification of some kind, so even the laziest of users will implement it?[/quote]
It's a (Whonix-)project-sized task to tackle this issue, it looks like.

That would be either:

Or more generally, a sorely missing feature in the security world: metalinks with GPG support:
https://github.com/Whonix/Whonix/issues/21

[quote]Yes, may seem like an oxymoron, but I think there is value in it. Even if it only defeats weaker adversaries, that’s something. For more discussion, see:
https://github.com/Whonix/Whonix/issues/177[/quote]

I think it gives a false sense of security, if the certificate is the usual CA type, because I consider it very likely that the NSA has copies of any such certificates and can do a man-in-the-middle attack on such connections whenever they want to. But people see the green bar at the top of their browser and so consider the connection safe, and therefore take no other precautions. Signature verification is harder to do but has to be done anyway, and it shuts out any attacker, global or otherwise.

I guess it also depends upon one’s threat model: I myself am not much concerned about being attacked by the Russians or Chinese, and anyway, we cannot assume that they too have not obtained copies of all the CA certificates. They might also have agents inside western server-hosting companies. That just leaves, in my estimation, private crooks who seek to break into computer systems for monetary gain. SSL/TLS might keep that last group out with a reasonable degree of confidence, but if one does a signature verification of the download then it’s moot.

[quote]the key has been added to the user's keyring by another channel,[/quote] What channel could that be?

A different web server, at a different provider, perhaps in a different state jurisdiction. I note that the images are on whonix.de while your public key is on whonix.org - I don’t know where they are physically but there’s no reason to have them in the same place. I’m just saying that one has great leeway in diversifying them. Since keys and signatures are such small files then they might easily be hosted on a home or office server with secure physical access.

I note also that there are torrents for the images, a very different channel from the key. The torrent protocol has its own secure (?) data-checking function built in. As long as there is a fair crowd of file-sharers (I have no idea how many) on the torrent, I feel that it is reliably resistant to a MITM attack. In my own planning, I consider that model safe (i.e., a torrent plus a web-hosted key), especially if one does occasional downloads of the torrent from random locations and compares them with a master copy.
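The built-in data checking referred to above is BitTorrent's per-piece SHA-1 hashing: the .torrent metadata carries one 20-byte digest per fixed-size piece, and clients discard any piece that doesn't match. A toy sketch of the idea (Python; the piece size and payload are synthetic stand-ins, not a real .torrent):

```python
import hashlib

PIECE_LENGTH = 16  # real torrents use 16 KiB-4 MiB; tiny here for illustration

def piece_hashes(data: bytes, piece_length: int) -> bytes:
    """Concatenated SHA-1 digests of each fixed-size piece, as stored
    in the 'pieces' field of a .torrent info dictionary."""
    out = b""
    for i in range(0, len(data), piece_length):
        out += hashlib.sha1(data[i:i + piece_length]).digest()
    return out

def verify(data: bytes, pieces: bytes, piece_length: int) -> bool:
    """True iff every downloaded piece matches its recorded hash."""
    return piece_hashes(data, piece_length) == pieces

original = b"contents of some hypothetical ISO image........"
recorded = piece_hashes(original, PIECE_LENGTH)   # what the .torrent carries

tampered = original[:20] + b"X" + original[21:]   # one flipped byte
assert verify(original, recorded, PIECE_LENGTH)
assert not verify(tampered, recorded, PIECE_LENGTH)
```

This is also why, as discussed later in the thread, the attack reduces to tampering with the .torrent file itself: once the piece hashes are trusted, the payload cannot be silently swapped.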

[quote] What about, instead, having an automated signature verification of some kind, so even the laziest of users will implement it?[/quote] It's a (Whonix-)project-sized task to tackle this issue, it looks like.

That would be either:

Or more generally, a sorely missing feature in the security world: metalinks with GPG support:
https://github.com/Whonix/Whonix/issues/21
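For illustration, a Metalink (RFC 5854) file already carries multiple mirror URLs, hashes, and even an OpenPGP `signature` element; the missing piece, per the issue above, is download clients that actually fetch the key and verify the signature. A hypothetical sketch (all names, sizes, and digests are made up):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<metalink xmlns="urn:ietf:params:xml:ns:metalink">
  <file name="Whonix-Gateway.ova">
    <size>1782579200</size>
    <hash type="sha-256">(hex digest would go here)</hash>
    <!-- RFC 5854 defines this signature element; the gap is client
         support for verifying it against a trusted key. -->
    <signature mediatype="application/pgp-signature">
-----BEGIN PGP SIGNATURE-----
...
-----END PGP SIGNATURE-----
    </signature>
    <url>https://mirror.example.org/Whonix-Gateway.ova</url>
    <url>http://another-mirror.example.net/Whonix-Gateway.ova</url>
  </file>
</metalink>
```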

Thank you for those references. I was thinking of something like the Debian secure download process. I’m not sure exactly how the Debian public key normally gets to the user, but it might already be on the ISO image that one starts with, which in turn one is supposed to verify against the signature supplied on the same website as the images.

EDIT: The objection might be raised that a torrent downloader is vulnerable to deanonymization by an adversary who can access server logs, because they expose their IP addresses, and that if one instead downloads from the website over a Tor connection then such deanonymization is less likely (but can’t be ruled out, because of traffic correlation). And running a torrent over Tor is frowned upon. OK. Concealment of file-sharers’ IP addresses by means other than Tor is what the Phantom anonymity protocol addresses. It will be a tool for many people outside of file-sharing, and will indeed be a different channel for securely passing keys, at least, and perhaps for large files as well.

[quote]A different web server, at a different provider, perhaps in a different state jurisdiction. I note that the images are on whonix.de while your public key is on whonix.org - I don't know where they are physically but there's no reason to have them in the same place. I'm just saying that one has great leeway in diversifying them. Since keys and signatures are such small files then they might easily be hosted on a home or office server with secure physical access.[/quote]
And how would one learn about this? An adversary in a position to put a malicious key on whonix.org could simply tell certain users: "use that key, download from malicious resource xyz".
[quote]I note also that there are torrents for the images, a very different channel from the key. The torrent protocol has its own secure (?) data-checking function built-in. As long as there is a fair crowd of file-sharers (I have no idea how many) on the torrent, I feel that it is reliably resistant to a MITM attack. In my own planning, I consider that model safe (i.e., a torrent plus a web-hosted key) especially if one does occasional downloads of the torrent from random locations and compares them with a master copy.[/quote]
An adversary in a position to modify the torrent file could lead you to a fake torrent network sharing the malicious file.

I don’t think there is any easy solution to the hard problem of the web of trust.

[quote]Thank you for those references. I was thinking of something like the Debian secure download process. I'm not sure exactly how the Debian public key normally gets to the user but it might already be on the ISO image that one starts with, which in turn one is supposed to verify against the signature supplied on the same website as the images.[/quote]
It's on the ISO by default. But how does one get a trusted Debian ISO to start with? How does one get the key to verify the ISO? At best, also only from some https-protected page. There is only https://ftp-master.debian.org/keys.html and the SSL of that page looks somewhat broken.
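For reference, the Debian process alluded to boils down to: fetch `SHA512SUMS` and `SHA512SUMS.sign`, verify the signature with the CD signing key, then check the image against the sums. The last step can be sketched like this (Python; the file name and contents are stand-ins, and the gpg step is deliberately left out since it needs the key in your keyring):

```python
import hashlib

def check_sums(sums_text: str, files: dict) -> dict:
    """Verify files against a SHA512SUMS-style listing
    ('<hexdigest>  <filename>' per line). `files` maps name -> bytes.
    Returns name -> True/False for every listed file we have."""
    results = {}
    for line in sums_text.strip().splitlines():
        digest, _, name = line.strip().partition("  ")
        if name in files:
            actual = hashlib.sha512(files[name]).hexdigest()
            results[name] = (actual == digest)
    return results

iso = b"pretend ISO contents"                      # stand-in for a real image
listing = hashlib.sha512(iso).hexdigest() + "  debian-12.iso"
assert check_sums(listing, {"debian-12.iso": iso}) == {"debian-12.iso": True}
```

Which restates the point above: without the gpg step, the checksums authenticate nothing, since whoever can swap the ISO can swap the sums file too.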

See also:

You’re back to meeting some Debian developers somewhere. Believe that they’re really Debian developers. Make a leap of faith. Get their key fingerprints. After trusting their keys, verify the ISO signing key, or ask them to tell you what they think the ISO signing key is.

To do that, the adversary has to identify the certain users. If users/downloaders do even minimal obfuscation of their IP, such that they can’t be identified on the fly but only, perhaps, later by analysis, then they get the good file. Once I start downloading a file it’s probably too late for a MITM to tamper with it without being noticed. The only remedy for the adversary would be to redirect users regardless of whether he is certain of their identity, which produces misidentifications, with users other than those certain ones noticing that something is wrong. In particular, the uploader of the file, who may (should!) also be a downloader, is likely to get hit, in which case the adversary is revealed.

[quote]An adversary in a position to modify the torrent file could lead you to a fake torrent network sharing the malicious file.[/quote]

By selectively giving a bad torrent file to certain users? Then he needs to identify those particular users before they download the torrent file (see above).

I don't think there are any easy solution to the hard problem of the web of trust.

I think our problem is easier than the adversary’s. We can close the web of trust so that it is a loop: I upload a file to a crowd and then later, as an anonymous member of that crowd, download a copy of the file and verify that it is the same file. I trust myself, the person with whom that chain starts and ends, so the problem becomes the easier one of making that crowd so anonymous that it becomes infeasible for the adversary to introduce a corrupt file while also preventing me from seeing it, since he can’t tell one member of the crowd from another.
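The closed loop described here is easy to automate: record a digest of the master copy at upload time, then compare it against an anonymously re-downloaded copy. A minimal sketch (Python, with file contents simulated as bytes):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the master copy, recorded at upload time."""
    return hashlib.sha256(data).hexdigest()

def loop_is_closed(master: bytes, redownloaded: bytes) -> bool:
    """True iff the copy fetched back from the crowd matches the original."""
    return fingerprint(master) == fingerprint(redownloaded)

master_copy = b"the ISO image the uploader seeded"
clean = bytes(master_copy)                         # honest re-download
poisoned = master_copy.replace(b"ISO", b"EVIL")    # tampered re-download
assert loop_is_closed(master_copy, clean)
assert not loop_is_closed(master_copy, poisoned)
```

The security does not come from the hash comparison itself but from the adversary's inability to tell the uploader's anonymous re-download apart from anyone else's.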

[quote]You're back to meeting some Debian developers somewhere. Believe that they're really Debian developers. Make a leap of faith. Get their key fingerprints. After trusting their keys, verify the ISO signing key, or ask them to tell you what they think the ISO signing key is.[/quote]

Yes, if it’s a one-way trust. If it is a closed loop then it is solved. I alluded to that in my other post here: Whonix Forum. In the clip, Linus claims never to keep backups; he is confident that he can always clone the master branch of two or three of his team members, compare them to each other (after placing the head of the clones at the commit whose hash he has written down somewhere), and he is done. Actually, what I’m describing above is safer than that, because the master copy is still in the hands of the uploader: it isn’t a restore from backup.

[quote]To do that, the adversary has to identify the certain users. If users/downloaders do even minimal obfuscation of their IP, such that they can't be identified on the fly but only, perhaps, later by analysis, then they get the good file.[/quote]
But then why bother uploading the signing key to third party servers? In that case, they'd also have the legit key.
[quote]Once I start downloading a file it's probably too late for an MITM to tamper with it without being noticed.[/quote]
I don't think that's hard. It's "just" a conditional if/then during the download.

[quote]But then why bother uploading the signing key to third party servers? In that case, they’d also have the legit key.[/quote]

I’m not sure what you mean.

What I was referring to, by the downloader getting a good file, was that one type of attack, a MITM serving a poisoned file to certain (i.e., identifiable) users, and only those users, in real time, would be very difficult*.

* I’m reluctant to say anything is impossible, and “very difficult” means, in practical terms, “very expensive.”
[quote]Once I start downloading a file it's probably too late for an MITM to tamper with it without being noticed.[/quote] I don't think that's hard. It's "just" a conditional if/then during the download.

I was thinking of the CRCs that are typically embedded in any file, file archive, and in the blocks of a download process. Each layer typically has at least one such checksum. They have to be recalculated on the fly (at the latest before the user has finished downloading his file) so as to be mathematically consistent with the original: the very same CRC would need to be generated for the poisoned file as for the original file if the user has already downloaded the block containing the CRC. If the attacker fails to do that, the download aborts/restarts, the archive won’t open, etc. My understanding is that there’s no direct way to do this, but that it has to be done by trial and error, working through the space of possible CRCs (adjusting/padding the poisoned file on each iteration) until one finds the desired CRC. All before the user has even downloaded the poisoned blocks, let alone the whole file.

I won’t say that’s impossible but I guess that it would take many cores to spring into action in an instant, and flawless software to run it. It is all happening too fast for human supervision. I.e., very, very expensive.
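For concreteness, the per-block checksums under discussion are typically plain CRC-32s, which can be computed per block like this (Python; the block size is a made-up example, real archives and transfer layers vary):

```python
import zlib

BLOCK = 4096  # hypothetical block size; real formats and protocols vary

def block_crcs(data: bytes, block_size: int = BLOCK) -> list:
    """CRC-32 of each fixed-size block, as an archive or transfer layer
    might embed them alongside the payload."""
    return [zlib.crc32(data[i:i + block_size]) & 0xFFFFFFFF
            for i in range(0, len(data), block_size)]

payload = bytes(range(256)) * 64      # 16 KiB of sample data (4 blocks)
tampered = b"\xff" + payload[1:]      # single-byte change in the first block

assert block_crcs(payload) != block_crcs(tampered)        # change is detected
assert block_crcs(payload)[1:] == block_crcs(tampered)[1:]  # later blocks match
```

Worth keeping in mind that CRC-32 is an error-detection code, not a cryptographic hash: it reliably catches accidental corruption but offers no proof against deliberate tampering, which is part of why this thread keeps returning to GPG signatures.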

The NSA has a mandate (I read somewhere) to gather as much information as possible at the least cost. I don’t see at this point in my understanding how they are going to throw such vast amounts of computer (and programmer) resources at the task of pwning one little enemy-of-the-state, when there are about 1 million Tor users and 7 million bittorrent users.

EDIT: …provided the user doesn’t have a predictable IP address. If the IP address that he will be using to do the download is known beforehand, then I suppose the adversary can set up his attack ahead of time.

[quote]I'm not sure what you mean.[/quote]

“by another channel” → “A different web server”

[hr]

Maybe you’re not up to date on all the NSA revelations. Can’t complain if you’re not, really. I don’t know everything about it myself. There are simply too many NSA programs. I need some kind of summary of all the stories that were released and the programs they run.

They have Quantum Insert and the Tailored Access Operations unit. More than 50,000 (?) staff. A budget of more than 10 billion USD.

Snowden’s info is outdated by now. But I suppose it has only gotten worse. And perhaps what he leaked, and what has been reported so far, was just the tip of the iceberg.

Check out Jacob Appelbaum’s last talk:

If it’s possible - they’re very likely doing it. My point: I think what you’re talking about here is easy for them.

Hmm, there are too many uncertainties for me to continue this line of discussion. My default mindset is simply to do my best, which is all one can do.

Do you think they have backdoored Whonix, for certain users?

I couldn’t resist one more comment. :slight_smile:

If a clearnet website also has an onion address, it might be an insurmountable problem for the NSA to serve up fake pages without being detected. My feeling is that even if it is not possible to eliminate the possibility of the NSA serving fake webpages to certain visitors of some arbitrary (clearnet) website, the website owner can at least arrange matters so that such substitution is (nearly) impossible to conceal, which would satisfy me.

Imagine that I publish a .torrent file, for an ISO, say, on my clearnet website as well as on my onion address (same webserver!), and that the actual box is in my personal possession behind multiple firewalls. Users visit both addresses (same site!) and a torrent swarm builds up. I look around and, according to the DHT and other methods of finding torrents, there is only the one swarm for my ISO (which, of course, I got going before publishing the torrent file).

If there was a second swarm purporting to be for the same name of file but I found that it had a different signature (!) then I would scream about it on my website, of course.

But who has ever heard of such a thing actually happening?

From what is currently known about attacks on Tor users and hidden sites, it is reasonable to conclude that the NSA cannot subvert the scenario I describe above without detection. Of course, one might argue that if they had been successful in hiding such activity then we wouldn’t hear about it. But the problem with that idea is the closing of the loop that I mentioned: the uploader can compare his upload with a later download. If they are the same, then the torrent is clean, due to its self-checking robustness and the diverse nature of a swarm.

If they HAVE been detected in such a scenario then I’d be interested to hear about it.

No idea. I have no special information on that. It’s good to keep such threat models in mind in any case.

[quote]If a clearnet website also has an onion address, it might be an insurmountable problem for the NSA to serve up fake pages without being detected.[/quote]
If those are running on the same server, they can simply replace them on the server. Then the same version will be served over https and onion.

See also Download Security - Kicksecure about where servers are hosted and the feasibility of hosting them at home. Maybe self-hosting is something worth researching. I haven’t heard of any bigger project that is self-hosted at home. It might be possible to get sufficient upload bandwidth and a static IP. It would also be advisable to have a separate IP for the server.

Just did a quick lookup of prices. You need a business contract and to be honest about what you’re up to. Otherwise they can terminate the contract and/or throttle the connection.

We would need something greater than (or at the very least) 2048 kbit/s of upload in order not to make the server a bottleneck.

We’re at ~2.58 Mbit/s max, perhaps ~1.5 Mbit/s average. It will likely increase in the future when we install Varnish to speed up the wiki.
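As a sanity check on those numbers, some back-of-the-envelope transfer-time math (Python; the 1.5 GiB image size is a hypothetical figure for illustration):

```python
def transfer_hours(size_bytes: int, uplink_kbit_s: float) -> float:
    """Hours to push one file at full uplink speed (no protocol overhead)."""
    bits = size_bytes * 8
    seconds = bits / (uplink_kbit_s * 1000)
    return seconds / 3600

image = int(1.5 * 1024**3)            # hypothetical 1.5 GiB image, in bytes
hours = transfer_hours(image, 2048)   # roughly 1.75 hours per full download
```

So at 2048 kbit/s, a single full download already takes close to two hours, and a couple of concurrent downloaders would saturate the line.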

For DSL, I don’t know if it would be advisable to use anything besides Deutsche Telekom, for stability reasons. To put it carefully (and under the protection of freedom of expression): in case of a failure, other providers reportedly have difficulties getting it fixed, because the quasi-monopolist Deutsche Telekom, let’s say, isn’t too eager to cooperate, I heard.

According to that page, their best offer is:
T-DSL Business 2000 symm.
415.31 EUR setup cost
165.41 EUR monthly

There is also cable. And availability matters, of course. Plenty of stuff to research. But it looks like it’s out of my price class anyhow.

Some day it might make sense. But then again: why should I assume the NSA and friends haven’t hacked into the server, using a zero-day or physical access while I was away?

Not that unlikely?

Criminal example:
Police cannot magically fix the mess that went wrong during upbringing.

Whonix example:
I cannot fix, or even work around, failed societies, illegitimate states, mass surveillance, secret agencies with billions in funding and thousands of staff, indifference of the people, and so forth with mere software / physical security.

What seems more worthwhile to work on than self-hosting at home is verifiable builds:

Help welcome:
https://phabricator.whonix.org/tag/verifiable_builds/

Thank you for taking the time to reply in full to my comments.

From what you say, it seems that Germany is not typical.

I’ve had a fixed IP address on my business-class ADSL connection for ten years or so, for a total of about 45 Euro per month, 100GB limit (most of the charge is the broadband, with a small surcharge for the fixed IP).

I’ve read my ADSL provider’s terms of service and it contains no prohibition of running a web server. And if it’s a small traffic one then it should be no problem.

The upload speed is currently 300-400 Kbps (typical for ADSL) but I’m expecting FTTC (fiber-to-the-cabinet) to get to my exchange in the next 12 months, which has a promised (minimum!) upload speed of 2 Mbps.

But in any case, my plan is to host only .torrent files and signatures there, all small files, with bulk files such as ISO images being distributed via BitTorrent. A website, for information and discussion, will be on a regular hosting service. The website will have links to the torrent and signature files on my home-office computer. The environment is secure, in various ways, and I can’t be threatened.

EDIT: In view of my description of verifying the integrity of one’s own torrent-distributed ISO files and their signatures by downloading them anonymously and checking the downloads, it might seem superfluous to host the .torrent files and signatures locally. After all, surely a man-in-the-middle attack can be made whenever a visitor downloads such a (torrent or signature) file, no matter where it is hosted; it’s the “closing of the loop” that ensures their integrity, together with the users’ trust that I would do such checks. OK, but since it would cost me nothing extra to host them locally, I feel that it is worth it in order to add another layer of security.

EDIT1: I too have looked into reproducible builds, and it’s on a growing list of “to do” tasks. I’m a perfectionist who is aware of how short life is, so I’ll aim to do a Microsoft: get something out there that kind of works, even if rickety, insecure and crash-prone, but builds a userbase, and (unlike Microsoft) plaster it with warnings about it being experimental and not to be trusted, then plead for help from more competent people. :slight_smile: