Onionizing Qubes-Whonix Repositories

Free disk space and traffic, yes.

Technical implementation is not super simple. It might require support from Qubes, e.g. them providing rsync access or similar. Then we could add a brain-dead script that downloads from yum.qubes-os.org to the whonix.org server over an unencrypted connection (sorry, that is still how rsync and mirroring work nowadays). If the domain name is supposed to include qubes-os.org, then further help from Qubes would be required (they would have to modify DNS to point to the whonix.org server), and I would likely need fortasse’s help on the whonix.org server side.
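For illustration, a minimal sketch of such a mirroring script, assuming rsync daemon access to ftp.qubes-os.org and a local destination of /srv/mirror/qubes (the module name and both paths are placeholders, not confirmed values):

#!/bin/sh
# Hypothetical one-way mirror pull; the rsync module name and paths are placeholders.
set -e
rsync --archive --delete --delay-updates \
    rsync://ftp.qubes-os.org/qubes-mirror/ \
    /srv/mirror/qubes/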


:unamused:

@marmarek, what do you think?


ohnoes… SPIES!!!

There is already an rsync service on ftp.qubes-os.org (which is the same host as yum.qubes-os.org), exactly for this purpose.

Repository metadata is authenticated anyway, so it shouldn’t be a problem.

I don’t understand. If that’s about the above-mentioned onion service, it shouldn’t have anything to do with the qubes-os.org or whonix.org domain, no?

One more possible problem - managing sources.list. Onion links need to be placed there, but the file is currently part of the qubes-core-agent package, which is a generic package, also for non-Whonix.


How often could the rsync script run? (Keeping the time the mirror lags
behind low while not flooding the Qubes server.)

marmarek:

[quote=“Patrick, post:13, topic:3221”] Technical implementation is
not super simple. It might require support from Qubes, e.g. them
providing rsync access or similar. [/quote]

There is already an rsync service on ftp.qubes-os.org (which is the same
host as yum.qubes-os.org), exactly for this purpose.

Great!

[quote=“Patrick, post:13, topic:3221”] Then we could add a brain-dead
script that downloads from yum.qubes-os.org to the whonix.org server over
an unencrypted connection (sorry, that is still how rsync and mirroring
work nowadays). [/quote]

Repository metadata is authenticated anyway, so it shouldn’t be a
problem.

Not a blocker, but here is why I brought that up:
Yes, repository metadata is authenticated. But with rsync we are taking
something from a “somewhat secure” source (HTTPS) and downloading it over
an insecure, unencrypted rsync transfer. It would be bad if, during that
unencrypted transfer, a MITM introduced a malicious modification that
later exploits the metadata verification code in dnf.

So I think that, very long term, an encrypted/authenticated replacement for
rsync is desirable. [No such project exists yet to my knowledge.]
Ideally, packages would be uploaded over a secure connection and then
downloaded by the user through an onion service. Then there are fewer
chances for a MITM to try to exploit the metadata verification code. (Only
on the upload and server side then.)

[quote=“Patrick, post:13, topic:3221”] If the domain name is
supposed to include qubes-os.org, then further help from Qubes would
be required (they would have to modify DNS to point to the whonix.org
server), and I would likely need fortasse’s help on the whonix.org
server side. [/quote]

I don’t understand. If that’s about the above-mentioned onion service, it
shouldn’t have anything to do with the qubes-os.org or whonix.org
domain, no?

Files would then be available through

If that sounds alright, then there is no issue.

One more possible problem - managing sources.list. Onion links need
to be placed there, but the file is currently part of the
qubes-core-agent package, which is a generic package, also for
non-Whonix.

That could be considered a follow-up task. For the context of this
Hardening Qubes[-Whonix] thread (which would document how to change
this), it is not a blocker.

Would it be cleaner to add another hidden service using http://qubesosmamapaxpa.onion/ or a newly generated http://qubesosxxxxxxxxx.onion?
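For reference, adding another onion service on the Qubes server side would roughly amount to a torrc stanza like the following; the hidden service directory and the local port the repository web server listens on are assumptions:

# Hypothetical torrc addition for a dedicated repository onion service.
# The directory path and local port 80 are placeholder values.
HiddenServiceDir /var/lib/tor/qubes_repo_onion/
HiddenServicePort 80 127.0.0.1:80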


Maybe. But also more work. And it depends on whether it should be considered a
mirror or an official location.

How often could the rsync script run? (Keeping the time the mirror lags
behind low while not flooding the Qubes server.)

Generally there is not much load there, but better to run it not more often
than once an hour.
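As an illustration, an hourly pull could be scheduled with a cron entry along these lines (the script path is a placeholder, e.g. pointing at the mirroring sketch above):

# Hypothetical crontab entry: pull the mirror once an hour, at a fixed minute.
17 * * * * /usr/local/bin/qubes-mirror-pull.sh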

Not a blocker, but here is why I brought that up:
Yes, repository metadata is authenticated. But with rsync we are taking
something from a “somewhat secure” source (HTTPS) and downloading it over
an insecure, unencrypted rsync transfer. It would be bad if, during that
unencrypted transfer, a MITM introduced a malicious modification that
later exploits the metadata verification code in dnf.

I think that’s a wrong assumption. Better to assume the server is always
compromised. HTTPS and/or Tor is only for privacy (the ISP can’t see what
updates you are downloading), but not for updates/metadata integrity.

Files would then be available through

If that sounds alright, then there is no issue.

I think that’s OK, if there is no conflict in directory names.

Additional benefits of doing updates via Tor, from a recent Tor blog post on apt-transport-tor:

Doing updates via Tor provides some really compelling security properties. One of the big benefits is that an attacker can’t target you for a malicious update: even if they manage to steal some package signing keys and break into the update server and replace packages, they still can’t tell that it’s you asking for the update. So if they want to be sure you get the malicious update, they’re forced to attack everybody who updates, which means a really noisy attack that is much more likely to get noticed. This anonymity goal is one of the main reasons that Tor Browser does its updates over Tor as well.

Yes, that’s right too. But it still does not depend on data integrity on
the server itself. Rather, it makes it harder for an attacker to provide
different data to selected users - so replaced packages can be spotted
even more easily.

Right. I was replying to the “only for privacy” part. I just thought it
was worth remembering (reminding myself, really) that there is a
security benefit to doing updates over Tor in addition to a privacy benefit.

Good that you say so, because I had in mind waiting only 1 to 3 minutes between checks.

Agreed.

“Better to assume the server is always compromised.” is a very good assumption.

Unencrypted/unauthenticated rsync also opens the door to “trolling attacks”. What if kkkkkkkkkk63ava6.onion ever delivers a malicious file? Then the investigation would have to cover:

    1. developer machine compromised?
    2. upload between developer machine and Qubes server compromised? (ssh has its weaknesses…)
    3. unauthenticated rsync between Qubes and Whonix server compromised?
    4. Whonix server compromised?

That would cost a lot of time to figure out. If there were an authenticated rsync alternative, I would very much be for it. Alternatively, if we could check the integrity on the server before delivering files to users, that would also be better than having them notice a broken signature (avoiding lots of support requests and concerns). I guess it falls into the “very far in the future” milestone.
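As a rough sketch of such a server-side check, assuming a Debian-style repository layout and placeholder paths and keyring (none of these are confirmed values), the mirror could verify the detached Release signature after the rsync pull and only publish the new snapshot when it passes:

# Hypothetical post-rsync integrity check; all paths and the keyring are placeholders.
gpgv --keyring /srv/keys/qubes-archive-keyring.gpg \
    /srv/mirror-staging/dists/stretch/Release.gpg \
    /srv/mirror-staging/dists/stretch/Release \
  && rsync --archive --delete /srv/mirror-staging/ /srv/mirror-live/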

An option for pulling files in an authenticated manner is something like rsync over rssh. I’m not sure what the implementation difficulty would be on the Qubes side; it may not be worth the effort.
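For illustration, a similar effect can be had without a full rssh setup by using a forced command in the mirror account’s authorized_keys on the Qubes side, restricting the key to read-only rsync of the repository tree (the key, the exported path, and the location of the rrsync helper are assumptions):

# Hypothetical ~/.ssh/authorized_keys entry on the Qubes server.
# rrsync is the restricted-rsync helper script shipped with rsync on many distributions.
command="/usr/bin/rrsync -ro /srv/qubes-repo",restrict ssh-ed25519 AAAA... mirror@whonix.org

The Whonix side would then pull over ssh with something like rsync --archive --delete mirror@ftp.qubes-os.org:/ /srv/mirror/qubes/, with the remote path interpreted relative to the restricted directory.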


Our wiki instructions to add “tor+” in front of the onion addresses for repositories are wrong, because you get a bunch of these error messages:

W: Failed to fetch tor+http://sgvtcaew4bxjd7ln.onion/dists/jessie/updates/non-free/binary-amd64/Packages Failed to connect to localhost port 9050: Connection refused

E: Some index files failed to download. They have been ignored, or old ones used instead.

Just pointing to the .onion mirrors without “tor+” works fine. So what’s the correct method?
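For reference, “just pointing to the .onion mirror” means a plain sources.list line along these lines; the components listed here are only an example, matching the path in the error above:

deb http://sgvtcaew4bxjd7ln.onion/ jessie/updates main contrib non-free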


Making tor+ work in Whonix 13 is not simple.

Any invocation of apt-get - in essence - results in torsocks apt-get for stream isolation. I.e. apt-get gets forced to use a Tor SocksPort. This conflicts with apt-transport-tor, which wants to connect to localhost 127.0.0.1 port 9050, and that connection gets blocked by torsocks. (torsocks blocks it by default, because when not using Whonix it is better to block it to avoid leaks.)

Running apt-get.anondist-orig would work for tor+, but that would break non-onion repository access on Whonix-Gateway, because that does not have system DNS.

tor+ needs to wait for Whonix 14, because that will be based on Debian stretch, which comes with a recent enough version of torsocks supporting AllowOutboundLocalhost 1, which will be enabled by the uwt package.

# Set Torsocks to allow outbound connections to the loopback interface.
# If set to 1, connect() will be allowed to be used to the loopback interface
# bypassing Tor. If set to 2, in addition to TCP connect(), UDP operations to
# the loopback interface will also be allowed, bypassing Tor. This option
# should not be used by most users. (Default: 0)
AllowOutboundLocalhost 1
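With that in place, the tor+ form of such an entry (via apt-transport-tor) would look roughly like this, reusing the onion address from the error message above as an example:

deb tor+http://sgvtcaew4bxjd7ln.onion/ jessie/updates main contrib non-free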

There is deb.sik5nlgfc5qylnnsr57qrbm64zbdx6t4lreyhpon3ychmxmiem7tioad.onion - should be fine to use? @fortasse

To move this forward… To fully onionize all default repositories that Whonix is using for Whonix 15…

Could someone please work on

? Should be a rather simple task.

That would be a prerequisite to get this implemented in Whonix.

Yes, that onion address should be fine.


Ticket created.
use onion sources list for apt-get updating by default
https://phabricator.whonix.org/T812


Status:

Due to flakiness of onion v3 this should be postponed.

A fix has been implemented and is expected to arrive in Tor 0.3.5 in December.


While we are at it… Should we separate the Whonix website onion and the Whonix repository onion? We would keep the Whonix website onion as is (even keeping it functional for apt-get so as not to break it for anyone), but the next upgrade of Whonix would move everyone else to another, fresh onion address.

Why?

  • separate website and apt-get downloads
  • future-proof with respect to future server load
  • we could move apt-get downloads to a different server when needed

Should we even create 2-10 or even 10-100 different onion domains and randomly assign Whonix users one? Why: load balancing. Why not: probably overkill. Debian manages without such gymnastics. But they don’t have onion v3 yet as far as I know.

Perhaps we’ll wait for onionbalance v3 support?

Research on this is low priority due to above status.

//cc @mig5


Given APT is simple HTTP, I think you could solve most load issues by letting Varnish cache the repo data. (We could purge the cache when pushing new data to the repo.) Or yes, we could look at onionbalance.
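A minimal sketch of what that Varnish configuration could look like, assuming the repository web server listens locally on port 8080 and purges are only allowed from localhost (all of these values are placeholders):

vcl 4.0;

# Hypothetical backend: the local web server holding the apt repository.
backend repo {
    .host = "127.0.0.1";
    .port = "8080";
}

acl purgers { "localhost"; }

sub vcl_recv {
    # Allow the publish script to purge objects after pushing new data.
    if (req.method == "PURGE") {
        if (client.ip !~ purgers) {
            return (synth(405, "Purging not allowed"));
        }
        return (purge);
    }
}

sub vcl_backend_response {
    # Package files are effectively immutable; metadata changes with every publish.
    if (bereq.url ~ "\.deb$") {
        set beresp.ttl = 7d;
    } else {
        set beresp.ttl = 5m;
    }
}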

So I think a single APT server is probably sufficient (it has been sufficient to date, right?), but I also agree with the rule of not putting all eggs in one basket. For example, long term I would advocate having a separate server for website/wiki/forum and one for the apt repo, also for security/attack-surface reasons (no need for a hack of MediaWiki to place the apt repo at risk). It’s the same ‘compartmentalisation by design’ principle that we appreciate in Whonix and Qubes OS. The only downside is commercial impact (more servers) and a little extra server maintenance (more of my time).

Random onion domains are probably overkill/administrative overhead indeed (I would rather use onionbalance when it’s available for v3). It would also then be harder for users to confirm with one another that they’re using a legitimate APT source; e.g. with 100 different onion domains it is hard to ‘google it’ and confirm you’re not using a malicious onion address (it is hard enough already to read just one long v3 onion address and recognise it as legitimate; Zooko’s triangle, etc.).
