Guard sdwdate against MITM

Off-topic: I am interested in moving discussions to the next level. Many comments are just questions about things that aren't clear in the original proposal, or something is missing from the original proposal that can be fixed. So I was wondering if one (me) could make some comments, and then you improve the original proposal. If these comments are covered and there is no dissent, the previous discussion can be deleted.

On-topic:

I haven't fully thought this through yet.

[quote]- Fetching consensus data is done over Tor; ideally this can be done multiple times over different exits and the results compared, so as to stop an exit's ISP from doing the attack described above.[/quote]

I think there is a false assumption about MITMs here.

user <-> Tor network <-> Tor exit <-> ISP of Tor exit <-> ISP of destination server <-> destination server

If you keep changing Tor exits, you make a MITM less likely only for a fraction of that route. Marking that boundary with a "|":

user <-> Tor network <-> Tor exit <-> ISP of Tor exit | ISP of destination server <-> destination server

So.

MITM less likely | MITM equally likely

I am not sure whether you were aware of this at the time of writing, or whether it would change the proposal.

By the way, don't trash sdwdate too fast. When we implement a host operating system or Whonix host additions (https://github.com/Whonix/Whonix/issues/39), we're back at the question of how to safely replace unauthenticated NTP. Not sure how much this changes your proposal.

[hr]

Set clock from verified Tor consensus only or also from unverified Tor consensus?

When only using a verified Tor consensus, there is only a limited number of replay attacks a MITM could do, because only a limited number of Tor consensuses have ever been signed.

How many different verified Tor consensus documents does a MITM have at its disposal? The consensus is renewed every few hours, and replaying is possible for months. And each consensus contains a different date/time. I am wondering what would happen if a MITM in the position of a directory authority's ISP made a different replay attack (sending a different version of the Tor consensus) for each and every request, and whether that could be used to single out users somehow.

Using an unverified Tor consensus would make this worse: an adversary could come up with an unlimited number of Tor consensus date/time combinations.
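To put rough numbers on that difference, here is a back-of-the-envelope sketch (both figures are illustrative assumptions derived from the wording above, not measured values):

[code]
# Rough size of the replay set a MITM has to work with when only
# verified consensuses are accepted. Illustrative assumptions only.
consensuses_per_day = 8        # "renewed every few hours" -> roughly 8 signed documents a day
replay_window_days = 90        # "replaying is possible for months" -> assume ~3 months

verified_replay_set = consensuses_per_day * replay_window_days
print(verified_replay_set)     # ~720 distinct signed date/time values available for replay

# With an unverified consensus the attacker is not limited to signed
# documents at all, so the set of forgeable date/time values is unbounded.
[/code]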

[hr]

[quote]- Let's assume that the host's clock falls within the accurate range that allows Tor to connect, and also that (Linux) users are following best practice by disabling timestamps on the host and not running anything on there. Because really, if they are not doing this, they are defeating a large part of the protection a hypervisor provides.[/quote]

Missing here: users also run non-torified applications, such as browsers with JavaScript, on the host, which leaks the clock.

Also, this does not take into account the "security discussion" from the Tails time syncing design:
https://tails.boum.org/contribute/design/Time_syncing/#index1h2

In essence, without sdwdate, both bridge/non-bridge users could be tricked into using an up to ~7 days old Tor consensus.

[quote]Off-topic: I am interested in moving discussions to the next level. Many comments are just questions about things that aren't clear in the original proposal, or something is missing from the original proposal that can be fixed. So I was wondering if one (me) could make some comments, and then you improve the original proposal. If these comments are covered and there is no dissent, the previous discussion can be deleted.[/quote]

Thank you for being open-minded about this and willing to hear more. The only reason I've reopened this discussion is that I'm confident there is something new here. I will re-post the last conclusion I reached and start speaking from that position:

Actually, you know what, I think that anondate makes sense only on the workstation. The gateway doesn't need sdwdate/anondate. Sorry for the confusion; I'm sharing my thoughts as they develop. There are two main points being discussed.
anondate can safely be used as the only trusted source, making it a great backstop in the event that something on the workstation (and also on the host) is leaking timestamps. The security pitfalls of using it in the previous design are avoided by rethinking how, and on which VM, it is used.

The gateway isn't leaking anything by using only the host's time to bootstrap off of, as recommended. There is no danger in using the host's time. If we decide to allow it to read the host's wall clock, then no gateway restart will even be needed, because time is always synced and not allowed to drift. anondate, or any other form of time syncing, would not be needed on the gateway anymore. And the only place where it is used (the workstation) will have a trusted source with fresh data being fetched at all times, with none of the fingerprinting or stale-data attack implications that sending requests in the clear entailed, like the gateway design problems you pointed out before.

[quote]user <-> Tor network <-> Tor exit <-> ISP of Tor exit <-> ISP of destination server <-> destination server

If you keep changing Tor exits, you make a MITM less likely only for a fraction of that route. Marking that boundary with a "|":[/quote]

Alright, so in the situation where anondate is depended on for use in just the workstation, the damage the attacker can do by feeding an outdated consensus is limited to getting a false time. It would not affect or deny service to the gateway in any form.

As in the current model of sdwdate, getting a consensus on the consensus data (pardon the pun) curbs the damage an attacker can do, because a certain time interval is not used unless, say, some number of requests X out of Y total to a directory authority agree.

Come to think of it, these traffic requests by anondate are actually no longer distinguishable from those of any other Tor client out there.

The fact that the gateway is using accurate host time means that no problems with connections to the hidden service server will happen.

[quote]By the way, don't trash sdwdate too fast. When we implement a host operating system or Whonix host additions (https://github.com/Whonix/Whonix/issues/39), we're back at the question of how to safely replace unauthenticated NTP. Not sure how much this changes your proposal.[/quote]

I'm sorry if I gave you that impression. It is not my intention to trash it, and given the amount of effort you've taken in designing and maintaining it, I should have made that clearer in my comments. I am examining the current Whonix design in light of recent developments (for example, disabled timestamps) and what they allow us to improve.

[quote]Set clock from verified Tor consensus only or also from unverified Tor consensus?

When only using a verified Tor consensus, there is only a limited number of replay attacks a MITM could do, because only a limited number of Tor consensuses have ever been signed.[/quote]

Verified only; there is no reason to do otherwise.

[quote]How many different verified Tor consensus documents does a MITM have at its disposal? The consensus is renewed every few hours, and replaying is possible for months. And each consensus contains a different date/time. I am wondering what would happen if a MITM in the position of a directory authority's ISP made a different replay attack (sending a different version of the Tor consensus) for each and every request, and whether that could be used to single out users somehow.[/quote]

We can use a sample size of 5-10, or something similar to the number of requests sdwdate makes at the moment. They may differ, but there is a certain widely agreed-upon range at any given time. We can calibrate against this range and set the clock somewhere in the middle of the average of all values fetched. The solution to avoiding bad ISPs is to diversify connections to directory authorities through different circuits/exit nodes.
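A minimal Python sketch of that aggregation idea, and of the X-out-of-Y agreement mentioned earlier (the 3-of-5 quorum, the 30-minute tolerance, and the way the samples are obtained are illustrative assumptions, not sdwdate's actual parameters):

[code]
from datetime import timedelta

TOLERANCE = timedelta(minutes=30)   # how closely the fetched times must agree (illustrative)
QUORUM = 3                          # e.g. 3 out of 5 fetches must fall within the tolerance

def agreed_time(samples):
    """samples: datetimes extracted from consensus fetches over different circuits/exits.
    Returns a clock estimate only if enough of them agree, otherwise None."""
    if not samples:
        return None
    samples = sorted(samples)
    middle = samples[len(samples) // 2]                    # median-ish pick, robust to a few bad exits
    agreeing = [t for t in samples if abs(t - middle) <= TOLERANCE]
    if len(agreeing) < QUORUM:
        return None                                        # no quorum: refuse to touch the clock
    return agreeing[len(agreeing) // 2]                    # middle of the agreeing values
[/code]

Refusing to set the clock when there is no quorum is the conservative choice: a single lying exit then degrades into a failed run rather than a wrong clock.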

For your Whonix Host implementation, it's best to ship it with NTP disabled and no time syncing mechanism. Now before you dismiss it, here's why:

  • Computer BIOSes have their own internal clock, which allows computers to keep track of time across reboots instead of resetting it.

  • A time syncing mechanism is not really needed. All we care about is that the host time is roughly accurate (set manually if needed), enough to enable Tor to connect.

  • A time syncing mechanism on the host entails a risk of remote attack, whatever it is using.

  • anondate on the host is not needed and would probably have more cons than pros: it would allow a local ISP-level adversary to influence time.

  • Consider this situation: the Whonix Host is being used to communicate with the clearnet, and a user runs software with some obscure protocol that leaks time on the host and in the workstation. Here's what happens if we have the Whonix VMs set up as (Whonix Host: no time syncing and NTP disabled; gateway: uses kvmclock; workstation: uses anondate):

  1. The gateway can connect to Tor with no problems. It doesn't leak anything, so it doesn't need a time syncing mechanism.

  2. anondate on the workstation prevents users from shooting themselves in the foot when running software in the manner discussed above, as the guest's time is set differently from the host's.

  3. From the Whonix Host perspective: no denial-of-service attack on Tor due to an outdated consensus being fed, and no replay attacks either. No remote exploit of any time syncing client, whether secure or not.

[quote]Also, this does not take into account the "security discussion" from the Tails time syncing design: https://tails.boum.org/contribute/design/Time_syncing/#index1h2

In essence, without sdwdate, both bridge/non-bridge users could be tricked into using an up to ~7 days old Tor consensus.[/quote]

We are not using anondate on the gateway, so I would say this attack is not really relevant to Whonix specifically. A malicious bridge is a problem for any Tor user. This changes the discussion from: setting time on the workstation in a secure manner, to: how do we protect against a malicious bridge?

An interesting problem, but another one entirely.

[quote]
Also this does not take into account "security discussion" from Tails Time synching design:
https://tails.boum.org/contribute/design/Time_syncing/#index1h2

In essence, without sdwdate, both bridge/non-bridge users could be tricked into using an up to ~7 days old Tor consensus.[/quote]

[quote]We are not using anondate on the gateway, so I would say this attack is not really relevant to Whonix specifically. A malicious bridge is a problem for any Tor user. This changes the discussion from: setting time on the workstation in a secure manner, to: how do we protect against a malicious bridge?

An interesting problem, but another one entirely.[/quote]

Actually, this is a very interesting problem. If anything, Whonix is in a unique position to provide more protection against a malicious bridge than any one-level anonymity platform can. Now I would modify the proposal to include anondate on the Whonix gateway, not as a way to set time, but as a sanity check for consensus data fetched from a bridge. It would fetch its data over Tor, of course.
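A sketch of that sanity check (the valid-after field comes from the consensus format; the three-hour threshold is an arbitrary illustration):

[code]
from datetime import timedelta

MAX_LAG = timedelta(hours=3)   # arbitrary threshold, roughly one consensus lifetime

def bridge_fed_stale_consensus(bridge_valid_after, torified_valid_after):
    """Compare the valid-after time of the consensus Tor got via the bridge
    against one fetched independently over a Tor circuit (both datetimes)."""
    return torified_valid_after - bridge_valid_after > MAX_LAG
[/code]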

[quote]Actually, you know what, I think that anondate makes sense only on the workstation.[/quote]
No sdwdate on the workstation → using anondate → end up with a Whonix-specific web fingerprint (Tor Browser / JavaScript leaks it).
[quote]The gateway doesn't need sdwdate/anondate.[/quote]

[quote]There is no danger in using the host's time. If we decide to allow it to read the host's wall clock, then no gateway restart will even be needed, because time is always synced and not allowed to drift.[/quote]


No sdwdate on the gateway and using host time → host time could be forged → could end up using an up to ~7 days old consensus.

[hr]

Please move Whonix host time discussion into a separate thread or things get muddled up here.

[hr]

Either I am confused, or I am somehow still not convinced this is a good idea. Maybe we'll figure it out.

But what about this… It can be done in parallel to this discussion…
Whonix sdwdate and Tails htpdate are not much different in this regard.
Neither are Whonix anondate or Tails tordate.
So could you wipe any reference to Whonix and/or VMs from your proposal? Completely forget about Whonix for a moment? Think like a Tails dev? Then propose this to Tails developers?
Before making a design change as grave as this, it would be useful to read their opinion either way.
And with a Tails-specific proposal for improvement, they may think this through as well, so we're not overlooking something.

Sounds good. I'll do it later, when I have the best approach for this figured out. It's not quite there yet, but keep an eye on this.

After how much the Tails devs have contributed to us, as you mention, I would like to give something back to them.

I was likely wrong about something.

You wrote:

[quote]Multiple fetches over different exits are a good idea as a way to guard against malicious exit IPs and to have a consensus on what the consensus should look like.[/quote]

Tails design:

[quote]If not using a bridge: Tails starts without a cached consensus, so its Tor client starts by connecting directly to a directory authority (and not to a directory mirror / entry guard), so feeding you an old consensus requires the attacker either to break SSL, or to control the directory authority your Tor client connects to.[/quote]

So if Tor, when not using a bridge, and/or python-stem [as a torified application] uses SSL to download the consensus, replay attacks aren't possible. And since Tor doesn't use CAs, downloading it multiple times seems not justified. What do you think?
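For reference, a minimal python-stem sketch that downloads the current consensus and reads its signed validity window (this assumes stem's remote descriptor API; routing the fetch through Tor and checking the directory signatures are left out here):

[code]
from stem.descriptor import DocumentHandler
from stem.descriptor.remote import DescriptorDownloader

# Fetch the full network status document, not just the router entries.
downloader = DescriptorDownloader()
consensus = downloader.get_consensus(document_handler=DocumentHandler.DOCUMENT).run()[0]

# The signed validity window is what a time-setting tool would consume.
print('valid-after:', consensus.valid_after)
print('fresh-until:', consensus.fresh_until)
print('valid-until:', consensus.valid_until)
[/code]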

[quote]So if Tor, when not using a bridge, and/or python-stem [as a torified application] uses SSL to download the consensus, replay attacks aren't possible. And since Tor doesn't use CAs, downloading it multiple times seems not justified. What do you think?[/quote]

Yes that is the logical conclusion.

We should probably combine the consensus with the more accurate hidden service descriptors and use them instead of CA SSL, on both the gateway and the workstation.

Edit: corrected page number

We should discourage users from running NTP on the host, because at best it leaks the host's time through timestamps (page 3), and at worst it's a vehicle for remotely exploiting the client's code and skewing time, which could deny Tor service.

[quote]We should probably combine the consensus with the more accurate hidden service descriptors and use them instead of CA SSL, on both the gateway and the workstation.[/quote]

I will backtrack from that position once more:

- When not using kvmclock on the gateway and the gateway clock is incorrect after a suspend, the only viable option is to fetch a verified consensus in the clear and set the clock from that, because Tor logically wouldn't be able to connect in this situation.

- By setting sdwdate to use only a verified consensus, we make it safe to fetch the data in the clear, meaning that not even replay attacks are possible from the ISP. (SSL and directory authority keys make it impossible, as you brought up.)

- The only problem with this approach is its fingerprintable nature, because sdwdate would need to do this on a regular basis if the user suspends their machine.


  • Because it's better and safer to disable NTP on the host, setting the gateway to use kvmclock instead and avoiding sdwdate on the gateway should improve things in a way.

tl;dr

A secure but fingerprintable technique to start Tor (because we aren't using kvmclock) vs. using kvmclock and disabling NTP, which isn't so bad given the potential cons of keeping NTP running.

This needs research, but it may turn out that Tor clients regularly download consensus data from the Directory authorities.

That would make it an okay approach: set an acceptable time for Tor to connect from the verified consensus, then adjust it to be more accurate using HS descriptors.
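A sketch of that first step (the HS descriptor refinement is left out; assuming the verified consensus we hold really is the current one, picking the middle of its freshness window bounds the clock error by half of that window):

[code]
from datetime import datetime

def coarse_time_from_consensus(valid_after, fresh_until):
    """Rough clock estimate from a verified consensus: the middle of its
    freshness window. Good enough for Tor to bootstrap; refined later."""
    return valid_after + (fresh_until - valid_after) / 2

# Example: a one-hour freshness window gives at most ~30 minutes of error.
print(coarse_time_from_consensus(datetime(2014, 8, 1, 12, 0),
                                 datetime(2014, 8, 1, 13, 0)))
[/code]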

[quote]Does Tor download consensus data regularly after first use?
yes
arma: Does using verified consensus mean that replay attacks are not possible?
your clock has to be somewhat right. in that case yes.
ok ty[/quote]

It should be safe to go down the anondate + HS route. It wasn’t an unusual reaction that could be fingerprinted after all.

Please keep working on this. But it seems there is quite a lot more to do.

We need some more opinions about how (not) bad it would be to use an up to ~18 hours old Tor consensus. And more.

Roadmap:

  1. Are you still up to do the older Tails-only specific proposal (https://www.whonix.org/forum/index.php/topic,470.msg3880.html#msg3880)?
  2. Would it be possible to make this new one (anondate + HS route) a Tails-only specific proposal? Would you be up for this as well?
  3. Can we prepare a post for tor-talk to ask, as Whonix-agnostic as possible, the "extract time from Hidden Service Descriptors" question: whether https://github.com/Whonix/Whonix/issues/318 could work as imagined and whether it would be sane?
  4. Can we prepare a question for tor-talk to ask, as Whonix-agnostic as possible, how (not) bad it would be to use an up to ~18 hours old Tor consensus?
[quote]It wasn't an unusual reaction that could be fingerprinted after all.[/quote]
It would be useful to understand why Tails' tordate is fingerprintable: because when Tails starts while the clock is too far off, it sets the system clock to what tordate thinks, then restarts Tor and re-downloads the consensus.

Off-topic:
I would like to keep the scope of a single thread as narrow as possible, to prevent muddling up too many points and forgetting about something. I have opened two topics related to NTP.

[quote]1. Are you still up to do the older Tails-only specific proposal (https://www.whonix.org/forum/index.php/topic,470.msg3880.html#msg3880)?
2. Would it be possible to make this new one (anondate + HS route) a Tails-only specific proposal? Would you be up for this as well?
3. Can we prepare a post for tor-talk to ask, as Whonix-agnostic as possible, the "extract time from Hidden Service Descriptors" question: whether https://github.com/Whonix/Whonix/issues/318 could work as imagined and whether it would be sane?
4. Can we prepare a question for tor-talk to ask, as Whonix-agnostic as possible, how (not) bad it would be to use an up to ~18 hours old Tor consensus?[/quote]
  1. No, I have improved it considerably since then.
  2. Sure. This is my plan, to have something better that we can both benefit from. I am up for writing a proposal here first, which you can vet before I post it. I am almost done addressing the few quirks you brought up.
  3. Yes, it's important that this is discussed with them too. I will write something / clean up what I wrote there and ask them.
  4. Yes, that too.
[quote]It would be useful to understand why Tails' tordate is fingerprintable: because when Tails starts while the clock is too far off, it sets the system clock to what tordate thinks, then restarts Tor and re-downloads the consensus.[/quote]

Right, this is a signalling pattern that is being fingerprinted.

The signal is Tor's failed connection attempt: time on the guest is virtually frozen at that moment during a suspend, because a guest by design cannot know anything about host power events.

The solution:

Have some way (for example systemd, or something similar if Debian stable doesn't have it yet) to constantly monitor the Tor process's connection, and if it detects any drops, have it block Tor from reconnection attempts until sdwdate fetches a fresh valid consensus and sets the time appropriately.

The drop, in our use case, will be caused by suspending or shutting down the machine.
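A rough sketch of that idea using Tor's control port via stem (the control port number, the polling interval, and the sdwdate-clock-jump helper are assumptions for illustration, not an existing implementation):

[code]
import subprocess
import time

from stem.control import Controller

# Watch Tor's own view of network liveness; on a drop (e.g. after a host
# suspend), keep Tor offline until the clock has been set again.
with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    while True:
        if controller.get_info('network-liveness', 'up') == 'down':
            controller.set_conf('DisableNetwork', '1')    # block reconnection attempts
            subprocess.run(['sdwdate-clock-jump'])        # hypothetical clock re-sync step
            controller.set_conf('DisableNetwork', '0')    # let Tor reconnect with a sane clock
        time.sleep(10)
[/code]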

Posted it on git to keep things from being buried here.