Hey Patrick,
Thanks for the literature. I will most certainly consume it.
I suspect I am being overly cautious then. My interest in Tor is from an academic perspective, so I certainly could be overthinking this.
Just so I’m clear about my point regarding MTU: whilst the ICMP protocol allows the packet size to be scaled back dynamically in response to a Fragmentation Needed message, the data for the rest of that stream is then sent over TCP at the reduced size.
But I guess if none of the Tor Project literature suggests this is a concern, then I’m almost certainly overthinking it.
As for some thoughts on a fix…
If the MTU issue is on the Whonix user’s side (where the user is connecting from), then I would personally consider it best for the user to configure their system to work around it with a static MTU. That is likely beyond the knowledge of the average user, though, especially someone new to Linux, so perhaps automation is required to give those users a seamless connection.
[On that note, I feel a command could be run during the KVM set-up that ascertains the user’s maximum MTU size and configures it before the gateway is first run, which would alleviate the user’s problem.]
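As a rough illustration of what that provisioning step might look like, here is a hypothetical sketch that binary-searches for the largest ICMP payload that passes with the DF bit set, then derives the interface MTU from it. The probe target (1.1.1.1), interface name, and search bounds are all assumptions for the sake of the example, not anything Whonix currently uses:

```shell
#!/bin/sh
# Hypothetical sketch only: discover the largest ICMP payload that passes
# without fragmentation, then derive the MTU from it. The probe target,
# interface name, and search bounds are assumptions, not Whonix defaults.

probe() {
    # Returns 0 (success) if a payload of $1 bytes fits with DF set.
    ping -c 1 -W 2 -M do -s "$1" 1.1.1.1 >/dev/null 2>&1
}

find_max_payload() {
    # Binary search for the largest payload size that probe() accepts.
    lo=0
    hi=1472    # 1472 bytes payload + 28 bytes IP/ICMP headers = 1500 MTU
    while [ "$lo" -lt "$hi" ]; do
        mid=$(( (lo + hi + 1) / 2 ))
        if probe "$mid"; then
            lo=$mid
        else
            hi=$(( mid - 1 ))
        fi
    done
    echo "$lo"
}

# Example use (commented out: needs network access and root):
# mtu=$(( $(find_max_payload) + 28 ))
# ip link set dev eth0 mtu "$mtu"
```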
If the MTU issue is on the other end (the side the user is connecting to), which in this case would be a guard node, then it would require PMTUD to agree on the MTU size, or a fallback to trying another guard node. However, I suspect most (if not all) guard nodes are hosted on datacenter connections with a full 1500-byte MTU.
What I’ve looked at since my last post 8 days or so ago
I checked out the iptables rules that Tails is using and there is no ICMP allowed on INPUT, yet it’s able to connect just fine.
My thoughts are that Tails does not require this because users with a lower MTU have a router performing MSS clamping, which takes care of the MTU on the user’s network side without necessitating any ICMP.
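For reference, the usual MSS-clamping rule on such a router looks something like this (a generic sketch of the common technique, not anything taken from Tails or Whonix configuration):

```shell
# Generic MSS clamping as commonly done on a home router (illustrative only):
# rewrite the MSS option on forwarded TCP SYNs to match the discovered
# path MTU, so endpoints never send segments too large for the link.
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
    -j TCPMSS --clamp-mss-to-pmtu
```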
What I’ll do (still outstanding)
I’ll look to commit my conditional RELATED fix with the nftables rules to git.
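For context, the nftables counterpart of that conditional RELATED rule would look roughly like this (a sketch; it assumes an existing `inet filter` table with an `input` chain, which may not match Whonix’s actual ruleset layout):

```shell
# Rough nftables counterpart of the conditional RELATED ICMP rule.
# Assumes an existing "inet filter" table with an "input" chain.
nft add rule inet filter input icmp type destination-unreachable ct state related accept
```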
Can I ask: is there a confirmed way to get nftables running so I can test this?
What I’ll do (further testing)
I should have some time this coming weekend to confirm if this issue is specifically KVM related or if it also applies to a gateway running via VirtualBox and/or physical isolation on bare metal.
If it is purely an issue that users on KVM will experience then I think a solution to fix it during the KVM provisioning stage is best, but if it affects users across the board then a fix at the Whonix gateway level is likely to be more desirable.
I note that we now have two proposed ICMP solutions as well.
Solution 1:
iptables -I INPUT -p icmp --icmp-type destination-unreachable -m state --state RELATED -j ACCEPT
Solution 2:
iptables -I INPUT -p icmp --icmp-type fragmentation-needed -m state --state RELATED -j ACCEPT
I’m not immediately sure which is best, or whether both should be offered as optional settings with the choice left to the user. It’s worth noting that fragmentation-needed is a subtype of destination-unreachable (ICMP type 3, code 4), so Solution 1 accepts a superset of what Solution 2 does; Solution 2 is the stricter rule.
I’ll re-run tests on my end to ensure that both the destination-unreachable and fragmentation-needed rules do fix the issue that I found and reported here.