Kicksecure Network Configuration

Let me know when you have a buildable branch with these changes. New releases are overdue with Tor’s new DoS fixes.

1 Like

Maybe this could help you; this is a list of the required dependencies and recommendations for this exact package on an Ubuntu system I maintain:

network-manager-gnome (requirements and recommendations)

Required Dependencies:
libappindicator3-1
libatk1.0-0
libc6
libcairo2
libgdk-pixbuf2.0-0
libglib2.0-0
libgtk-3-0
libjansson4 (various versions depending on distro)
libmm-glib0 (various versions depending on distro)
libnm0
libnma0
libnotify4
libpango-1.0-0
libpangocairo-1.0-0
libsecret
libselinux
dconf-gsettings-backend | gsettings-backend
network-manager
policykit-1-gnome
dbus-session-bus

Recommends:
notification-daemon
gnome-keyring
mobile-broadband-provider-info
iso-codes
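
For reference, a list like this can be regenerated on any Debian-based system with apt-cache (same package name as above; the output format differs slightly from the hand-written list):

apt-cache depends network-manager-gnome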

2 Likes

dhcpcanon systemd unit fails at boot due to missing debhelper apparmor integration
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=956626

2 Likes

Will remove dhcpcanon because it is broken anyhow, and there has been no reaction from upstream.

Also, dhcpcanon is not integrated with NetworkManager and therefore not used anyhow.


Will also remove the ifupdown configuration file /etc/network/interfaces.d/30_kicksecure because we’re now using NetworkManager, and having that config file would make eth0 unmanaged by NetworkManager.
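
For context, a minimal sketch of the kind of ifupdown stanza involved (the exact contents of 30_kicksecure are assumed here for illustration only); with Debian’s default ifupdown plugin configuration, NetworkManager treats any interface declared this way as unmanaged:

## assumed illustrative contents of /etc/network/interfaces.d/30_kicksecure
auto eth0
iface eth0 inet dhcp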

If anyone has better ideas for Kicksecure host network configuration let me know.


related:

What about?


Needed for Wi-Fi support.


For the DHCP client it comes down to dhclient vs. dhcpcd.

Per [Solved] Confused about dhclient and dhcpcd / Networking, Server, and Protection / Arch Linux Forums, NetworkManager uses dhclient by default to manage DHCP leases, so this is what we should use.

1 Like

dpkg -l | grep dhc shows no packages, and allegedly network-manager includes its own internal DHCP client. Therefore, not changing any packages.
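
For anyone who wants to verify or pin this: NetworkManager’s DHCP backend is selected by the dhcp= key in NetworkManager.conf, and the backend actually in use shows up in the DHCP-related log lines. A small sketch:

# show which DHCP backend NetworkManager used on this boot
journalctl -b -u NetworkManager | grep -i dhcp

# /etc/NetworkManager/NetworkManager.conf -- pin the built-in client explicitly
# (valid values for this key: dhclient, dhcpcd, internal)
[main]
dhcp=internal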

There appear to be several areas where we could improve networking performance across all systems without compromising security (and perhaps even increasing it).

This is based on including the subsections “Increasing the size of the receive queue” through “Enable MTU probing” from the Arch wiki:
https://wiki.archlinux.org/title/sysctl#Improving_performance

Some of these will demand more CPU and memory usage on high bandwidth systems.

Additionally, the “Tweak the pending connection handling” section highlights how the system can be made more robust to DoS attacks in combination with kernel hardening settings we are already using.
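
As a rough illustration of what that section tunes (the final values appear further down this thread), a larger backlog of half-open connections works together with SYN cookies, which are already enabled by default on Debian kernels:

## SYN cookies kick in once the SYN backlog overflows (Debian default, shown for context)
net.ipv4.tcp_syncookies=1
## allow more half-open connections before that happens
net.ipv4.tcp_max_syn_backlog=8192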

I have been using these parameters as well as the defaults and have noticed some decrease in latency.

Thoughts on including any of them?

1 Like

I guess if this is primarily for better performance, it would be best kept outside security-misc and placed in a separate package, performance-misc or so.

1 Like

Yes, the primary benefit would be network performance, especially for Whonix users given the slowness of Tor. People using typical VPNs may also benefit from the increased UDP limits (as these services scarcely use TCP, at least by default).

The added protections against DoS attacks would also be quite effective as they are going to be combined with existing sysctl parameters in security-misc.

I have committed the changes on a local branch shown below:

Two possible strategies moving forward are:

  1. Create performance-misc and include these changes, or
  2. If the creation of performance-misc is premature, merge into security-misc for now and separate the changes later.

For Kicksecure there isn’t any network-level fingerprinting concern.

For Whonix there is, but the other security hardening probably already worsens the network fingerprint at the ISP level.

Probably a lot better.

Since performance isn’t related to security. Also, security-misc is already doing lots of things, which risks breaking corner cases; mixing in even more would make it harder to debug.

1 Like

It’s not fingerprinting that concerns me, but the security implications, which are still valid for Kicksecure.

Disabling MTU probing was necessary, along with some other TCP options, to mitigate the SACK vulnerability. You might say that it is fixed and water under the bridge, but the argument still stands that the less functionality turned on in the kernel, the safer. Really, that’s the only thing we care about here, as long as performance is usable and acceptable.

If your system is prone to these TCP SACK PANIC vulnerabilities, you need to take quick action by disabling the vulnerable component. Alternatively, you can use iptables to drop connections whose MSS size can successfully exploit the vulnerability. The second is more effective as it mitigates the three vulnerabilities. To prevent connections with low MSS, use the following commands for traditional iptables firewalling (Note: You need to disable net.ipv4.tcp_mtu_probing for this fix to work effectively). This drops all connection attempts whose MSS size ranges between 1 and 500.
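
The commands themselves did not survive the quote; the widely published form of that low-MSS drop rule looks roughly like this (a sketch based on the public SACK Panic advisories, not something already shipped here):

# drop new connections advertising an MSS between 1 and 500 bytes, IPv4 and IPv6;
# as noted above, net.ipv4.tcp_mtu_probing must stay at 0 for this to be fully effective
iptables  -I INPUT -p tcp --tcp-flags SYN SYN -m tcpmss --mss 1:500 -j DROP
ip6tables -I INPUT -p tcp --tcp-flags SYN SYN -m tcpmss --mss 1:500 -j DROP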

There are other articles about TCP stack problems and mitigations. I think this should be incorporated into security-misc if not already.

2 Likes

Agreed, I will revert the enabling of MTU probing when an ICMP black hole is detected.
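
For clarity, the knob in question is net.ipv4.tcp_mtu_probing; the value I had proposed probes only after an ICMP black hole is detected, while the kernel default leaves probing off entirely, which is what the revert goes back to:

## 0 = disabled (kernel default, kept after the revert)
## 1 = probe only when an ICMP black hole is detected (the proposed value being dropped)
## 2 = always probe
net.ipv4.tcp_mtu_probing=0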

However, other proposed settings relating to connection handling would harden against DoS attacks.

Regarding networking performance being usable and acceptable: I believe that is very subjective, since many people do not have gigabit connections. Speedtest’s global median fixed broadband speed in August 2022 is ~70 Mbps. Accounting for multiple other household devices/users, it is reasonable to assume the effective median is much lower.

Combining this with the slowness of Tor, I think providing end users the maximum possible networking speed is advantageous, provided zero security compromises are made. Excluding the MTU probing setting, the other proposed changes appear to only have a networking performance upside.

1 Like

The list proposed in sysctl - ArchWiki is huge. One shouldn’t blindly turn options on and hope for the best. Some of the stuff on there, like TCP timestamps, is being recommended for security, but we know the downsides all too well. It will take a lot of time to research every parameter on there, such as Google’s BBR congestion control. Some of these may have security or anonymity consequences yet to be discovered because there isn’t enough interest or eyes on the code. We don’t have time at the moment to look into it.

1 Like

I understand, and yes, the list on the Arch wiki is huge. However, I am not at all suggesting that all the sysctls should be copy-pasted, especially not enabling TCP timestamps or BBR.

Instead, what I was proposing (and committed) was a far smaller list of curated inclusions. These largely involve increasing total connection limits and allocated memory while making TCP connections more robust. Due to time limitations, though, this topic can certainly be revisited at a later date.

1 Like

Posted on social media too.

1 Like

Decided to close the GitHub draft PR.

The additions to the README.md are:

## Network optimisation

* Increases the size of the receive queue and the maximum number of connections.

* Increases memory dedicated to the network interfaces.

* Raises the default UDP limits.

* Enables TCP Fast Open to reduce network latency.

* Raises the maximum number of pending connections in order to be more resistant to simple DoS attacks.

* Disables TCP slow start after idle.

* Reduces the TCP keepalive time.

Creation of the file etc/sysctl.d/30_network-opt.conf containing:

## Copyright (C) 2019 - 2023 ENCRYPTED SUPPORT LP <adrelanos@whonix.org>
## See the file COPYING for copying conditions.

## Improvements in networking performance largely based on Arch's recommendations
## https://wiki.archlinux.org/title/sysctl#Improving_performance

## Increasing the size of the receive queue
net.core.netdev_max_backlog=16384

## Increase the maximum number of connections
net.core.somaxconn=8192

## Increase memory dedicated to the network interfaces
## Set maximum socket buffer size (in bytes) to 16 MB
## These settings are for extremely fast connections and likely allocate excessive memory for typical networks
## https://blog.cloudflare.com/the-story-of-one-latency-spike/
## https://github.com/redhat-performance/tuned/blob/master/profiles/network-throughput/tuned.conf#L10
## https://nateware.com/2013/04/06/linux-network-tuning-for-2013/
net.core.rmem_default=1048576
net.core.rmem_max=16777216
net.core.wmem_default=1048576
net.core.wmem_max=16777216
net.core.optmem_max=65536
net.ipv4.tcp_rmem=4096 1048576 2097152
net.ipv4.tcp_wmem=4096 65536 16777216

## Increase the default UDP limits
net.ipv4.udp_rmem_min=8192
net.ipv4.udp_wmem_min=8192

## Enable TCP Fast Open for both incoming and outgoing connections
## https://www.keycdn.com/support/tcp-fast-open
net.ipv4.tcp_fastopen=3

## Raise maximum queue length of pending connections
net.ipv4.tcp_max_syn_backlog=8192

## Raise maximum number of sockets in TIME_WAIT state
net.ipv4.tcp_max_tw_buckets=2000000

## Let TCP reuse an existing connection in the TIME-WAIT state
net.ipv4.tcp_tw_reuse=1

## Seconds to wait for a final FIN packet before the socket is forcibly closed
net.ipv4.tcp_fin_timeout=10

## Disable TCP slow start after idle
## https://en.wikipedia.org/wiki/TCP_congestion_control#Slow_start
net.ipv4.tcp_slow_start_after_idle=0

## Change TCP keepalive parameters
## Reduces the TCP keepalive period from 2 hours to 2 minutes
## https://en.wikipedia.org/wiki/Keepalive#TCP_keepalive
net.ipv4.tcp_keepalive_time=60
net.ipv4.tcp_keepalive_intvl=10
net.ipv4.tcp_keepalive_probes=6
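
If anyone wants to test the drop-in before it lands in a package, it can be applied and spot-checked without a reboot (file path as above):

# reload all sysctl drop-ins, including /etc/sysctl.d/30_network-opt.conf
sudo sysctl --system
# spot-check a couple of the new values
sysctl net.core.somaxconn net.ipv4.tcp_fastopen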
1 Like