Safer DHCP implementation [RESOLVED]

I got an idea when reading this thread:

http://forums.dds6qkxpwdeubwucdiaord2xgbbeyds25rbsgr73tbfpqpt4a6vjwsyd.onion/t/do-i-need-an-unique-lan-ip-for-the-each-cloned-workstation-if-i-dont-use-them-at-a-time/7496/2

@Patrick what if we include the DHCP server on the WS instead and configure its service to terminate if it detects an IP other than 10.152.152.11?

This avoids race conditions between multiple instances sharing the same internal LAN and provides a more secure place to put the daemon. It doesn’t break the security model either, because we document that the best way to isolate WSs is to put them on separate LANs, so even if the daemon is exploited that scenario is more or less expected.

TO-DO: Figure out if the GW can be made immune to dynamic assignment. I guess not running a DHCP client on it guarantees that?


Which DHCP client would be connecting to it? Another Whonix-Workstation?

Yes

Code would be something like this:
If the IP is 10.152.152.11 -> the DHCP server starts -> if the DHCP server starts, the DHCP client is prevented from running.
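A rough sketch of that logic in shell, assuming eth0 is the internal interface and using Debian’s isc-dhcp-server / dhclient as stand-ins (purely illustrative, not actual Whonix code):

#!/bin/bash
## Illustrative sketch only; the server/client names are assumptions.
## Decide the DHCP role from the IP already assigned to eth0.
current_ip="$(ip -4 -o addr show dev eth0 | awk '{print $4}' | cut -d/ -f1)"

if [ "$current_ip" = "10.152.152.11" ]; then
    ## Master workstation: run the DHCP server, never the client.
    systemctl start isc-dhcp-server.service
else
    ## Any other (or no) IP: obtain a lease as a DHCP client instead.
    dhclient eth0
fi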


Since all downloadable Whonix images are alike, the problem here is, how to detect in code which is the master Whonix-Workstation running the DHCP server?

Maybe the first Whonix-Workstation that starts and doesn’t find an already running DHCP server would start a DHCP server? Some room for race conditions. But ignoring race conditions for simplicity, that is easier said than implemented.

So setting up a multi workstation setup would still be a manual edit-some-config-file process? Otherwise - since all downloadable Whonix images are alike - how would there be any other IP than 10.152.152.11 after first boot without a manual change by the user?

A wholly different solution I had in mind would be Any-Workstation-VM → DHCP-Server-VM → Whonix-Gateway-VM. But developing that is also a lot of work.

Also, always good to ask: Why are we inventing something here? Why is this Whonix specific?
This is actually a missing (or existing in KVM?) virtualizer feature. Qubes instructions for multi ws are a lot simpler since Qubes automatically puts each ws into its own separate network and dynamically spins up a vif+ interface on Whonix-Gateway for each ws. Virtualizer feature request?

No, automated. A script would query the eth0 interface for its IP. The IP is assigned pretty early in a VM’s life, so it should be clear whether to start the server or the client without delaying connectivity too much.

Since it is based on introspection and is automated, it should be able to dynamically react to network conditions.

Overkill; it would run into the same questions this solves, except with more dev effort and resource use.

You mean for a network that looks like

WS1 ↔ GW ↔ WS2
where WS1 and WS2 cannot communicate but share the same GW?

Does the GW on Qubes have two different internally facing NICs in this case?

UPDATE:
Indeed the feature exists when one defines a DHCP setting for the virtual network. No need for the sorcery in this thread. I might test and document it, though I won’t enable it by default because it might increase theoretical attack surface.

@Patrick let me know if the selected DHCP range is OK.

(To be added to the wiki) Instructions on using DHCP with KVM:

sudo nano /etc/network/interfaces.d/30_non-qubes-whonix

Comment out:
auto eth0
iface eth0 inet static

Comment in:
auto eth0
iface eth0 inet dhcp
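
For illustration, the edited file would then look roughly like this (the static values shown are the usual documented Whonix-Workstation defaults; the real file may contain additional lines):

## static configuration commented out
#auto eth0
#iface eth0 inet static
#    address 10.152.152.11
#    netmask 255.255.192.0
#    gateway 10.152.152.10

## DHCP configuration commented in
auto eth0
iface eth0 inet dhcp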

Change internal network:
sudo virsh net-edit Whonix-Internal

<ip address='10.152.152.0' netmask='255.255.192.0'>
    <dhcp>
      <range start='10.152.128.1' end='10.152.191.254'/>
    </dhcp>
</ip>
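
(For reference: 255.255.192.0 is a /18, so this range covers essentially all usable hosts of 10.152.128.0/18 and also contains the existing static addresses 10.152.152.10 for the GW and 10.152.152.11 for the WS.)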

Restart internal network:

sudo virsh net-destroy Whonix-Internal

virsh -c qemu:///system net-start Whonix-Internal

sudo ifconfig confirms the dynamically assigned IP is functional.
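
For example, either of these should show the leased address (assuming the guest interface is still named eth0):

sudo ifconfig eth0

ip -4 addr show dev eth0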


I read the manual, and a default dnsmasq install does forward requests it cannot resolve to upstream servers.

However there is evidence that it does not resolve DNS as implemented in libvirt:

On linux host servers, libvirtd uses dnsmasq to service the virtual networks, such as the default network. A new instance of dnsmasq is started for each virtual network, only accessible to guests in that specific network.

  • dnsmasq is visible to an nmap scan from the WS, but not much else.

  • Sent a DNS request to it from the WS with this result:

    dig microsoft.com @10.152.152.0

    ; <<>> DiG 9.11.5-P4-3-Debian <<>> microsoft.com @10.152.152.0
    ;; global options: +cmd
    ;; connection timed out; no servers could be reached


Let’s decide whether we want this feature by default or simply make it optional and document it. Can VBox support this too? Maybe enable it in a set of packages for KVM builds?

  1. Compatible with manually set static IPs in case this is needed for
    something?

  2. What about Tor config file based static IPs according to our onion
    services instructions, i.e. HiddenServicePort 80 10.152.152.11:80? Would
    this break? See also next question.

  3. How dynamic are the IPs? Ideally, the internal LAN IPs would be
    automatically, dynamically assigned at first boot but then kept forever.

For this purpose there could be a unique (randomly chosen) ID from which
the IP is derived so it’s unlikely that two VMs booted for the first
time (while another one is offline) end up with the same IP.

(A booted VM vs another booted VM always has a unique fingerprint
compared to each other - different time stamps in logs and whatnot)
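
A rough sketch of what such a derivation could look like, using /etc/machine-id as the per-VM random source and mapping into the /18 from above (hypothetical; collisions with the reserved .152.10 / .152.11 addresses are ignored here):

#!/bin/bash
## Hypothetical sketch: derive a stable per-VM IP from the machine-id.
## Same VM -> same IP across reboots; different VMs collide only with
## low probability.
id_hash="$(sha256sum /etc/machine-id | cut -c1-8)"  # first 32 bits of the hash
n=$(( 0x$id_hash % 16000 ))                         # ~16k host slots
third=$(( 128 + n / 254 ))                          # third octet: 128..190
fourth=$(( 1 + n % 254 ))                           # fourth octet: 1..254
echo "10.152.${third}.${fourth}"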

HulaHoop via Whonix Forum:

You mean for a network that looks like

WS1 ↔ GW ↔ WS2
where WS1 and WS2 cannot communicate but share the same GW?

Yes.

Does the GW on Qubes have two different internally facing NICs in this
case?

Yes, as many additional vif+ interfaces (vif1, vif2, vif3, …) as needed,
dynamically created per separate WS.

Let’s decide whether we want this feature by default or simply making it optional and documenting it.

Can VBox support this too?

No idea, but if it did and this could be enabled by default for Whonix
VBox and KVM builds at the same time, without KVM-only specifics, then
this would increase my motivation to implement this.

Maybe enable it in a set of packages for KVM builds?

Patches welcome.

Yes, libvirt allows pairing a static IP from the DHCP server with a VM that has a specific MAC address. I would need to ship a VM with a preconfigured MAC and adjust the networking file to always assign it 10.152.152.11.
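
For illustration, that pairing would live in the same <dhcp> block edited earlier, along these lines (the MAC address is just a placeholder):

<ip address='10.152.152.0' netmask='255.255.192.0'>
  <dhcp>
    <range start='10.152.128.1' end='10.152.191.254'/>
    <!-- always hand out 10.152.152.11 to the VM with this placeholder MAC -->
    <host mac='52:54:00:00:00:11' name='Whonix-Workstation' ip='10.152.152.11'/>
  </dhcp>
</ip>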

With the explicit static assignment above, no, it shouldn’t.

Haven’t noticed; will need to test again.


This setup is actually superior unless the user insists on having the multi WS communicate. Do you also spawn a separate Tor daemon for each interface? How hard would it be to do that? It would have the security advantages of multiple GWs while cutting down on resource use.


HulaHoop via Whonix Forum:

This setup is actually superior unless the user insists on having the multi WS communicate.

having the multi WS communicate is supported as an optional feature as per:

Firewall | Qubes OS

Do you also spawn a separate Tor daemon for each interface?

No.

How hard would it be to do that?

Hard. Partially reinventing Qubes.

It would have the security advantages of multiple GWs while cutting down on resource use.

Qubes does not have some/most of the Non-Qubes-Whonix disadvantages [no unwanted default inter-WS communication] of Multiple Whonix-Workstation, but it doesn’t automate Tor Entry Guards - Whonix.


So I can see an open TCP port. However, it responds as if it’s “tcpwrapped”. That implies that if you connect over an interface other than virbr0, dnsmasq closes the connection without reading any data. So the data you send to it doesn’t matter; it can’t, e.g., exploit a classic buffer overflow.
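
For reference, “tcpwrapped” is what a version-detection scan reports when the TCP handshake completes but the connection is closed before any data is exchanged; a command along these lines against the dnsmasq listener (address taken from the network definition above) produces that kind of result:

nmap -sT -sV -p 53 10.152.152.0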

This comment is about DNS implications for the host, as DHCP is bound to a specific interface.



DNS is not explicitly enabled for guests unless added to a network’s config.
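
If one wanted to make that explicit, libvirt’s network XML has a <dns> element; as far as I understand the syntax (untested here), disabling DNS service for the guests entirely would be done by adding this inside the <network> definition edited via virsh net-edit:

<dns enable='no'/>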

