whonix-gateway not reachable

Now I have updated to 4.19.47-1, but unfortunately the problem remains; nothing has changed.

Try also 4.14.119-2 by editing your xen.cfg default= line if using UEFI
boot, or selecting it from the Grub boot menu. The two might be
unrelated of course, but I’m running 4.19.47-1 with the latest stable
patches in whonix-gw and whonix-ws with no problems. Are you on any
testing repositories?

The oldest kernel version I have is 4.14.116-1, so I played around with it.
Setting the kernel version of sys-whonix-14 to 4.14.116-1 fixed the problem. This works regardless of whether 4.14.116-1 or 4.19.47-1 is running in dom0 and anon-whonix.
So the problem seems to be in the kernel of sys-whonix-14.
I'm very happy to have a working Whonix setup again. Thanks.
Still, it would be nice to be able to use a recent kernel.

Setting the kernel version of sys-whonix-14 to 4.14.116-1 fixed the problem.

Good, that will make troubleshooting easier. I wonder why my sys-whonix
works at 4.19.47-1, but not yours. What version of the whonix-gw-14
template do you have installed? Mine is 4.0.1-201901231238. Did you
install any patches from testing?

Running sudo dnf info qubes-template-whonix-gw-14:

Last metadata expiration check: 22:27:32 ago on Tue Jun 4 13:50:32 2019.
Installed Packages
Name : qubes-template-whonix-gw-14
Arch : noarch
Epoch : 0
Version : 4.0.1
Release : 201807171801
Size : 2.0 G
Repo : @System
From repo : qubes-dom0-cached
Summary : Qubes template for whonix-gw-14
URL : http://www.qubes-os.org
License : GPL
Description : Qubes template for whonix-gw-14

As far as I can say/remember, I made no changes to the template: no additional packages and no changes to the repository configuration. Even if I had, those changes should have been wiped by running:

sudo qubes-dom0-update --enablerepo=qubes-templates-community --action=reinstall qubes-template-whonix-gw-14

Or am I wrong about how this works?

That is an older version of the template. Did you perform that reinstall
recently? It should work, but you might try it manually if it's not
updating. You can clone whonix-gw to a temporary template and point
sys-whonix to it while you reinstall manually, without having to do it in the clear.

I manually removed qubes-template-whonix-gw-14 using dnf and reinstalled it. Now my version is:

sudo dnf info qubes-template-whonix-gw-14
Last metadata expiration check: 2:40:05 ago on Wed Jun 5 14:16:09 2019.
Installed Packages
Name : qubes-template-whonix-gw-14
Arch : noarch
Epoch : 0
Version : 4.0.1
Release : 201901231238
Size : 1.9 G
Repo : @System
From repo : qubes-dom0-cached
Summary : Qubes template for whonix-gw-14
URL : http://www.qubes-os.org
License : GPL
Description : Qubes template for whonix-gw-14

I installed all updates in whonix-gw-14 (I didn't modify the repo settings, so it should be stable).

The situation is unchanged: when sys-whonix-14 is booted with the 4.14.116-1 kernel, everything works fine; when using 4.19.47-1, running whonixcheck in anon-whonix fails with the old error.

The situation is unchanged: when sys-whonix-14 is booted with the 4.14.116-1 kernel, everything works fine; when using 4.19.47-1, running whonixcheck in anon-whonix fails with the old error.

How about Firefox in a Fedora based AppVM connected to sys-whonix on
4.19, does that work now? If not, I am running out of suggestions. Maybe
reinstall Qubes 4.0.1 if you’ve been upgrading from 4.0, unless someone
has a better idea.

A Fedora-based AppVM is not able to access the internet through sys-whonix-14 with the 4.19 kernel. :frowning:

I think that you are right about it being related to problems with updating from 4.0 to 4.0.1.
sudo dnf info qubes-release says:

Installed Packages
Name : qubes-release
Arch : noarch
Epoch : 0
Version : 4.0
Release : 8
Size : 157 k
Repo : @System
From repo : qubes-dom0-cached
Summary : Qubes release files
License : GPLv2
Description : Qubes release files such as yum configs and various /etc/ files
: that define the release.

But when running qubes-dom0-update it says “No new updates available”.

Do you know anything about such issues caused by a kernel upgrade? @marmarek

4.0.1 is just 4.0 with all routine updates. There is no separate action of upgrading from 4.0 to 4.0.1. You can install either one, then fully update, and it results in the same system.

I think @awokd just meant that something might have gotten broken between your original 4.0 installation and the present. If so, he is suggesting that you might wish to reinstall. If you were to reinstall, it would make sense to use the most recent ISO, which is 4.0.1.

Perhaps this:

I think @awokd just meant that something might have gotten broken between your original 4.0 installation and the present. If so, he is suggesting that you might wish to reinstall. If you were to reinstall, it would make sense to use the most recent ISO, which is 4.0.1.

Yes, sorry, I meant that, or maybe he is running one of the earlier RC
versions. Not sure what else could be different at the networking level.
Here is another user who was having the same problem with the newer kernel
and sys-whonix:
https://www.mail-archive.com/qubes-users@googlegroups.com/msg28628.html.

No idea. I've just tried a fully updated sys-whonix with the 4.19.47 kernel and anon-whonix with 4.19.47, and it works for me (whonixcheck -v in anon-whonix says so). The qubes-template-whonix-gw-14 package version is 4.0.1-201807171801.
Is your anon-whonix connected directly to sys-whonix? Do you see any failed services in sys-whonix (unlikely, as whonixcheck would detect that)? Also check whether you see a vifXX.0 interface in sys-whonix (with some number instead of XX).

BTW, looking at iptables -t nat in sys-whonix, I see the redirection rules for 10.0.0.0/8 duplicated for each port (only for that IP range; 192.168.0.10 and 10.152.152.10 are listed only once). Also, 10.152.152.10 is already handled by 10.0.0.0/8, so one or the other is not needed.
Additionally, according to the iptables-extensions manual, --to-ports is not necessary if you don't change the port. That would allow compacting all those rules to very few - one for each port range. Like this:

iptables -t nat -A PREROUTING -i vif+ -d 10.152.152.10 -p tcp --dport 9152:9189 -j REDIRECT

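The overlap described above (10.152.152.10 already being covered by the 10.0.0.0/8 rule, while 192.168.0.10 is not) can be verified with Python's ipaddress module. This is an illustration added here, not part of the thread's actual firewall code:

```python
import ipaddress

# 10.152.152.10 falls inside 10.0.0.0/8, so once a REDIRECT rule
# exists for the /8 range, a separate rule for that address is redundant.
supernet = ipaddress.ip_network("10.0.0.0/8")
print(ipaddress.ip_address("10.152.152.10") in supernet)  # True

# 192.168.0.10 is outside the /8, so it still needs its own rule.
print(ipaddress.ip_address("192.168.0.10") in supernet)   # False
```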
WORKSTATION_DEST_SOCKSIFIED="\
10.137.0.0/8,\
10.138.0.0/8"

Is working.

But even with only

WORKSTATION_DEST_SOCKSIFIED="10.137.0.0/8"

DispVMs are also working? How come?

And also somehow…

iptables --wait -t nat -A PREROUTING -i vif+ -d 10.137.0.0/8 -p tcp --dport 9168 -j REDIRECT --to-ports 9168

Results in:

-A PREROUTING -d 10.0.0.0/8 -i vif+ -p tcp -m tcp --dport 9168 -j REDIRECT --to-ports 9168

How come?

Nice. Done.

Oh, this is where 10.0.0.0/8 comes from. /8 is a netmask, and everything not covered by it is ignored. 10.137.0.0/8 basically says "compare just the first 8 bits of the address". If you want to cover 10.137.*.* but not other 10.x.*.*, it should be 10.137.0.0/16.
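The rewrite observed earlier (a rule entered as 10.137.0.0/8 showing up as 10.0.0.0/8) is this masking at work: iptables stores the network with the host bits zeroed. A quick illustration with Python's ipaddress module (added here for clarity, not part of the thread's setup):

```python
import ipaddress

# With /8, only the first octet counts: the host bits of 10.137.0.0
# are zeroed, giving 10.0.0.0/8 - exactly what the saved rule showed.
print(ipaddress.ip_network("10.137.0.0/8", strict=False))  # 10.0.0.0/8

# With /16, the first two octets count, so 10.137.*.* matches
# but 10.138.*.* does not.
net16 = ipaddress.ip_network("10.137.0.0/16")
print(ipaddress.ip_address("10.137.4.2") in net16)  # True
print(ipaddress.ip_address("10.138.4.2") in net16)  # False
```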

I’m not sure if https://github.com/Whonix/whonix-firewall/commit/fbf4ad487241bcd3cd0acd11b3faec864cdbbb85 is correct. On my qubes system, /lib/systemd/system/anon-ws-disable-stacked-tor_autogen__var_run_tor_socks.service redirects /var/run/tor/socks to 10.152.152.10:9050

Awesome! Fixed.

Indeed. That would have led to breakage.

It’s hard to use the actual Qubes-Gateway IP there:

  • it’s defined in a source file
  • unknown at build time
  • dynamic per Qubes installation

re-added:

I bumped into this a couple of weeks ago; I too think it was after a Qubes kernel update. I didn't investigate until today, but what I found out was that no IP address is set on vif interfaces when they are attached. It seems like /usr/lib/qubes/setup-ip is not triggered (I put 'touch /tmp/thing' right under the hash bang, and no such file was created when a new VM was started and the vif was attached).

With ‘dmesg -w’ I saw UDEV/KERNEL lines with add+online and offline+remove, but none with bind or unbind.

With ‘udevadm info --attribute-walk /sys/class/net/vifXX.0’ I saw ATTRS(state)==“InitWait” where it should be “Connected”

Manually setting the same IP on the attached vifXX.0 as on the other interfaces, 'ip address add dev vifXX.0 10.137.0.17/32', enables the Whonix gateway to route traffic from the connected VM through Tor.

I have not yet tried a regular Debian (Stretch) template as the netvm to see if the problem is the same. There are no issues with a netvm based on a Fedora 29 or 30 template.

That's it! Somehow I had missed your comment, but now it has happened to me too and I've found the same thing.
Configuring vif* interfaces is the responsibility of the /etc/xen/scripts/vif-route-qubes script called by the xendriverdomain service, not udev. It fails, with logs in /var/log/xen/xen-hotplug.log. I see a single error there:

RTNETLINK answers: Permission denied

After adding set -x at the beginning of the script, I see it fails while trying to set an IPv6 address, probably because IPv6 is disabled in Whonix.
Indeed, I have enabled IPv6 globally in my system (https://www.qubes-os.org/doc/networking/#ipv6). The same page explains how to disable it for a particular VM (and the others connected to it). So I've executed qvm-features sys-whonix ipv6 '', restarted sys-whonix, and now it works.

So I've executed qvm-features sys-whonix ipv6 '', restarted sys-whonix, and now it works.

That explains it. I had disabled ipv6 for Whonix with the above
command several months ago.

Is it possible (preferably) to fix this in /etc/xen/scripts/vif-route-qubes? Created

for it.

That would also be an alternative solution that could be added to the Whonix Salt configuration, but it could later backfire if/when we implement IPv6 on the Whonix side (although that is not on the mid-term horizon: https://phabricator.whonix.org/T509).

(related: https://github.com/QubesOS/qubes-mgmt-salt-dom0-virtual-machines/tree/master/qvm)