Long Wiki Edits Thread

Right.

What about configuring R4 to use an ephemeral gateway just for updates? More secure? Good idea or not?

Also, maybe it’s a privacy vs. security tradeoff again, i.e. always running a non-persistent gateway makes you more trackable (Tails-like), but makes it very unlikely your sys-whonix is ever infected for more than one session, because it’s disposable?

Contrast that with someone running the same sys-whonix day in, day out, which still has writable /rw and /home directories that can be screwed with.

1 Like

torjunkie:

Right.

What about just for updates configuring R4 to use an ephemeral
gateway? More secure? Good idea or not?

Non-persistent entry guards are discouraged in most cases with the same
reasoning. Updates vs non-updates makes no difference.

Also, maybe it’s a privacy vs security tradeoff again i.e. always
running a non-persistent gateway makes you more trackable
(Tails-like), but makes it very unlikely your sys-whonix is ever
infected for more than one session, because it’s disposable?

Yes.

Dear @torjunkie, please note one thing: the caveats for Qubes Disposables still apply. Most notably, the following ticket…

That is still not implemented. Could you please kindly clarify the footnote at Advanced Security Guide - Whonix a bit?

Unless the user utilizes ephemeral Whonix DisposableVMs for both the Whonix-Workstation and Whonix-Gateway, which is available in Qubes R4.

I don’t think most people, and/or the people in theory capable of working on security, have any idea how many construction sites are open and urgently need work.

Draft for a “research wanted” blog post.

We need the blog post, but then we also need to publicize it widely on all mailing lists, social media, etc. etc.


subject:

extracting encryption keys from powered off notebooks


Let’s take for example Wipe RAM on Shutdown. It’s not a default feature in Debian or Qubes; even how to do it manually is undocumented.
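
A minimal sketch of doing it manually on Debian, assuming the secure-delete package (which provides sdmem) plus a systemd shutdown hook; the script path, name and flags here are illustrative, not a tested recipe:

#!/bin/sh
# /usr/lib/systemd/system-shutdown/wipe-ram.shutdown (hypothetical name)
# systemd-shutdown runs executables in this directory very late in shutdown
# and passes the action as $1: halt, poweroff, reboot or kexec.
case "$1" in
    poweroff|halt)
        # sdmem overwrites free RAM; -v is verbose, -l -l selects the
        # fastest single-pass mode so shutdown does not take forever.
        sdmem -v -l -l
        ;;
esac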

When we think a notebook is powered off, is it really powered off? It’s not. And this may aid cold boot attacks.

Most people think there is only the BIOS and the operating system running on the computer. But there is also Intel ME and whatnot. It’s not well understood.

The on/off switch of a notebook is clearly not a hardware switch. Evidence that it is a software button, and not a hardware switch like a usual room light switch that physically cuts power:

  • when hard powering off (just cutting power to the running operating system), I have to press the power button for 3 seconds. Pressing it for just 1 second would result in an ACPI shutdown; 3 seconds results in instant power off.

  • sometimes power-on does not work - I first have to remove the power cable and re-attach it. It’s a bug in whatever operating system is running below the operating system we know is running.

As long as a battery and/or power cable is connected to a notebook (or as long as a power cable is connected to a computer), are we sure that the RAM is really powered off? And is this true for all brands?

Research is needed: use the power-off function of the operating system, wait a few minutes, then boot from a USB and do a RAM dump. See which traces can be recovered, similar to the cold boot attack.
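
A rough sketch of such an experiment, assuming a live USB carrying the LiME kernel module built for its kernel; the marker string, paths and the use of grep as a crude scanner are all illustrative:

# Before the "power off", on the machine under test: plant a recognizable
# marker in RAM and keep the process alive.
python3 -c "m = b'COLDBOOT_MARKER_2718' * 4096; import time; time.sleep(86400)" &

# Power off via the OS, wait a few minutes, boot the live USB, then:

# Dump physical memory to the USB stick with LiME (paths illustrative).
sudo insmod ./lime.ko "path=/mnt/usb/ram.lime format=lime"

# Count lines containing the marker; any hits suggest RAM retained
# content across the "power off".
grep -c --binary-files=text COLDBOOT_MARKER_2718 /mnt/usb/ram.lime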


With a battery attached, probably not, but that needs more testing/research.

It gets interesting with new developments like Intel’s NVRAM (non-volatile RAM), which is basically SSD-like memory introduced in server hardware for performance.

  1. → All done/fixed.

I’ll keep looking at the Advanced Security Guide (finish edits), so we can finally shift all the stuff around as per the Phabricator item.

  2. Also, re: this Phabricator item:

⚓ T728 Tor 0.3+ does not work on QubesOS (but Tor 0.2.9.10 works)


That is false.

Tor 0.3.1.7 and 0.3.1.8 from Debian work, including with connection padding.

However, they tend to time out on the first connection attempt, i.e. getting stuck at the 90% mark (“Establishing a Tor circuit”). After running whonixcheck --debug --verbose (one or two times), they eventually connect.

It usually resolves itself (bootstraps to 100%) after this line stops flooding the Tor logs:

[NOTICE] New control connection opened from 127.0.0.1. [X duplicates hidden]

I think it probably relates to slow connections at this time. There seem to be a number of Tor Trac bugs around similar issues, descriptors (‘descs’) failing to download properly, etc.

I don’t think this is Whonix-specific, but relates to latest Tor releases. So, maybe close it.

  3. According to this:

connection - been tampered? been filtered ? or just a network settings messing around? - Tor Stack Exchange

These lines in the bridges Tor config file:

ClientTransportPlugin obfs2,obfs3 exec /usr/bin/obfsproxy managed
ClientTransportPlugin obfs4 exec /usr/bin/obfs4proxy

can be combined into one line like this, since obfs4proxy also speaks obfs3 (note that the exec keyword is still required):

ClientTransportPlugin obfs3,obfs4 exec /usr/bin/obfs4proxy

Haven’t tested it, but canonizing ironize seems very knowledgeable.

1 Like

Advice that needs documenting…

  • be careful when posting logs (from Konsole etc.) - they could contain the operating system user name (some people use their real names there) as well as other identifying information (hardware serials and whatnot)
  • be careful when making screenshots (related to the above) (also so that the background, or the issues covered at Surfing Posting Blogging - Whonix, do not cause trouble)
  • use something non-identifiable for the operating system real name and user name

Could you add it to the DoNot page or elsewhere?

I see in Tips on Remaining Anonymous: Difference between revisions - Whonix you’ve added…

Great! :slight_smile:

Could you please add another item “Do not photograph your screen”? Reasons:

  • Surfing Posting Blogging - Whonix
  • reflections on the screen (possibly so minor that they cannot be seen with the naked eye, but can be recovered with graphical manipulation tools / magnifiers)?
  • risk of photographing something more than the screen (visible background)
  • leaking date / time (daylight), thereby hinting at the timezone and approximate location
  • possible fingerprints on the screen?

Done! :slight_smile: (+metadata page)

Sorry about the slow edits these days - should have more time in a few weeks.

1 Like

I think we need to mention something about rotating your sys-whonix every few months, and deleting the old one for higher security / privacy. Easy in standard Whonix.

But for Qubes, I guess the normal steps would be: create a new sys-whonix-1; change all templates currently linked to sys-whonix for updates to point to sys-whonix-1; set sys-whonix-1 as the ProxyVM, making sure its VM settings mirror the old net settings and the “start on boot” setting; test that it works; then delete the old sys-whonix. All done? Haven’t bothered trying before. (A sketch of those steps as dom0 commands follows below.)
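
A minimal sketch of those steps as dom0 commands in Qubes R4 (VM and template names illustrative, untested; note the autostart and hardcoding caveats that come up later in this thread):

# dom0: create a fresh gateway from the Whonix-Gateway template
qvm-create --class AppVM --template whonix-gw --label black sys-whonix-1
qvm-prefs sys-whonix-1 provides_network True
qvm-prefs sys-whonix-1 netvm sys-firewall
qvm-prefs sys-whonix-1 autostart True

# point dependent VMs at the new gateway (repeat per VM)
qvm-prefs anon-whonix netvm sys-whonix-1

# if sys-whonix was also dom0's UpdateVM, repoint it
qubes-prefs updatevm sys-whonix-1

# after testing, disable and remove the old gateway
qvm-prefs sys-whonix autostart False
qvm-remove sys-whonix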

I just don’t trust that 25K users on the Linux fringe wouldn’t be targeted as interesting users. I think it makes logical sense. And the wiki points out that you’re screwed if the GW is compromised, i.e. the attacker gets access to Tor data and whatnot.

1 Like

Having this documented sounds good. However, it conflicts with Tor entry guards. So perhaps restore Tor’s state - unless that is complex and conflicts with the original purpose?
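
For non-Qubes Whonix, restoring Tor’s state could be as simple as carrying over /var/lib/tor (where the state file with the entry guards lives); a rough, untested sketch, with the service name and paths as assumptions:

# On the old Whonix-Gateway: archive Tor's state (includes entry guards)
sudo systemctl stop tor@default
sudo tar -C /var/lib -czf /tmp/tor-state.tar.gz tor

# Move the archive to the new gateway, then restore it there:
sudo systemctl stop tor@default
sudo tar -C /var/lib -xzf /tmp/tor-state.tar.gz
sudo chown -R debian-tor:debian-tor /var/lib/tor
sudo systemctl start tor@default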

Potential Qubes Bug:

  1. Create a new sys-whonix (sys-whonix-clone ProxyVM)
  2. Point all TemplateVMs to sys-whonix-clone
  3. Edit sys-whonix-clone template settings to match current sys-whonix
  4. Edit current sys-whonix VM settings to not start automatically on boot
  5. Reboot system.

Expected:

Only sys-whonix-clone ProxyVM automatically boots (along with sys-net & sys-firewall).

Actual behavior:

Both sys-whonix-clone and sys-whonix boot on start-up.

Comment:

Probably sys-whonix is hardwired somewhere in Qubes such that if the Whonix ProxyVM is scheduled to start on boot, it searches for a VM named “sys-whonix” and starts it automatically.

Probable workaround solution:

Just delete sys-whonix, rename the sys-whonix-clone to sys-whonix & reboot.

Rationale for semi-regular sys-whonix rotation:

  • The probability of specialist software attacking platforms like Qubes is high. Particularly after Snowden endorsed it.
  • Likely that specific, tell-tale Qubes signatures have been hard-wired into passive surveillance software.
  • Expected that said software undertakes automated attacks on network-facing VMs, exploiting writable directories, known Xen/VM/Tor/kernel flaws, and weak network protocols/hardware/software/firmware to establish an infection.
  • Subsequently, it launches covert attacks on other running VMs - eventually hopping across to infect the majority of VMs, and so on.
  • sys-whonix is therefore an obvious and logical central target if the shadow state were my employer.

PS: A test of creating another sys-whonix confirms that it generates a new set of Tor entry guards. I don’t think that is a bad thing, because if a user distrusts the current sys-whonix, it could be for various reasons:

  • Evidence in logs of attacks upon the Tor or Whonix processes, e.g. AppArmor goes haywire, Tor messages talk about potential routing attacks, etc.
  • Current guards are wholly unreliable or very slow.
  • Other suspicious behaviour e.g. unusual Gateway behaviour, unexplained crashes of the VM, unusual files/software installations found in the Gateway etc.
  • General concern that too much Tor data has accumulated regarding sensitive activities, such that a fresh gateway is warranted in the circumstances.

The fact is that if this is only done every month or two, then it is not that far off the usual guard rotation (which I think was 3 months?), and is nothing like Tails, where random entry points are occurring all the time. So, the anonymity set reduction isn’t too bad.

I don’t see any particular reason to try and bring the old Tor settings / data over either, particularly if you want a fresh slate.

1 Like

Qubes TemplateVMs are now non-networked in R4, so you shouldn’t point
them to either sys-whonix or sys-whonix-clone. (They’re using the Qubes
qrexec-based updates proxy.) It’s worth mentioning, but besides the point
you made.

It’s true. It’s hardcoded. And it needs documentation.

Could you have a look in this folder please?

qubes-mgmt-salt-dom0-virtual-machines/qvm at master · QubesOS/qubes-mgmt-salt-dom0-virtual-machines · GitHub

Then try to read the files which mention whonix?

For example
https://github.com/QubesOS/qubes-mgmt-salt-dom0-virtual-machines/blob/master/qvm/template-whonix-gw.sls
(at the bottom) might explain where it is hardcoded and might also
explain how to have a sys-whonix-clone being used. Or how to clone a
whonix (or any) template and have it upgraded through a sys-whonix-clone.
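
For a quick overview, something like this in a checkout of that repo would list them (path illustrative):

# list every file in the salt formula that mentions whonix
grep -rl whonix qubes-mgmt-salt-dom0-virtual-machines/qvm/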

Would appreciate very much if you could document this! This will
certainly be an asked question in the future…

torjunkie:

The fact is that if this is only done every month or two, then it is
not that far off the usual guard rotation (which I think was 3
months?), and is nothing like Tails, where random entry points are
occurring all the time. So, the anonymity set reduction isn’t too bad.

Needs a bit more research. They are now used “much longer” than before.
Could you please try to extract the current algorithm (maybe from

or by asking TPO) and document that?
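
One quick data point can be pulled from an existing Gateway: recent Tor versions record when each guard was sampled in the state file (path per Debian defaults):

# on Whonix-Gateway: list guard entries with their sampled_on dates
sudo grep -i '^guard' /var/lib/tor/state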

Reminds me that Qubes salt is undocumented.

Later, enabling “updates-via-whonix” is just one salt command or so.
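
Judging by the header of updates-via-whonix.sls quoted further below, that one command in dom0 would presumably be:

sudo qubesctl state.sls qvm.updates-via-whonix dom0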

new chapter:

How-to: Ledger Hardware Wallet in Kicksecure ™

OK - will have a look, but I’m not technical like you guys :wink:

Will try and find out the rotation parameters too.

1 Like

Well, a search for sys-whonix shows the following relevant areas; not sure which one matters - but it could be useful to know everywhere it’s hardcoded, for other purposes.

  1. [qubes-mgmt-salt-dom0-virtual-machines/qvm/anon-whonix.sls at master · QubesOS/qubes-mgmt-salt-dom0-virtual-machines · GitHub anon-whonix.sls]

include:
  - qvm.template-whonix-ws
  - qvm.sys-whonix
  - qvm.whonix-ws-dvm

{%- from "qvm/template.jinja" import load -%}

{% load_yaml as defaults -%}
name: anon-whonix
present:
  - template: whonix-ws
  - label: red
prefs:
  - netvm: sys-whonix
  - default-dispvm: whonix-ws-dvm
tags:
  - add:
    - anon-vm
require:
  - pkg: template-whonix-ws
  - qvm: sys-whonix
  - qvm: whonix-ws-dvm
{%- endload %}

  2. [qubes-mgmt-salt-dom0-virtual-machines/qvm/sys-whonix.sls at master · QubesOS/qubes-mgmt-salt-dom0-virtual-machines · GitHub sys-whonix.sls]

# qvm.sys-whonix
# ==============
#
# Installs 'sys-whonix' ProxyVM.
#
# Pillar data will also be merged if available within the qvm pillar key:
#   qvm:sys-whonix
#
# located in /srv/pillar/dom0/qvm/init.sls
#
# Execute:
#   qubesctl state.sls qvm.sys-whonix dom0
##

include:
  - qvm.template-whonix-gw
  - qvm.sys-firewall

{%- from "qvm/template.jinja" import load -%}

{% load_yaml as defaults -%}
name: sys-whonix
present:
  - template: whonix-gw
  - label: black
  - mem: 500
prefs:
  - netvm: sys-firewall
  - provides-network: true
  - autostart: true
require:
  - pkg: template-whonix-gw
  - qvm: sys-firewall
{%- endload %}

{{ load(defaults) }}

  3. [qubes-mgmt-salt-dom0-virtual-machines/qvm/sys-whonix.top at master · QubesOS/qubes-mgmt-salt-dom0-virtual-machines · GitHub sys-whonix.top]

# -*- coding: utf-8 -*-
# vim: set syntax=yaml ts=2 sw=2 sts=2 et :

base:
  dom0:
    - match: nodegroup
    - qvm.sys-whonix

  4. [https://github.com/QubesOS/qubes-mgmt-salt-dom0-virtual-machines/blob/master/qvm/template-whonix-gw.sls template-whonix-gw.sls]

whonix-gw-update-policy:
  file.prepend:
    - name: /etc/qubes-rpc/policy/qubes.UpdatesProxy
    - text:
      - whonix-gw $default allow,target=sys-whonix
      - whonix-gw $anyvm deny

  5. [https://github.com/QubesOS/qubes-mgmt-salt-dom0-virtual-machines/blob/master/qvm/template-whonix-ws.sls template-whonix-ws.sls]

whonix-ws-update-policy:
  file.prepend:
    - name: /etc/qubes-rpc/policy/qubes.UpdatesProxy
    - text:
      - whonix-ws $default allow,target=sys-whonix
      - whonix-ws $anyvm deny

  6. [qubes-mgmt-salt-dom0-virtual-machines/qvm/updates-via-whonix.sls at master · QubesOS/qubes-mgmt-salt-dom0-virtual-machines · GitHub updates-via-whonix.sls]

# -*- coding: utf-8 -*-
# vim: set syntax=yaml ts=2 sw=2 sts=2 et :

##
# qvm.updates-via-whonix
# ===============
#
# Setup UpdatesProxy to always use sys-whonix.
#
# Execute:
#   qubesctl state.sls qvm.updates-via-whonix dom0
##

default-update-policy-whonix:
  file.prepend:
    - name: /etc/qubes-rpc/policy/qubes.UpdatesProxy
    - text:
      - $type:TemplateVM $default allow,target=sys-whonix

  7. [https://github.com/QubesOS/qubes-mgmt-salt-dom0-virtual-machines/blob/master/qvm/whonix-ws-dvm.sls whonix-ws-dvm.sls]

include:
  - qvm.template-whonix-ws
  - qvm.sys-whonix

{%- from "qvm/template.jinja" import load -%}

{% set gui_user = salt['cmd.shell']('groupmems -l -g qubes') %}

{% load_yaml as defaults -%}
name: whonix-ws-dvm
present:
  - template: whonix-ws
  - label: red
prefs:
  - netvm: sys-whonix
  - template-for-dispvms: true
  - default-dispvm: whonix-ws-dvm
tags:
  - add:
    - anon-vm
features:
  - enable:
    - appmenus-dispvm
require:
  - pkg: template-whonix-ws
  - qvm: sys-whonix
{%- endload %}

1 Like

Correct. Can you make head or tail of which files have to be modified, and how?

Not really! :smile:

But I’ll tell you one thing that doesn’t work (someone had to be the guinea pig). Re:

Probable workaround solution:

Just delete sys-whonix, rename the sys-whonix-clone to sys-whonix & reboot.

Result is:

  1. the clone which was renamed to sys-whonix doesn’t boot automatically (but you can start it manually after reaching the normal Qubes desktop).
  2. when you try to run dom0-updates over whonix, you get the message “UpdateVM not set, exiting”
  3. this is fixed by:

qubes-prefs --set updatevm sys-whonix

  4. TemplateVM updates still work, including over .onions

  5. The GUI TemplateVM settings don’t show the NetVM currently in use, but an incorrect value. But when you scroll up, you can see it is all still configured properly, with the “(current)” tag set to all the right VMs.

So, in short, not recommended. Obviously sys-whonix is much more hard-coded than a renamed clone with an identical name…

Anyway, this is a big problem as it stands now, i.e. if a Qubes-Whonix user has a sys-whonix that becomes corrupted, or simply won’t ever bootstrap, then they have to do a lot of manual fiddling on every reboot.

Reason: their brand spanking new sys-whonix either won’t boot automatically, or if they kept the old one, it will always boot, no matter what VM settings they change.

The fact this has never been mentioned tells me that not a single person in the community has ever rotated their sys-whonix. That’s an awful lot of trust they’re putting in its integrity over many months.

1 Like

Theory: Create a VM and set it to autostart. Clone it. It won’t inherit the autostart property. Could you please test whether that is true, and then open a bug at Issues · QubesOS/qubes-issues · GitHub if that is unexpected behavior?
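
A quick way to test that theory in dom0 (VM name and template are illustrative):

# dom0: create a test VM and enable autostart
qvm-create --class AppVM --template whonix-gw --label black test-autostart
qvm-prefs test-autostart autostart True

# clone it, then read back the property on the clone
qvm-clone test-autostart test-autostart-clone
qvm-prefs test-autostart-clone autostart    # prints True if inherited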

When you delete the VM which is currently set as UpdateVM, what would you expect to happen? To keep the setting pointing at the now non-existent VM?

To regenerate sys-whonix, is there a way to regenerate its private image? @marmarek

Or perhaps you delete sys-whonix and then just re-run all the salt states to have it re-set up for you?
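
Based on the header of sys-whonix.sls quoted above, that re-run would presumably be:

sudo qubesctl state.sls qvm.sys-whonix dom0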

Do you know TemplateVMs are now non-networked in Qubes R4? So I would expect it to be set to none. Right?

It requires disabling autostart for sys-whonix, as well as not setting it as the NetVM for any other VM, to prevent it from being autostarted.
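
In dom0 terms, something like this (the qvm-ls field names are an assumption):

# prevent sys-whonix itself from autostarting
qvm-prefs sys-whonix autostart False

# find any VM still using it as NetVM; those would pull it up at boot
qvm-ls --fields NAME,NETVM | grep sys-whonix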