Whonix live mode / amnesia / amnesic / non-persistent / anti-forensics

Agreed, this is a drawback. If you know how to submit a patch, that would be awesome.

Awesome.
Here?: Live Mode for Kicksecure

and/or on the /Tor page (mention, link)?

I will look into it. I am not familiar enough with the build source code to know where this would go. Correct me if my assumptions are wrong. I'm thinking about this in regards to VirtualBox and KVM. Steps required:

  1. Create an additional drive for VirtualBox or KVM with a consistent UUID.
  2. Modify /etc/fstab to contain:

/dev/disk/by-uuid/[persistent uuid] /var/lib/tor auto defaults,errors=remount-ro 0 2

  3. Ensure /var/lib/tor is still owned by debian-tor:debian-tor.

vbox: add here https://github.com/Whonix/Whonix/blob/master/build-steps.d/2600_create-vbox-vm

add here: https://github.com/Whonix/whonix-libvirt/tree/master/usr/share/whonix-libvirt/xml

Use a script with command line tools to create a partition table and format the disk at first boot, started by a systemd unit file.
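Such a first-boot unit could look roughly like this. The unit name, the script path /usr/libexec/setup-persistent-disk, and the PERSISTENT-UUID placeholder are all hypothetical, not part of any existing package:

```ini
# /lib/systemd/system/setup-persistent-disk.service (hypothetical name)
[Unit]
Description=Partition and format the persistence disk at first boot
# Only run while the target filesystem does not exist yet.
ConditionPathExists=!/dev/disk/by-uuid/PERSISTENT-UUID
DefaultDependencies=no
After=local-fs-pre.target
Before=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
# The script would call e.g. sfdisk/parted to create a partition table
# and mkfs.ext4 -U PERSISTENT-UUID on the new partition.
ExecStart=/usr/libexec/setup-persistent-disk

[Install]
WantedBy=sysinit.target
```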
add here: GitHub - Kicksecure/grub-live: optional grub live boot menu entry as second option https://www.kicksecure.com/wiki/Grub-live

Don’t. Because: not clean, no API, and no /etc/fstab.d (search term) exists yet. Mount using a systemd unit file instead.

Also using systemd unit file.
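A sketch of what such a mount unit might look like; the PERSISTENT-UUID placeholder is an assumption, and the unit file name must match the systemd-escaped mount path (var-lib-tor.mount for /var/lib/tor):

```ini
# /lib/systemd/system/var-lib-tor.mount (hypothetical, not shipped anywhere)
[Unit]
Description=Selectively persistent /var/lib/tor
Before=tor.service

[Mount]
# Placeholder for the consistent UUID created at image build time.
What=/dev/disk/by-uuid/PERSISTENT-UUID
Where=/var/lib/tor
Type=auto
Options=defaults,errors=remount-ro

[Install]
WantedBy=multi-user.target
```

A oneshot unit or a tmpfiles.d entry could then restore the debian-tor:debian-tor ownership after mounting.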

Turns out we had it all along:

https://www.whonix.org/wiki/Dev/Multiple_Whonix-Workstation#cite_note-1

Searched wiki for “dns caching”

Thank you. The disk creation steps for VirtualBox and KVM are easy enough.

Do I want to add this to grub-live? GRUB will not be booting from the new secondary disk. I think I need to find different code.

Indeed, GRUB won’t boot from the second disk, but selective persistence fits into the grub-live package since it’s a related feature? Scripts / systemd unit files could be added there?

This could even be considered as a default feature if it’s good enough.

Drop-in snippets in a .d folder could define which files/folders get selective persistence.

(The config snippet to enable the /var/lib/tor folder would belong into the anon-gw-anonymizer-config package.)
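Such a drop-in snippet might look like this; the .d path and one-path-per-line format are entirely hypothetical, since no such mechanism exists yet:

```
# /etc/selective-persistence.d/30_tor.conf
# (hypothetical path and format, shipped by anon-gw-anonymizer-config)
# One path per line; each listed folder would be mounted from the
# persistent disk even in live mode.
/var/lib/tor
```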


Awesome. But is it related? Is DNS cached in RAM only, or is DNS or something else important cached in the /var/lib/tor folder?

This quote from the Whonix Live wiki page indicates so:

This should eliminate any Tor-related, cached data like DNS requests that could leave traces about web activity.

That statement could still use a reference.

Thank you. I’ll explore and let you know my progress. As soon as I get comfortable with the layout of the various Whonix files, I do not believe this will be too hard. And I am probably going to regret saying that. :wink:


It’s unclear anywhere how this is implemented exactly, but it’s safer to use a blanket amnesic state instead of playing whack-a-mole with whatever Tor state info (besides DNS) is being cached and where.


Not too hard to research. Put /var/lib/tor under git version control. Or copy the folder. Use/reboot for a while. Diff.

Maybe written state isn’t “much”. Disk avoidance is a goal of the Tor Browser Bundle (TBB). There’s been a technical paper on TBB disk avoidance violations.


https://www.researchgate.net/publication/332004753_Forensic_Analysis_of_Tor_Browser_A_Case_Study_for_Privacy_and_Anonymity_on_the_Web

This is a problem because? How does it affect the fingerprint, where does the attacker/fingerprinter sit, what does he do?

Naturally, if you include persistence, you add persistence for malware too. This not only affects the tor process but also possible lower-level bugs in filesystems and the like.
Persistent overlays are a thing and are possible with the Debian-integrated $livestuff, but at least for Tails this broke once in a while (maybe not that much of a problem if you just use it for Tor). I’m not sure if, out of the box, this could be used for just one program.
As an alternative, you could just add another live mode indicator which after 3 months or so spits out a red warning so the user reboots in persistence mode.


There is no DNS cached in folder /var/lib/tor/.

ls -la /var/lib/tor/
drwx--S---  5 debian-tor debian-tor    4096 Dec 17 10:33 .
drwxr-xr-x 42 root       root          4096 Dec 12 12:27 ..
-rw-------  1 debian-tor debian-tor   20442 Dec 12 12:06 cached-certs
-rw-------  1 debian-tor debian-tor 2081457 Dec 17 10:27 cached-microdesc-consensus
-rw-------  1 debian-tor debian-tor 3734938 Dec 12 12:05 cached-microdescs
-rw-------  1 debian-tor debian-tor 1295936 Dec 17 10:28 cached-microdescs.new
-rw-------  1 debian-tor debian-tor       0 Dec 17 10:27 lock
-rw-------  1 debian-tor debian-tor   13394 Dec 17 10:33 state

From an anti-forensics point of view this leaks times when Tor was used.
Even if we cleared the file access times, it would likely be possible to deduce the times when Tor was run from the files in that folder (cached-microdescs…).


When Tor parses the folder /var/lib/tor/, malware would need to specifically craft a file there to exploit a hypothetical vulnerability in Tor’s /var/lib/tor/ parsing code.

Good point.

Reference would be good.

This seems hard to time. We’d still miss the exact moment when it’s time for Tor to change entry guards. Each time the system boots after Tor thinks it is time to change entry guards, Tor will pick a random entry guard.

[1] We would need some method to ask Tor “is it time to cycle Tor entry guards” or another mechanism to detect that. And if the “answer” is “yes”, in such cases, do not start Tor and show a systray icon, popup and/or whonixcheck notification to inform the user about this.

Selective persistence does not seem to be the answer to implement persistent Tor entry guards in live mode. [1] would be better.

[1] would result in using persistent Tor entry guards. This would play well with Tor’s regular schedule to cycle Tor entry guards. It would not have any disadvantages related to malware vs live mode. However, it would still be fingerprintable at the internet service provider (ISP) level, because such clients would download microdescriptors more often than clients who always booted into persistent mode, since in live mode these would not be cached on disk.


Related (not to selective persistence but live mode generally):
Restrict Hardware Information to Root - Testers Wanted! - #14 by Patrick

Perhaps a new custom script is in order? It could be a fairly “dumb” one based on a “guardrotate” style file.

  1. On first gateway boot and Tor configuration/connection, create an empty “guardrotate” file.

  2. Set the script to run pre-Tor-connectivity going forward to check for the presence of the guardrotate file and its creation date.
    2a. If the date is too old, don’t start Tor and inform the user. Prompt the user whether they want to start Tor anyway, restart in persistent mode, erase the state file, whatever. If new guard mode is selected at this point, create a new “guardrotate” file.
    2b. If the date is fresh, continue as usual silently.

A somewhat different version could probably check the state file for which guards have an idx number on creation, log those, and then prompt the user when what is contained in the state file differs from what is logged. The main issue here will be that a new guard node will initially be selected in live mode, with a different one being picked again in a subsequent persistent boot.


It cannot be based on file creation time. That is too inaccurate. Otherwise, this still applies:

It needs to parse the same file that Tor is parsing. Probably /var/lib/tor/state. Ideally it would use the same code as Tor is using, because when Tor Project changes the guard rotation, we might miss it and the script might not be updated in time. But reusing the same code as Tor may not be feasible. We could submit a feature request against Tor Project to add a command line option to /usr/bin/tor that only outputs “time for guard cycle” vs “not time for guard cycle”.

But even if we had this, it would still not be perfect. Let’s say the script said “not time yet to change entry guard” and assume that judgement was correct at the time (one or two days too early). Then assume Whonix-Gateway live mode keeps running. Tor would cycle to another random entry guard when the time has come. Once the user reboots, this entry guard selection by Tor would be lost due to live mode. Then the script would notice that and recommend rebooting into persistent mode. Then yet another entry guard would be chosen. This might however still reduce many superfluous entry guard cycles to a single superfluous entry guard cycle.
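A rough sketch of what such a state-file parser might look like. It assumes the Guard lines in /var/lib/tor/state keep their current key=value format (sampled_on, confirmed_idx, …) and uses an assumed 120-day guard lifetime, which is exactly the fragility concern raised above, since Tor’s real rotation logic is more involved and may change:

```python
from datetime import datetime, timedelta

# Assumed default guard lifetime; Tor's actual rotation logic is more
# involved and this constant may not match a given Tor version.
GUARD_LIFETIME = timedelta(days=120)

def guards_due_for_rotation(state_text, now):
    """Return nicknames of confirmed guards whose sampled_on date is older
    than the assumed guard lifetime, based on Guard lines in Tor's state
    file (format assumption, may change between Tor versions)."""
    due = []
    for line in state_text.splitlines():
        if not line.startswith("Guard "):
            continue
        fields = dict(
            kv.split("=", 1) for kv in line.split()[1:] if "=" in kv
        )
        if "confirmed_idx" not in fields:
            continue  # only confirmed (actually used) guards matter
        sampled = datetime.strptime(fields["sampled_on"], "%Y-%m-%dT%H:%M:%S")
        if now - sampled > GUARD_LIFETIME:
            due.append(fields.get("nickname", fields.get("rsa_id", "?")))
    return due
```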


A one-time superfluous entry guard cycle is probably preferable to an unknown, potentially regular number of entry guard cycles. I like the idea of Tor incorporating a feature for this where a notification for a guard cycle is given. It could address both real-time use and “on boot” use.


I can’t find what my guard (just 1, right?) is in /var/lib/tor/state.

There are a lot of Guards listed in /var/lib/tor/state. These are probably just potential entry guards. I also found the fingerprint of the 1 entry guard that I am using; it is easily visible in nyx. The entry of a “potential guard” vs “my guard” does not look different from other “potential guards”.

But I cannot figure out what marks my guard in /var/lib/tor/state as “my guard”. Perhaps that is stored in a different location that I don’t know about. Before finding that out, there is no way to parse/automate anything.


This info is gathered by querying Tor, optionally via the Python stem API:

https://stem.torproject.org/api/descriptor/router_status_entry.html


It appears that the 3 guards set as your defaults will contain an additional “confirmed_idx” variable in the state file.

confirmed_idx=0
confirmed_idx=1
confirmed_idx=2
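Extracting which guards are “my guards” could then be as simple as filtering Guard lines for a confirmed_idx field. Again, this assumes the key=value format Tor currently writes, which is not a stable API:

```python
def confirmed_guards(state_text):
    """Return {confirmed_idx: nickname} for Guard lines in Tor's state
    file that carry a confirmed_idx field, i.e. the guards actually in
    use (state file format is an assumption and may change)."""
    result = {}
    for line in state_text.splitlines():
        if not line.startswith("Guard "):
            continue
        fields = dict(
            kv.split("=", 1) for kv in line.split()[1:] if "=" in kv
        )
        if "confirmed_idx" in fields:
            result[int(fields["confirmed_idx"])] = fields.get("nickname", "")
    return result
```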

CONFIRMED_GUARDS is a variable thrown about in various Tor-related discussions, for example:
