
Whonix live mode / amnesia / amnesic / non-persistent / anti-forensics

Depends on what you want. Persistence always carries the risk that some malware makes use of it, or that it makes you more fingerprintable (though probably not directly through the browser), but most users want to save data (I guess) …


Anti-Forensics Precautions

Introduction

At the moment there is only one advantage of this configuration compared to running grub-live on the host – achieving selective amnesia for some virtual machines (VMs) while others remain persistent. This may not be necessary in the future if grub-live development continues to advance and it allows for selective exemption of host directories. This section is a work in progress and not exhaustive.

“This may not be necessary in the future if grub-live development continues to advance and it allows for selective exemption of host directories.”

I don’t think this is being worked on? Hence removed.

Split / simplified documentation.


A post was split to a new topic: remove GNU/Linux string from boot grub menu

change

Whonix Live-mode GNU/Linux

to

LIVE mode GNU/Linux

This is for better usability in preparation for multiple boot modes for better security: persistent + root | persistent + noroot | live + root | live + noroot.

related:

so, the issue of various potential fingerprinting problems with running the gateway in live mode has come up. this is due to a couple of things:

  1. guard rotation when the guard expires. in live mode, the user may not keep a consistent new guard node until a dist-upgrade is applied in persistent mode and the change is kept.

  2. extra downloads of microdescs and such.

so, an easy workaround might be enabling a second storage disk for permanent storage. as the current live config works, it only affects /dev/sda1 or /dev/vda1, which is good. a secondary disk could be included with the installer packages, which is not set to read only. then, /var/lib/tor could be symlinked to a directory on /dev/sdb1 or /dev/vdb1. this would allow all of the binaries on the gateway to be protected from malware by the live mode, while allowing tor to write updates to the guard nodes and microdescs on the persistent drive seamlessly. if i’m missing a directory, please let me know. i’ll likely include this as custom instructions in the guide i work on. but, it might make sense to try to incorporate this into the whonix image builds.
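the core of the idea can be sketched with plain shell. this is only an illustration of the symlink mechanics using made-up throwaway paths, not the actual gateway setup; on a real gateway, "persist" would be the mounted second disk and "state" would be /var/lib/tor, with the steps run as root while tor is stopped:

```shell
# illustration of the symlink approach with placeholder paths
workdir=$(mktemp -d)
mkdir -p "$workdir/persist" "$workdir/state"
echo "guard data" > "$workdir/state/state"

# one-time: seed the persistent location with the current tor state
cp -a "$workdir/state" "$workdir/persist/tor"

# replace the volatile directory with a symlink to persistent storage
rm -rf "$workdir/state"
ln -s "$workdir/persist/tor" "$workdir/state"

# writes through the symlink now land on the persistent side
echo "new guard" >> "$workdir/state/state"
cat "$workdir/persist/tor/state"
```

after a reboot in live mode, everything outside the persistent disk is discarded, but anything written through the symlink (state file, consensus, microdescriptors) survives.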

With enhancements like tirdad we will stand out on the network, but it’s worth it. However, the guard pinning is important.

Interesting. Selective persistence. Patches welcome.

However, I am wondering if we’re “halfway” reducing the gain we get from live mode by doing that.

Quote https://www.whonix.org/wiki/Whonix_Live#grub-live_on_Whonix-Gateway_.E2.84.A2

This should eliminate any Tor-related, cached data like DNS requests that could leave traces about web activity.

There was also something about cached DNS?

Can you find the reference for that? @HulaHoop I can’t see it in the wiki. I remember you asked on the tor-talk mailing list (or tor-dev?) and then someone explained that.


i don’t believe the /var/lib/tor/ directory contains any cached dns requests. it contains the consensus/microdescriptor files, the state file, the lock file, and a keys directory. the problems listed for running the gateway in live mode were:

  1. Tor starts -> connects with old entry guard (bug) -> notices any entry guard change is due -> changes entry guard -> connects: fingerprintable

in the current default configuration, once a guard node has naturally expired, a user in “live mode” will arguably get a new entry guard with every boot until such time that they are inclined to use persistent mode for the purpose of applying an os update. it will require manual vigilance for a user to watch their guard node life span if they want to prevent the above scenario.

if a small persistent drive is mounted to /var/lib/tor, the above problem is avoidable because changes to the guard node will survive a reboot and will not require any manual vigilance of the state file.

  2. Tor consensus (since non-persistent) gets downloaded differently (more often, on reboot) compared to users who use persistence.

mounting /var/lib/tor to a persistent drive also addresses this issue, since the consensus and microdescriptor files are stored in the same directory and will also survive a reboot, thus keeping them current, with arguably no distinction from any other tor user in this regard.

unless i am misunderstanding something, this should not affect the flushing of dns related queries that involve personal networking use on a reboot in live mode. absent an exploit that involves persistence in the /var/lib/tor/ directory, i do not believe this reduces the gains of using live mode. it arguably enhances it.

Here it is:
https://lists.torproject.org/pipermail/tor-dev/2016-November/011636.html

Where do I add it?

Not worth the hassle of implementing when you can boot initially into persistent mode and then use live mode from there.

the scenario discussed is this.

  1. user installs whonix gateway and boots for first time in persistence mode.
  2. user boots into live mode going forward. user only boots in persistence mode when os updates are available. this keeps the vm in an arguably cleaner state, assuming an exploit isn’t introduced during an os update.
  3. user continues to boot in live mode. however, the ttl of the entry/guard node has expired. as a result, new entry/guard nodes are selected on multiple boots for an unknown length of time.
  4. an os upgrade is required. user boots into persistent mode and applies the updates and a new entry guard is selected which will be used going forward in live mode.

enabling a means of persistence by mounting /var/lib/tor on an additional writable virtual drive prevents the possible fingerprinting issues that come with point 3 above. it also addresses the potential fingerprinting issues involving the extra downloading of descriptors. all of this happens without any need for manual user intervention selectively choosing to boot in persistence mode.

it’s possible that this doesn’t fix a big problem, given the frequency with which debian updates occur, assuming that a user is regularly updating their vm images when updates are available. thus, if this creates a problem with initial builds for distribution, it may not be worth building into them. if that’s the case, i could probably cobble together some instructions for the wiki as optional post-installation steps, if anyone thinks it’s worthwhile and this is not something that would create other problems we haven’t considered yet.

Agreed this is a drawback, if you know how to submit a patch it would be awesome.

Awesome.
here ?: https://www.whonix.org/wiki/Whonix_Live#grub-live_on_Whonix-Gateway_.E2.84.A2

and/or on the /Tor page (mention, link)?

i will look into it. i am not familiar enough with the build source code to know where this would go. correct me if my assumptions are wrong. i’m thinking about this in regards to virtualbox and kvm. steps required:

  1. create additional drive for virtualbox or kvm with consistent uuid.
  2. modify /etc/fstab to contain

/dev/disk/by-uuid/[persistent uuid] /var/lib/tor auto defaults,errors=remount-ro 0 2

  3. ensure /var/lib/tor is still owned by debian-tor:debian-tor
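the steps above can be sketched as follows. the uuid is a placeholder (the real one comes from blkid on the new disk), and the step 1 disk-creation commands are shown only as comments since they need the virtualbox/kvm host tooling:

```shell
# step 1 (host side, not run here; needs VirtualBox or KVM tooling):
#   VBoxManage createmedium disk --filename tor-persist.vdi --size 100
#   qemu-img create -f qcow2 tor-persist.qcow2 100M

# step 2: build the /etc/fstab line from the disk uuid.
# PLACEHOLDER-UUID stands in for: blkid -s UUID -o value /dev/vdb1
uuid="PLACEHOLDER-UUID"
fstab_line="/dev/disk/by-uuid/$uuid /var/lib/tor auto defaults,errors=remount-ro 0 2"
printf '%s\n' "$fstab_line"

# step 3 (after mounting, as root):
#   chown -R debian-tor:debian-tor /var/lib/tor
```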

vbox: add here https://github.com/Whonix/Whonix/blob/master/build-steps.d/2600_create-vbox-vm

add here: https://github.com/Whonix/whonix-libvirt/tree/master/usr/share/whonix-libvirt/xml

Use a script with command line tools to create a partition table and format the disk at first boot, started by a systemd unit file.
add here: https://github.com/Whonix/grub-live
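A sketch of what such a first-boot unit could look like. The unit name, marker file, device node, and partitioning commands are all illustrative assumptions, not existing Whonix code:

```ini
# /etc/systemd/system/tor-persist-format.service (hypothetical)
[Unit]
Description=Partition and format the persistent Tor state disk on first boot
# marker file makes this a one-time action
ConditionPathExists=!/var/lib/tor-persist-formatted
Before=tor@default.service

[Service]
Type=oneshot
# /dev/vdb is an assumption; adjust for the actual VM configuration
# (%% is a literal % in systemd unit files)
ExecStart=/bin/sh -c 'parted -s /dev/vdb mklabel gpt mkpart primary ext4 0%% 100%% && mkfs.ext4 -q /dev/vdb1 && touch /var/lib/tor-persist-formatted'

[Install]
WantedBy=multi-user.target
```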

Don’t. Because … not clean, no API, no /etc/fstab.d (search term) yet. Mount using a systemd unit file instead.

Also using systemd unit file.
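For illustration, a systemd mount unit replacing the /etc/fstab entry could look like the following. The UUID is a placeholder and the whole unit is a sketch, not shipped Whonix code; note that the file name must encode the mount path (compare `systemd-escape -p /var/lib/tor`):

```ini
# /etc/systemd/system/var-lib-tor.mount (hypothetical sketch)
[Unit]
Description=Persistent Tor state directory
Before=tor@default.service

[Mount]
# PLACEHOLDER-UUID: substitute the uuid blkid reports for the second disk
What=/dev/disk/by-uuid/PLACEHOLDER-UUID
Where=/var/lib/tor
Type=auto
Options=defaults,errors=remount-ro

[Install]
WantedBy=multi-user.target
```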

Turns out we had it all along:

https://www.whonix.org/wiki/Dev/Multiple_Whonix-Workstation#cite_note-1

Searched wiki for “dns caching”

thank you. the disk creation steps for virtualbox and kvm are easy enough.

do i want to add this with grub-live? grub will not be booting from the new secondary disk. i think i need to find different code.
