Whonix live mode / amnesia / amnesic / non-persistent / anti-forensics

Please critically review: grub-live: Boot existing Host Operating System or VM into Live Mode

Could you please test for anti-forensics as per Frequently Asked Questions - Whonix ™ FAQ?

Tails documentation comments that swap may be the biggest threat to anti-forensics on Linux when running in a VM:

https://tails.boum.org/doc/advanced_topics/virtualization/index.en.html#security


Choices for dealing with swap:

Linux does use swapping despite having apparently “free” memory. The kernel tends to swap out long-inactive and memory-consuming processes. This frees up RAM for caches, thus improving responsiveness.

vm.swappiness = 0 does not completely prevent swapping.

Turning swap off for the whole system is not recommended because hitting the hard RAM limit might cause a system crash. However, it may be worth it for some use cases. It can be done by running sudo swapoff -a and rebooting.
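For reference, fully disabling swap could be sketched like this on a Debian-based host (a hedged sketch, requires root; fstab layout and unit names vary, so verify before relying on it):

```shell
# Sketch: fully disable swap on a Debian-based host. Requires root;
# fstab layout and unit names vary, verify before relying on this.
sudo swapoff -a                                  # stop all active swap now
sudo sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab    # keep it off across reboots
sudo systemctl mask swap.target                  # stop systemd activating swap units
swapon --show                                    # verify: output should be empty
```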

An alternative, libvirt-only solution is to set guest memory pages to be locked:

<memoryBacking><locked/></memoryBacking>

When set and supported by the hypervisor, memory pages belonging to the domain will be locked in host’s memory and the host will not be allowed to swap them out, which might be required for some workloads such as real-time. For QEMU/KVM guests, the memory used by the QEMU process itself will be locked too: unlike guest memory, this is an amount libvirt has no way of figuring out in advance, so it has to remove the limit on locked memory altogether. Thus, enabling this option opens up to a potential security risk: the host will be unable to reclaim the locked memory back from the guest when it’s running out of memory, which means a malicious guest allocating large amounts of locked memory could cause a denial-of-service attack on the host. Because of this, using this option is discouraged unless your workload demands it; even then, it’s highly recommended to set a hard_limit (see memory tuning) on memory allocation suitable for the specific environment at the same time to mitigate the risks described above.
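Following that advice, pairing &lt;locked/&gt; with a hard_limit from memory tuning might look like this in the domain XML (the 4 GiB value is illustrative, not a recommendation):

```xml
<memoryBacking>
  <locked/>
</memoryBacking>
<memtune>
  <!-- illustrative 4 GiB cap; tune per workload -->
  <hard_limit unit='KiB'>4194304</hard_limit>
</memtune>
```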

Which of these techniques should be recommended on the wiki?


Offtopic:

Should we recommend users run the live package on the GW too? I imagine there shouldn’t be any practical dangers after a guard node is set. However, things like cached DNS requests or whatever else should be gone safely.


Where does ^ researched info go on the wiki? Let’s decide on that after we agree on what the best solution is.


Quote Tails - Design: specification and implementation

Host system disks and partitions

Tails takes care not to use any filesystem that might exist on the host machine hard drive, unless explicitly told to do so by the user. The Debian Live persistence feature is disabled by passing nopersistence over the kernel command line to live-boot.

Added to grub-live - boot an existing Host OS or VM into Live Mode

Should we also set the kernel parameter nopersistence? It’s a live-boot feature, not a Linux kernel or grub feature. grub-live depends on live-boot too, but I don’t know if it makes a difference for grub-live.

I think that setting is non-persistent, i.e. lost after reboot. Affects current session only.

Disabling swap permanently may involve changing some config file, perhaps /etc/fstab, might require a systemd unit file drop-in.

In case of grub-live on the host: Host operating system specific.

Also sudo apt-get purge swap-file-creator (if someone installed it or still has it installed after [multiple] release upgrades).

Don’t know yet.

grub-live on host: When we use grub-live currently - is swap even in use or not?

Whonix with grub-live: Is there even a swap partition or swap file by default in Non-Qubes-Whonix 15?
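Whether swap is in use at all can be checked from a running system like this (standard util-linux tooling, no root needed):

```shell
# Check whether any swap device or file is currently active.
# Empty output from both commands means no swap is in use.
cat /proc/swaps
swapon --show

# Also look for configured-but-inactive swap entries.
grep -v '^#' /etc/fstab | grep swap || echo "no swap entries in fstab"
```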

Possible internet service provider fingerprinting risk for such users:
Depends on how the Tor guard changing process works. Tor might connect with the old entry guard (which was supposed to be changed) since the previous change was not stored on disk. Depends on how that is implemented and whether there are any bugs.

  • Tor starts → notices any entry guard change is due → changes entry guard → connects: great
  • Tor starts → connects with old entry guard (bug) → notices any entry guard change is due → changes entry guard → connects: fingerprintable

Internet service provider fingerprinting for such users:
The Tor consensus (since it is non-persistent) gets downloaded differently (more often, on each reboot) compared to users who use persistence.

Related:

^ in our case it wouldn’t be “completely non-persistent” but still different (caching an old version and then redownloading each time booted in live mode).

Related:
Anonymity Operating System Comparison - Whonix vs Tails vs Tor Browser Bundle

Tails disables removable drives auto-mounting.

Quote Tails - Design: specification and implementation

Removable drives auto-mounting is disabled in Tails 0.7 and newer.

https://git-tails.immerda.ch/tails/plain/config/chroot_local-includes/etc/dconf/db/local.d/00_Tails_defaults

https://git-tails.immerda.ch/tails/plain/config/chroot_local-includes/etc/dconf/db/local.d/00_Tails_defaults contains config for GNOME only, which is OK in context of Tails since Tails’ default desktop is GNOME and others are unsupported.

Added to grub-live - boot an existing Host OS or VM into Live Mode

I wonder if grub-live should disable removable drives auto-mounting too? Either:

  • By default, if grub-live is installed: might be easy to implement, though XFCE only. Supporting this for all kinds of host desktop environments and package selections might be difficult.
  • Same as above, but only when using live boot option in grub boot menu: might be more difficult to implement. (Some code, if live boot, drop disable auto mount config snippet; if persistent boot, delete auto mount config snippet.)
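For an XFCE host, the first option might be sketched like this (an assumption-laden sketch: it presumes thunar-volman handles auto-mounting, and the property names should be verified against the installed version):

```shell
# Sketch: disable removable drive/media auto-mounting under XFCE.
# Assumes thunar-volman handles auto-mounting; property names are
# assumptions, list the real ones with: xfconf-query -c thunar-volman -l
xfconf-query -c thunar-volman -p /automount-drives/enabled -n -t bool -s false
xfconf-query -c thunar-volman -p /automount-media/enabled -n -t bool -s false
```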

I am looking at Tails design document and trying to figure out what Tails does in order to implement amnesia / non-persistence / anti-forensics so these features can be implemented in grub-live (or where appropriate) too.

This is what I found:


  1. Debian live based

  2. Host system disks and partitions

Tails takes care not to use any filesystem that might exist on the host machine hard drive, unless explicitly told to do so by the user. The Debian Live persistence feature is disabled by passing nopersistence over the kernel command line to live-boot.


  3. Filesystems stored on removable devices

Removable drives auto-mounting is disabled in Tails 0.7 and newer.


  4. wiperam

Host system RAM

In order to protect against memory recovery such as cold boot attack, most of the system RAM is overwritten when Tails is being shutdown or when the boot medium is physically removed. Also, memory allocated to processes is erased upon process termination.

(related: Is RAM Wipe possible inside Whonix? Cold Boot Attack Defense)


  5. swap

Host system swap

Tails takes care not to use any swap filesystem that might exist on the host machine hard drive. Most of this is done at build time: the /sbin/swapon binary is replaced by a fake no-op script, and live-boot’s swapon option is not set.
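One way to replicate the no-op /sbin/swapon trick on a plain Debian system could be dpkg-divert, which keeps the change across package upgrades (a sketch, not what Tails ships verbatim; requires root):

```shell
# One way to replicate Tails' no-op swapon on Debian (not what Tails
# ships verbatim). dpkg-divert keeps the change across package upgrades.
sudo dpkg-divert --rename --add /sbin/swapon
printf '#!/bin/sh\nexit 0\n' | sudo tee /sbin/swapon >/dev/null
sudo chmod 755 /sbin/swapon
# Undo: sudo rm /sbin/swapon && sudo dpkg-divert --rename --remove /sbin/swapon
```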


Did I miss any Tails amnesia features?

Could anyone please double check Tails - Design: specification and implementation and/or research further on how Tails implements non-persistence?

More likely than not it cannot hurt, until we have a good reason to remove it.

Indeed. My solution was to include it in rc.local instead of messing with the fstab, but that’s just a simple, hackish workaround, not something suitable for a Whonix Host. Though for someone wanting to run a Whonix live VM, it is doable and necessary to be certain no traces are left.

Haven’t tried it, does it disable swap when running?

What concerns me at the moment is running on a default Linux host that is non-live to be as safe as possible for a Whonix-live VM.

Nothing turns up under cat /proc/swaps

You’re right, such risks exist. It is already the case anyhow for people following the 1Guard/App advice, or those who use snapshots to avoid storing state between different WS VM trust levels. However, I think the good still exceeds the fingerprinting disadvantages. In that case - leaking destinations / cached DNS info about visited locations - could be just as bad as storing the state of the sites/webpages themselves on the disk in the WS.


Seems necessary to prevent accidental use of host file systems when booted from a stick.


Done for security against malicious devices rather than for anti-forensics:

2.7.3 Mounting of filesystems stored on removable devices

Some attacks recently put under the spotlights exploit vulnerabilities in the desktop software stack that triggers automatic mounting, display and files preview of filesystems stored on removable devices.

Looks like you got them all.

Two features unrelated to live system that seem interesting and worth looking into:

2.6.3.7 “Virtual” input system

2.7.2 (Disabling) HTTP keepalive


New section added using yesterday’s info:


I don’t think it’s a good solution at all. First, during boot the swap file gets used, and then at a non-guaranteed time swap gets disabled. By that time, encryption keys could already have leaked into it.

Yet to be checked.

Then we need to point out these fingerprinting disadvantages and let users decide.

Perhaps the perfect use (accepting fingerprinting disadvantages) would be: shut down all Whonix-Workstations, boot Whonix-Gateway into persistent mode (let Tor do its thing and update its state files: entry guards, Tor consensus), then reboot Whonix-Gateway into live mode. Only then start Whonix-Workstations. To be repeated how often?

New section added using yesterday’s info:

Partially doesn’t belong there.

OK then let’s figure out the proper way to do this even if it is a little longer and I’ll add it instead.

I think the only advantage of mitigating forensic leaks on a persistent host instead of just using grub-live is in a scenario where a user wants selective amnesia for some VMs but not all. Theoretically this can be addressed in grub-live by allowing the user to select which arbitrary directories and files they want to exempt. They could then choose the qcow2 images of other VMs to keep their state.

In fact if this is easily doable I’d rather just point people to grub-live instead of all the precautions needed to protect them on a persistent host.

Fair enough, but I think even the workaround is not enough since a user cannot predict exactly when Tor needs to renew its guard. It would increase complexity without guaranteed results.


Besides swap there is the problem of disabling process memory dumping to disk.

A user has to go out of their way to configure the kernel to do this with kdump-tools on Debian:

However systemd seems to do this for all userspace processes and needs this to be explicitly disabled in its own config files because it ignores sysctl options:

https://wiki.archlinux.org/index.php/Core_dump
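Taken together, disabling core dumps on a systemd-based Debian host might look roughly like this (a sketch based on the pages above; the drop-in paths are conventional and should be verified against the installed system):

```shell
# systemd-coredump: stop storing dumps via a drop-in
# (create /etc/systemd/coredump.conf.d/disable.conf containing):
#   [Coredump]
#   Storage=none
#   ProcessSizeMax=0
#
# Kernel-wide alternative (in /etc/sysctl.d/; note systemd may install
# its own core_pattern handler, hence the drop-in above):
#   kernel.core_pattern=|/bin/false
#
# Per-shell resource limit; also enforceable via /etc/security/limits.d/:
ulimit -c 0
ulimit -c    # prints 0
```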


@Algernon since you maintain

@Algernon & @Patrick Do you think it is valuable to document this at all?

HulaHoop via Whonix Forum:

I think the only advantage of mitigating forensic leaks on a persistent host instead of just using grub-live is in a scenario where a user wants selective amnesia for some VMs but not all.

In fact if this is easily doable I’d rather just point people to
grub-live instead of all the precautions needed to protect them on a
persistent host.

@Patrick Do you think it is valuable to document this at all?

Yes.

grub-live isn’t set-and-forget just yet. Swap, among other things, is
not yet researched.

That research related to “selective amnesia for some VMs but not all”
might be useful anyhow.

  • We might end up having to implement these things (such as disabling
    swap) in grub-live or other packages as appropriate.
  • Also “not using the disk” may translate to “requires less RAM, less
    likely to run out of RAM”.

Previous research by you on Encrypted VM Images
seems helpful here too. Perhaps we do have to disable core dumps or
something. Perhaps these would leak to the disk. It depends on their
storage locations (not yet researched) and if these folders are covered
by grub-live.

Theoretically this can be addressed in grub-live by allowing the user to select which arbitrary directories and files they want to exempt. Which they can then choose the qcow2 of other VMs to keep their state.

In fact if this is easily doable […]

grub-live is a very simple (as in very few lines of code) implementation.

(I don’t indicate it was simple to invent. It took years until a
contributor, namely Algernon, stepped up and implemented it.)

In essence only this file

  • update-grub.

No idea if selective persistence of specific folders (even when booting
into live mode) will be a future feature. First of all, we have to
research how to make (part of) the disk writable after booting into
live mode. It may be possible to remount (some folders) as read/write.

Fair enough, but I think even the workaround is not enough since a user cannot predict exactly when Tor needs to renew its guard. It would increase complexity without guaranteed results.

Results are guaranteed; just the entry guard change happens a day later
than planned. Better than never at all.


Besides swap there is the problem of disabling process memory dumping to disk.

A user has to go out of their way to configure the kernel to do this with kdump-tools on Debian:

Installing and Configuring KDump on Debian Jessie | www.bentasker.co.uk

However systemd seems to do this for all userspace processes and needs this to be explicitly disabled in its own config files because it ignores sysctl options:

Core dump - ArchWiki

Good points.

My hope that hosts booted into persistent mode with selected VMs in
live mode will end up with amnesic VMs is rather low. Specifically if
we try to create a package for all Debian users gathering all of these
configs: there are too many kernel options, init systems, packages,
application-specific dumps and whatnot to claim to have all that under
control.

Much more realistic is to boot into live mode and then perhaps
optionally let users selectively configure a few persistent folders.

grub-live may never be the perfect amnesic solution for all of Debian
users: too many init systems, non-grub boot loaders, many more packages
than we can know.

The only tested-by-us-sometimes amnesic system might be a Debian
derivative that was built by us (because then we have the bootloader,
init system and package selection under control, and it’s not “do
whatever you want, such as changing the bootloader”; what is supported
and unsupported has to be defined).


Another related Tails feature: emergency shutdown on USB removal.

(added to grub-live: Boot existing Host Operating System or VM into Live Mode)

Sure.

It is recommended to use write protection at the hypervisor (or host) level, so you can’t make specific folders on those disks writable.
You could maybe use different partitions and mount some ro and others rw, but this would complicate the whole setup, and the disk would still need to be writable in general.
Imho you can only do this either via some shared folders or by attaching some other writable disk.


From grub-live comparison wiki:

protects against malware persistence on hard drive after malware compromise

Hm. Even with Tails, advanced malware can still flash the underlying firmware of the machine and persist. Perhaps the ISO written to a read-only disc will stay clean (if the malware doesn’t figure out how to write itself quietly to it), but USB as a medium is susceptible to malware modification.

Somewhat related:
Assuming no hypervisor breakouts and adequate anti-forensics on the host, can malware silently disable grub-live in the VM during a session to leave traces? What about ro-mode-init (since it allows turning a VM image read-only from outside the VM)?