I’d like to continue our discussion of VeraCrypt ‘deniable’ encryption for Whonix users in this new dedicated thread, instead of my other one.
To summarize, the threat model under which I argue VeraCrypt’s deniable encryption can effectively protect the user is this: most Western governments now have key disclosure laws that can legally compel anyone in those jurisdictions to hand over their private data, provided it can be technically proven that the data is meaningful (i.e. non-random) encrypted data.
Under this threat model, people in these jurisdictions no longer have even basic privacy for the data on their devices’ OS hard drives, for example when entering such countries at airports, unless they use a securely amnesic OS like Tails and store the data not on the Tails disk itself (Tails currently offers only non-deniable encryption of its persistent storage) but on a separate, deniably encrypted volume created in the manner described below.
The current Tails solution is not convenient for persistent (normal) computing, so I consider Whonix a great central project to address this, since its persistent VM design uses large virtual disk files which can themselves be placed inside a VeraCrypt volume.
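To make that workflow concrete, here is a minimal sketch of what it could look like in practice, driven from Python purely for illustration. Everything in it is a placeholder or an assumption on my part: the device path, mount point and VM name are hypothetical, it assumes the VeraCrypt command-line binary and VirtualBox’s VBoxManage are installed, and it assumes the VM was registered with its virtual disk files stored on the mounted volume. Double-check the exact VeraCrypt options against the official documentation.

```python
#!/usr/bin/env python3
"""Rough sketch (not Whonix's own tooling): unlock a VeraCrypt volume, then
start a VM whose virtual disk files live on it. All paths/names are placeholders."""
import subprocess

VOLUME = "/dev/sdb1"            # hypothetical partition holding the VeraCrypt volume
MOUNTPOINT = "/mnt/private"     # hypothetical mount point for the decrypted filesystem
VM_NAME = "Whonix-Workstation"  # hypothetical name of a VM registered from the mounted volume

# Mount in text mode; VeraCrypt prompts for the password interactively.
subprocess.run(["veracrypt", "--text", VOLUME, MOUNTPOINT], check=True)

# Start the VM whose disk images are stored under MOUNTPOINT.
subprocess.run(["VBoxManage", "startvm", VM_NAME], check=True)
```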
I will define this threat model to include the following provisos:
The legal doctrine of ‘innocent until proven guilty’ is honored (meaning that no ‘thought policing’ takes place).
The enemy never applies or threatens direct physical harm as a consequence of any aspect of the user’s usage of VeraCrypt (no torture).
The data is never physically accessed by the enemy while the device is turned on and the data unlocked.
The data is not betrayed by forensic evidence of its existence such as plausible logs or other user data in a non-amnesic host OS (all of which can be forcibly decrypted under the same threat model).
Physically intrusive forms of targeted surveillance like physical computer implants or cameras/microphones/sensors nearby do not take place.
(This list may expand with other examples as necessary.)
In the above threat model, an effective VeraCrypt volume would have the following two characteristics (or layers):
The volume takes the form of a partitionless whole disk, or of a whole disk partition. When a whole disk or whole partition contains headerless, signatureless VeraCrypt data, then as I currently understand it, the data is (a) technically indistinguishable from meaningless random data, and (b) plausibly explainable as meaningless random data, for example as a partition that formerly held a dual-boot OS and was wiped by writing random data across the whole partition, or a whole disk wiped the same way; such wiping methods are commonly available with tools under every common OS. This first layer allows the user to effectively avoid being compelled by key disclosure laws in the first place.
Make it a ‘hidden volume’ (rather than a standard volume). With this VeraCrypt option you first create a decoy ‘outer volume’ with its own password that unlocks only the outer volume, and then a ‘hidden volume’ inside it with a different password that unlocks only the hidden volume. “Free space on any VeraCrypt volume is always filled with random data when the volume is created and no part of the (dismounted) hidden volume can be distinguished from random data”, making the hidden volume technically deniable even if the outer volume is unlocked. You can therefore hand over the outer volume’s password if forced to, and still keep your real data private. This second layer allows the user to satisfy key disclosure laws even if evidence proves their disk or partition isn’t random. (A rough command-line sketch follows after this list.)
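As a rough illustration of the second layer (and of creating the volume directly on a raw partition, as per the first layer), here is a sketch using VeraCrypt’s text-mode interface, driven from Python. I believe the flow is roughly two create operations, first the outer volume and then the hidden one, but the device path, hidden volume size, filesystem choices and exact option spellings below are my assumptions; verify them against `veracrypt --text --help` and the official documentation before trying this on real hardware.

```python
#!/usr/bin/env python3
"""Rough sketch: create a decoy outer volume plus a hidden volume directly on a
raw partition via the VeraCrypt command-line interface. Device path, size and
option values are placeholders; VeraCrypt prompts for each volume's password."""
import subprocess

DEVICE = "/dev/sdb1"  # hypothetical partition to turn into a headerless VeraCrypt volume

# Step 1: the decoy outer volume, filling the whole partition.
subprocess.run([
    "veracrypt", "--text", "--create", DEVICE,
    "--volume-type=normal",
    "--encryption=AES", "--hash=sha-512",
    "--filesystem=FAT",              # FAT outer filesystem leaves room for a hidden volume
    "--random-source=/dev/urandom",
], check=True)

# Step 2: the hidden volume nested inside the outer volume's free space.
subprocess.run([
    "veracrypt", "--text", "--create", DEVICE,
    "--volume-type=hidden",
    "--size=20G",                    # hypothetical size; must fit inside the outer volume's free space
    "--encryption=AES", "--hash=sha-512",
    "--filesystem=ext4",
    "--random-source=/dev/urandom",
], check=True)
```

Letting VeraCrypt prompt for the passwords interactively (rather than passing them as options) also keeps them out of the shell history and process list.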
Under this scenario:
The enemy can’t proceed further than the second layer of VeraCrypt protection, and it is very unlikely that they will get past the first layer in the first place.
The enemy is free to operate as they currently do, but they just won’t find anything meaningful on your hard drive.
You cannot be forced to show something that the enemy has no proof exists. Without proof, it is taken not to exist.
Solutions that avoid creating plausible evidence of your data’s existence are therefore paramount.
We still have a lot to learn and document. Here are some links for people (myself included) to study:
It seems like it’s early days here at Whonix with the idea of encrypting your Whonix VMs for user protection under various threat models. I’m not pretending to be an expert myself, so we all have lots to learn and research.
What I will say is that many users in the Tor world are motivated to create their own VeraCrypt volumes using the existing official VC GUI program. I can also see that Tails’s VC volume unlocking is massively faster than VC’s own unlocking, which is good and interesting.
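I can’t say for certain why it is faster; my understanding is that Tails unlocks VeraCrypt volumes through udisks/cryptsetup (the kernel’s dm-crypt “tcrypt” support) rather than through the VeraCrypt binary, so the difference presumably lies in that code path. For anyone who wants to experiment, here is a minimal sketch of unlocking an existing volume that way, driven from Python; the device path, mapper name and mount point are placeholders, and it assumes a cryptsetup version with the --veracrypt option (1.6.7 or later).

```python
#!/usr/bin/env python3
"""Rough sketch: unlock an existing VeraCrypt volume through cryptsetup's
dm-crypt "tcrypt" support, then mount the mapped device. Paths/names are placeholders."""
import subprocess

DEVICE = "/dev/sdb1"         # hypothetical partition holding the VeraCrypt volume
MAPPER_NAME = "private"      # name for the resulting /dev/mapper/<name> device
MOUNTPOINT = "/mnt/private"  # hypothetical mount point

# cryptsetup prompts for the passphrase; --veracrypt enables VeraCrypt-mode key derivation.
subprocess.run(
    ["cryptsetup", "open", "--type", "tcrypt", "--veracrypt", DEVICE, MAPPER_NAME],
    check=True,
)
subprocess.run(["mount", f"/dev/mapper/{MAPPER_NAME}", MOUNTPOINT], check=True)
```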
Can you link me to that? I couldn’t find it. You must be referring only to zulucrypt’s implementation of VC encryption, as all of VC’s official offerings use 256-bit keys.
@Patrick: I have now looked at “TrueCrypt’s Plausible Deniability (Hidden Volumes) is Theoretically Useless” and I see it’s talking about the threat model of physical torture. For me and many other Tor users I interface with, this is NOT a threat model that applies to us. If we need to mention it as a warning, then OK, let’s do that, and I agree it’s important; we have a lot of threat models and people to think about and care for at Whonix. But it’s not the particular threat model that I’m saying VC deniability is good for (and in fact critically needed for).
OK, understood. But you’re referring once more to the harm of physical torture, right?
We don’t have an amnesic host operating system yet.
I’ve explained my position on Tails-Whonix in its dedicated thread, so its existence wouldn’t help here either.
I do not recognize this as being a popular request at this stage. It has come up almost never previously, but that could also be related to the non-existence of a Whonix host operating system.
Therefore:
This is rather theoretical at this stage (absence of a host operating system and of amnesia).
A wiki page / discussion is welcome.
Please don’t expect me to write code for this, or to be convinced that this is a priority.
I don’t want to sound harsh but please stop dressing up your request as one that all 2 million users of Tor are interested in. If you are able to provide the code with the desired level of UX as well as stability, we’ll have no problem including it. Better yet, work with upstream to spread this feature to all Linux users. Short of that you can add instructions to our wiki to document this.
Give zulucrypt a spin and you’ll see. VeraCrypt is in Debian? Is it able to handle operations on boot like LUKS?
The LUKS nuke patch has been mainlined upstream; however, it is unlikely the user will have a chance to destroy their data (destroying evidence is a crime in many jurisdictions once they have been arrested). For this reason, drives are cloned before any attempt is made to interact with them.
Understood; however, it is the user’s choice which is worse: the consequences of being accused of destroying evidence / damaging the investigation, or the charges resulting from the evidence itself (to themselves and to others).
Probably different in various jurisdictions.
Also, I can think of many cases where the user is mainly a suspect in X only, while the evidence, if revealed, would also implicate them in the more serious Y, Z, etc.
Indeed useless if that’s the case.
What about requiring a remote key in addition somehow?
No relation to evidence. A wipe passphrase could be quite handy. Imagine you want to travel or sell hardware and have (online) backups. In that case a wipe passphrase is handier than booting dban or so. Just a usability feature. A zeroed device might lead to fewer questions in many situations.