Oops. Not sure what I was thinking. But I would disagree with my former self now.
Maybe when the installer was new, I thought it was more important to have debugging information prominently and easily visible. But since the installer is very stable now, that is no longer needed.
Ok.
Hm. Upon review, I don’t like using a pipe to provide the public key. I guess that was done to avoid writing to the file system? But since we have a folder in the user’s home folder anyhow, it seems OK to write the public key there before signify uses it?
That seems better? For verification, I would trust the more common use case (not using a pipe) more.
Writing a public key to a file is a very simple process. I don’t see any relevant attack surface there.
(Excluding an already locally compromised system, which wouldn’t be a useful threat model because it’s game over anyhow.)
Having signify read a public key from a file and then perform a complex process which has relevant attack surface (the file currently being verified might be malicious and attempt to exploit signify) is a different thing.
So as a result, I’d prefer to have the public key in the local file system; signify can then use that and doesn’t need to go through any presumably less popular, less tested, more complex (presumably higher attack surface) code paths involving pipes.
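To make the preference concrete, a minimal sketch of what I mean. The variable names (`$home_folder`, `$signify_pub_key`, `$downloaded_file`) are illustrative assumptions, not the installer’s actual names, and on Debian the binary may be called `signify-openbsd`:

```bash
## Minimal sketch, not the installer's actual code; variable names are illustrative.
## "$home_folder" stands for the installer's existing per-user working folder.
signify_pub_key_file="$home_folder/derivative.pub"

## Write the embedded public key to the file system instead of piping it.
printf '%s\n' "$signify_pub_key" > "$signify_pub_key_file"

## Let signify read the key from a file, i.e. the more common, better-tested code path.
## (On Debian the binary may be named signify-openbsd.)
signify -V -p "$signify_pub_key_file" -m "$downloaded_file" -x "$downloaded_file.sig"
```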
Every command “hangs” in a shell script in the sense that commands run one at a time, so if signify hangs, the cat doesn’t matter.
Also, producing no output is different from the signify command hanging. If it doesn’t produce output, that’s an upstream fault of the signify program.
Possible, but then it does what you said: it shows output without hiding it.
It’s needed only to get the right order of log messages and to die inside check_integrity().
Possible, but it would be done with cat or read, so every command would “hang” according to your first point.
I don’t actually see the need to print log_run output anymore. It was useful for debugging in the beginning, but it doesn’t protect against anything; it’s just good-to-know information that not many users will use.
Whether the verification is successful or not is the important part.
Also, the alternative you chose should also be applicable to the checksum verification, so this:
With the previous security bug that I hotfixed just now, I think more eyes on the verification commands and their output seem worthwhile. Therefore log_run at the info level seems better?
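For clarity, here is roughly what I mean by keeping log_run at the info level. This is only a sketch; the installer’s actual log_run implementation may differ:

```bash
## Rough sketch only; the installer's real log_run may be implemented differently.
log_run() {
  local level="$1"
  shift
  ## Show the exact command line before running it, so the verification command
  ## and its output stay visible to users and end up in bug reports.
  printf '%s: executing: %s\n' "$level" "$*" >&2
  "$@"
}

## Usage example with the signify verification from above (hypothetical variables):
log_run info signify -V -p "$signify_pub_key_file" -m "$downloaded_file" -x "$downloaded_file.sig"
```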
If the distribution is unsupported: have a dedicated function to error out, such as unsupported_distribution_detected (or a better name), with a message along these lines (a rough sketch of the function follows after the message text):
- At this time, your Linux distribution is unsupported by the ${guest_pretty} Installer.
- Alternative: Check if manual installation is supported; refer to:
${url_version_domain}/wiki/VirtualBox
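A minimal sketch of such a function, assuming the installer’s existing die helper and the ${guest_pretty} / ${url_version_domain} variables; the function name, exit code, and helper are placeholders:

```bash
## Sketch only; "die", the exit code, and variable names are assumptions about existing helpers.
unsupported_distribution_detected() {
  die 1 "At this time, your Linux distribution is unsupported by the ${guest_pretty} Installer.
Alternative: Check if manual installation is supported; refer to:
${url_version_domain}/wiki/VirtualBox"
}
```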
Debian trixie (testing) support has been added just now.
Feel free to refactor/improve.
During development, I temporarily disabled building other distro suites (Debian stable etc.) in CI to save some CI time. I just must not forget to re-enable them. (Done.)
How could we allow installation on Debian testing based derivatives (such as Kali)?
Or installation on derivatives generally?
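One possible direction, purely as a sketch: read ID and ID_LIKE from /etc/os-release and treat anything Debian-like as Debian. Whether that is robust enough (and which suite to map a derivative to) is exactly the open question; the function name here is made up for illustration:

```bash
## Sketch only: detect Debian derivatives (e.g. Kali has ID=kali, ID_LIKE=debian).
detect_debian_derivative() {
  local os_release_id="" os_release_id_like=""
  if [ -r /etc/os-release ]; then
    ## Source the standard os-release file to obtain ID and ID_LIKE.
    . /etc/os-release
    os_release_id="${ID:-}"
    os_release_id_like="${ID_LIKE:-}"
  fi
  case " $os_release_id $os_release_id_like " in
    *" debian "*) printf '%s\n' "Debian or Debian derivative detected." ;;
    *)            printf '%s\n' "Not a Debian derivative." ;;
  esac
}
```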
Do you think you could add support for Fedora? The instructions don’t look terribly difficult.
We could get the gpg key using extrepo. (Similar to how the installer already gets the gpg key for the Kicksecure repository.)
The line gpgkey=https://www.virtualbox.org/download/oracle_vbox.asc looks insecure.
The /etc/yum.repos.d folder in the Qubes Fedora template shows a nicer approach.
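As a sketch of what “nicer” could look like: write a repo definition whose gpgkey= points at a locally installed key file instead of a URL, similar in spirit to the repo files in /etc/yum.repos.d of the Qubes Fedora template. The file names, key path, and baseurl below are illustrative assumptions, not verified values:

```bash
## Sketch only; file names, key path, and baseurl are assumptions for illustration.
cat > /etc/yum.repos.d/virtualbox.repo << 'EOF'
[virtualbox]
name=Fedora $releasever - $basearch - VirtualBox
baseurl=https://download.virtualbox.org/virtualbox/rpm/fedora/$releasever/$basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/oracle_vbox.asc
EOF
```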
Please allow Kali host operating systems in the Kicksecure / Whonix Linux Installer for Linux.
related:
The ban on discussing anonymous pentesting does not apply here. I see zero issues with Kicksecure or Whonix being installed on top of Kali. Unless I have forgotten my own argument (in that case, please remind me), please allow Kali hosts in the installer.
The issue in the above forum thread was that I wanted to avoid the Whonix forums morphing into a script kiddie forum where people ask how to anonymize attack tools. That did not seem like a fight or risk worth taking on top of Whonix.
A Kicksecure or Whonix VM on top of Kali doesn’t simplify any anonymous attacks, because Whonix doesn’t have a feature to anonymize the traffic of the host operating system yet at the time of writing, and even if it had, it still would not help make attack tools work over Tor. These tools would still have broken connectivity for reasons inherent to these tools (which I don’t want to elaborate on).