There are already guides to get it working with UTM if you want it for personal use.
VBox has the advantage that @Patrick can distribute the builds in a sane way without having a Mac or an ARM machine. He can use emulation to build the raw images, then use his Linux machine to convert to OVA and configure the settings in the .vbox file.
Then we can have easy macos whonix for the masses.
Less operational complexity, testing, tech debt, and attack surface. This is a fair trade for the annoying humps.
If your goal is to get things working for yourself, use UTM if it seems easier. But if your goal is to help us get an official distributed Apple Silicon version, we need to do VBox.
Wrt Ctrl-C… I was being loose with my language. I was hoping that I could "pause" the build by killing it, starting it again, and it would pick up where it left off. Not being able to do that makes for a pretty slow dev cycle… do you really rebuild everything from scratch every time you change a line?
No. You only need to successfully run derivative-maker to generate the raw builds once. Then just copy the raw files and experiment with those copies.
The raw builds are not the crux of the remaining work. The crux is converting the raw images into working VBox images.
You don't actually even need to change derivative-maker code. You need to figure out all the VBoxManage commands to get raw builds working on your Mac (or do it via the UI and document it).
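For example, something along these lines (an untested sketch; the VM name, memory/CPU sizes, and ostype ID are assumptions, check VBoxManage list ostypes on your Mac):

VBoxManage convertfromraw Whonix-Gateway-CLI-17.3.6.2.arm64.raw Whonix-Gateway.vdi --format VDI
VBoxManage createvm --name Whonix-Gateway --ostype Debian_arm64 --register
VBoxManage modifyvm Whonix-Gateway --memory 2048 --cpus 2
VBoxManage storagectl Whonix-Gateway --name SATA --add sata
VBoxManage storageattach Whonix-Gateway --storagectl SATA --port 0 --device 0 --type hdd --medium Whonix-Gateway.vdi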
FWIW I was able to build Whonix-Gateway-CLI-17.3.6.2.arm64.raw on a minimal bookworm installation with a 14GB disk and have 8.9G left over. Working on an Xfce workstation next.
I've got two raw images built (Whonix-Gateway-CLI-17.3.6.2.arm64.raw, Whonix-Workstation-Xfce-17.3.6.2.arm64.raw). I'll post details later but right now I've got an error I don't understand.
The gateway builds fine. At the end of building the workstation (derivative_maker > 5200_prepare-release > /usr/bin/dm-prepare-release), after the image already exists, the build "fails" because it can't find the gateway image. I don't know how it would even know whether the gateway was already built or which variant was used, but it does this test:
test -f /home/builder/derivative-binary/17.3.6.2/Whonix-Gateway-Xfce-17.3.6.2.arm64.raw
This fails for two reasons. First, I moved the image out of the way. Second, I built the CLI version because I don't anticipate spending any time in the gateway GUI.
It doesn't know that. It just assumes that things are built in the correct order. But…
Probably doesn't matter at this stage. Only relevant in the context of redistribution to third parties, see also dm-prepare-release. Can be safely ignored when simply building a raw image that doesn't need automatic creation of an OVA or digital signatures.
Prepare release assumes gateway CLI + workstation CLI, or gateway Xfce + workstation Xfce. A mix of CLI + Xfce is unsupported. This is in the context of unified OVA creation (gateway + workstation inside a single OVA).
It's a step-based build system. Meaning, there's no need to always re-run the full process.
See:
There's also --dry-run true, which doesn't create an image, just a text file. Useful for debugging the prepare-release step and for fast iteration.
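For example (an untested sketch, mirroring the raw build invocation with the dry-run flag appended):

derivative-maker/derivative-maker --type vm --target raw --flavor whonix-workstation-xfce --arch arm64 --repo true --tb open --dry-run true 2>&1 | tee dry-run.log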
Thank you for the clarification. Just so I understand the mechanisms better, is there any reason that a CLI/Xfce split isn't supported? Or is it just to avoid a combinatorial support problem?
You're the first one since 2019, when unified OVAs were introduced, to attempt a build from source code combining CLI + Xfce.
There's a lot of complexity to manage and several goals that aren't easily combined:
fewer build commands
simpler build commands
unified OVA
optimization of build speed and upload time
support for building singular VMs (for example, gateway only, not workstation)
multiple target/platform support (VirtualBox, KVM, raw)
creation of hashsum files, torrent files, digital signatures
fully automated builds and uploads
no manual modifications of images by hand via manual commands; everything must be source-code based
The prepare-release script does nothing when building the gateway. I designed it to run only when building the workstation because then both VMs can be combined into a single image.
During the workstation build command, there's no information on how a previous gateway build was done (CLI vs Xfce).
This is the problem:
if [ "${dist_build_type_long}" = "workstation" ]; then
    vm_multiple=true
    ## dist_build_desktop could be KDE, CLI, Xfce, RPi or CUSTOM
    vm_names_to_be_exported="Whonix-Gateway-${dist_build_desktop} Whonix-Workstation-${dist_build_desktop}"
fi
The variable dist_build_desktop is either Xfce or CLI. Mixing isn't supported. And even if it were supported, it might be complicated to document.
I will probably move that code elsewhere, to help-steps/variables. That will make it possible to add a sanity test that fails early, at the beginning of the workstation build command, instead of at the very end of the build process. The vm_names_to_be_exported variable can then even be optionally modified by the builder. This will then allow mixing CLI + Xfce builds.
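A rough sketch of what such an early check could look like (variable names are illustrative, not necessarily the actual derivative-maker ones):

for vm_name in ${vm_names_to_be_exported} ; do
    raw_image="${binary_build_folder}/${vm_name}-${version}.${architecture}.raw"
    if [ ! -f "${raw_image}" ]; then
        echo "ERROR: expected raw image missing: ${raw_image}" >&2
        exit 1
    fi
done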
Wrt complexity of mixing CLI and Xfce: That makes sense.
Wrt moving that check elsewhere: That's probably a good separation of concerns, but my sense is that very few of the people who use this will try to optimize their installs by pairing CLI and Xfce. I was just trying to keep the build simple, because X adds a ton of complexity and I knew I didn't need it for the gateway.
New topic: I have a set of detailed instructions for how to build the raw images. They're notes for myself and my intention is to add to them as I get things toward running in VirtualBox. Is there somewhere useful for me to put them or should I just keep them on my desktop?
I could mark them as "LoftyGoals notes on building Whonix for M1 that probably won't work for you unless you're a developer and even then might only be just a little useful as hints".
Current state (not formatted, edited, double-checked, or cleaned up, so I wouldn't post them this way):
PICK THINGS
root password
user password
host directory for storing downloads and artifacts
host directory for moving files between hosts and guests (preferably with no spaces)
PREPARE HOST
create a directory for downloads and artifacts
create a directory for moving files between host and guests
derivative-maker/derivative-maker --type vm --target raw --flavor whonix-gateway-cli --arch arm64 --repo true --tb open --vmsize 3G 2>&1 | tee build.gateway.log
derivative-maker/derivative-maker --type vm --target raw --flavor whonix-workstation-xfce --arch arm64 --repo true --tb open --vmsize 4G 2>&1 | tee build.workstation.log
[move Whonix-Gateway-CLI-17.3.6.2.arm64.raw and Whonix-Workstation-Xfce-17.3.6.2.arm64.raw from shared to artifacts]
FAILURES AND MITIGATIONS
permission denied when running VBoxLinuxAdditions-arm64.run: unmount the cdrom and re-mount with sudo (see the sketch after this list)
Debian packages don't download
fix hwclock, nuke all, start over
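[untested sketch of the remount mitigation above; the mount point and device node are assumptions:]
sudo umount /media/cdrom0
sudo mount /dev/sr0 /media/cdrom0
sudo sh /media/cdrom0/VBoxLinuxAdditions-arm64.run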
NOTABLE
Only 3GB is required for whonix-gateway-cli
Only 4GB is required for whonix-workstation-xfce
if Debian packages don't download and complain about time, you might have forgotten the hwclock line
unless killed at an inopportune time (e.g. when the target image is mounted), this command can completely reset the build: rm -fr build*.log derivative-binary
I've got the gateway running and mostly passing system checks. I'm hammering out the details of what the .vbox settings should look like and I'll post here later. In the meantime, two concerns:
The virtualizer detection failed. I don't see any details explaining why. How should I proceed?
It looks like the guest tools aren't installed by default. They definitely improve the user experience, even in CLI mode. Is it reasonable to make that part of the initial image creation?
That's the plan. Implementation might be difficult.
VirtualBox Guest Additions are at time of writing installed by default on Kicksecure and Whonix for AMD64.
Installation source at time of writing is:
Debian's fasttrack.debian.net packages virtualbox-guest-utils and virtualbox-guest-x11.
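For AMD64, enabling that source amounts to something like the following (sketch only; see fasttrack.debian.net for the authoritative instructions):

sudo apt install fasttrack-archive-keyring
echo 'deb https://fasttrack.debian.net/debian-fasttrack/ bookworm-fasttrack main contrib' | sudo tee /etc/apt/sources.list.d/fasttrack.list
echo 'deb https://fasttrack.debian.net/debian-fasttrack/ bookworm-backports-staging main contrib' | sudo tee -a /etc/apt/sources.list.d/fasttrack.list
sudo apt update
sudo apt install virtualbox-guest-utils virtualbox-guest-x11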
Debian doesn't provide VirtualBox Guest Additions for ARM64 in Debian bookworm at time of writing, as far as I know. But that does not matter; there are currently worse issues with Debian on that front. The installation source will probably change to virtualbox.org in the future, as per:
Can we download the macOS / Apple Silicon host version and verify it using Mac Gatekeeper? Can it also be verified on Linux? It might be possible to verify the macOSArm64.dmg and extract the VirtualBox Guest Additions ISO from there.
I am 100% aware of this. Your dedication to automated releases is essential to the success of a system like this. I was not suggesting that this be used for production or any kind of automation. My experience building Whonix, however, was that I could not have done it without a background in software engineering and some handholding from you and @Mycobee. I believe it would be helpful to have notes like that in the forum or on a wiki page. If you disagree I will just keep them for myself.
[gateway user ~]% sudo systemd-detect-virt
none
[gateway user ~]% sudo systemcheck --function check_virtualizer --verbose --debug
https://pastebin.com/VhWNQ3Ga
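(For comparison, on a working AMD64 VirtualBox guest, systemd-detect-virt is expected to report the hypervisor, which is presumably what systemcheck keys off:

[gateway user ~]% sudo systemd-detect-virt
oracle )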
Yeah, okay, for another day, then. Is there a tracker where I should file this as an FR or a bug?
That's how I got both VirtualBox and the ISO, so it's certainly possible. The DMG can be downloaded and there are signatures for it. The ISO is inside the DMG. I spent about an hour trying to get the ISO out using apt-available Debian tools and didn't crack it, though.
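One route I didn't finish exploring: 7z (package p7zip-full) understands many DMG compression formats, and the macOS build ships a .pkg installer inside the DMG, which 7z can also unpack (xar container plus a cpio payload). An untested sketch, with a placeholder file name:

7z x VirtualBox-7.1.0-macOSArm64.dmg
7z x VirtualBox.pkg
## the Payload is a cpio archive containing VirtualBox.app; the ISO should be
## under VirtualBox.app/Contents/MacOS/VBoxGuestAdditions.iso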
Now, even if we had a script to automate VM settings on Mac, I wonder how much good it would do. It would still be messy and require a two-step build process (build inside a Debian ARM64 VM + run an extra script on the Mac host). Unless everything can be orchestrated from the Mac host, but I guess that's unlikely to materialize.
The solution is cross building. Soon I will have Linux ARM64 hardware (non-Mac). Then I can create Linux ARM64 builds on Linux ARM64.
But the problem will still be the VirtualBox VM files and OVA creation, because while there is VirtualBox for Mac ARM64, there is no VirtualBox for Linux ARM64 at time of writing.
One solution might be to use the AMD64 Linux version on ARM64 and qemu-amd64-static vboxmanage ..., if that is even possible. (Might be possible, since vboxmanage is rather "simple"; it's not the virtualizer.)
A different solution might be to create the VirtualBox settings files and OVA using alternative tooling. But that could be error-prone.
Links to Debian-specific issue trackers and existing tickets can be found in the linked forum thread.
With the current maintenance situation of VirtualBox in Debian at time of writing (versions multiple months outdated, existing tickets being ignored), that seems futile. The only thing that would help is contributing to Debian directly by becoming a Debian Developer.
I created a working Whonix-Gateway-CLI that I built by hand from a combination of the default .vbox settings for a Debian bookworm VM and the commands in 4600_create-vbox-vm. The network is functional although there is a VM detection bug (mentioned above).
I am now doing an analysis of the effects of the VBoxManage commands in 4600_create-vbox-vm on an M1. I've found that a number of the commands have no effect (this is probably fine), one crashes, and the resulting VM doesn't run.
My question is this: what is a "correct" 4600_create-vbox-vm? Is it:
start from the .vbox created by 4600_create-vbox-vm and find the fewest possible changes to create a working Whonix VM
start from the default .vbox created by VirtualBox for Debian bookworm and find the fewest possible changes to create a working Whonix VM
analyze each XML key-value pair and choose the "best" value for an M1 (according to our judgement)
analyze each XML key-value pair and choose the "best" value for all platforms (according to our judgement) and only deviate if the value prevents the VM from running on an M1
something else
Any of those is fine with me. I just need an approach to choose values and commands.
Only make changes if there's a strong rationale. Document why changes were made (such as error messages, results, research, hyperlinks, comments), preferably inside the source code (or in the forums).
If changes are useful on all architectures, then this should be changed for all architectures. The amount of if/else should be kept to a reasonable minimum.
Once done, comparing the default .vbox file for AMD64 with the one for ARM64 also makes sense, to check that there are no unexpected changes.
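For example (hypothetical file names; xmllint comes from the libxml2-utils package):

diff <(xmllint --format Whonix-Gateway-CLI-amd64.vbox) <(xmllint --format Whonix-Gateway-CLI-arm64.vbox)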
I apologize if I'm being dense, but which script?
I'm struggling with this now:
Running even a part of the derivative_maker scripts on OSX is a non-starter.
Running VBoxManage on ARM64 Debian is impossible.
I can orchestrate from the OSX side but that doesn't do me much good, because there's no good way to use the knowledge in the Debian guest to execute VBoxManage on the host.
I'm really not thrilled with the idea of a second set of scripts to generate the .vbox on OSX. Those will get out of sync fast.
Fundamentally, VBoxManage needs to run on AMD64 Debian or macOS. The derivative-maker script needs to run on Debian.
Right now I see four options:
re-tool 4600 to produce a script that can be run on macOS or Debian. This would allow macOS-side orchestration to work but you would lose all of the safety work built into the derivative_maker scripts.
have macOS spin up an ARM64 VM to build the raw disk and an AMD64 VM to build the .vbox and create the OVA
have the ARM64 VM use Qemu to run VBoxManage
get VBoxManage to build on ARM64 and use that directly
The first three are architecturally very different from how the rest of the OVAs are built. The first two also require macOS to be part of the build chain, which seems non-ideal.
The last one (VBoxManage) has an elegance to it but I'm not sure it's possible. If that's attractive to you I can look into it.
Indeed, involving extensive OSX scripting does not seem like a good way to go.
Yeah. Seems wrong.
But can you run a virtualizer inside qemu-amd64-static? Maybe not, but VBoxManage is a regular binary (not requiring VirtualBox kernel modules), not a virtualizer. So running it using qemu-amd64-static might work.
Please try: create a Debian AMD64 chroot on ARM64 Debian. Try to run some simple binary (such as the hello package from Debian) using qemu-amd64-static. Then try running vboxmanage using qemu-amd64-static.
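Something along these lines might do it (untested sketch; note that on Debian the static binary is actually named qemu-x86_64-static, shipped by the qemu-user-static package, which also registers binfmt handlers so foreign binaries run transparently):

sudo apt install debootstrap qemu-user-static
sudo debootstrap --arch=amd64 bookworm /srv/amd64-chroot http://deb.debian.org/debian
sudo chroot /srv/amd64-chroot bash -c 'apt-get update && apt-get install -y hello && hello'
## if that works, install VirtualBox inside the chroot (from fasttrack or
## virtualbox.org) and try: vboxmanage --version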