Whonix on Mac M1 (ARM) - User Support (still unsupported at time of writing)

Will Whonix support Mac with ARM (M1)?


Initial reply:

Even Mac Intel support is currently more of a lucky coincidence.

Should VirtualBox introduce a feature to run “amd64” (which includes Intel and AMD) on Mac ARM, then as a side effect that would make Whonix VirtualBox work on Mac ARM too.

Should this ever change, it would be documented here:

Should Whonix for Mac M1 become available and officially supported in the future, it would be easy to find on the Whonix download page.

Edit / Update:

There was initial development towards Mac M1 support, as evidenced in this forum thread. See also:

At time of writing, the support status is still Unmaintained. Development has stalled.

Moderation changes:

Thank you for the answer. I would wait for VirtualBox running on ARM.

I’m trying to get Whonix Workstation and Gateway running using the QEMU patches with the new Mac Virtualization.framework. More details on this here: How to run Windows 10 on ARM or Ubuntu for ARM64 in QEMU on Apple Silicon Mac · GitHub, also already pulled into this app: https://github.com/utmapp/UTM

As a first step, I’m just getting a Debian ARM QEMU VM working so I can build Whonix for ARM. Based on these instructions: Build Documentation: Physical Isolation

Would anyone want to help on this? Would be good to have some people to throw ideas around.

P.S: Sorry for the malformed links, I am not allowed to post links.


As an update, I’ve built an ARM .qcow2 file for Whonix-Gateway, using this command:

sudo ./whonix_build --target qcow2 --flavor whonix-gateway-xfce --build --arch arm64 --kernel linux-image-arm64 --headers linux-headers-arm64

from inside a Debian VM.

Then, I’ve tried to run this with QEMU (at least trying to get it to boot, not worrying about network really right now):

qemu-system-aarch64 \
         -L /Applications/UTM.app/Contents/Resources/qemu \
         -S \
         -qmp tcp:,server,nowait \
         -vga none \
         -spice port=5930,addr=,disable-ticketing,image-compression=off,playback-compression=off,streaming-video=off \
         -device virtio-ramfb \
         -cpu cortex-a72 \
         -smp cpus=8,sockets=1,cores=8,threads=1 \
         -machine virt,highmem=off \
         -accel hvf \
         -accel tcg,tb-size=768 \
         -bios /Applications/UTM.app/Contents/Resources/qemu/edk2-aarch64-code.fd \
         -m 3072 \
         -name "Whonix Gateway" \
         -device qemu-xhci \
         -device usb-tablet \
         -device usb-mouse \
         -device usb-kbd \
         -device virtio-blk-pci,drive=drive0,bootindex=0 \
         -drive "if=none,media=disk,id=drive0,file=/Users/gavinpacini/Library/Containers/com.utmapp.UTM/Data/Documents/Whonix Gateway.utm/Images/Whonix-Gateway-XFCE-,cache=writethrough" \
         -device rtl8139,mac=XX:XX:XX:XX:XX:XX,netdev=net0 \
         -netdev user,id=net0 \
         -device virtio-serial \
         -device virtserialport,chardev=vdagent,name=com.redhat.spice.0 \
         -chardev spicevmc,id=vdagent,debug=0,name=vdagent \
         -uuid XXXXXXXX-2837-4F4E-9999-902A56B0C5D1 \
         -rtc base=localtime

But, I cannot get it to boot past the BIOS in QEMU at all. Any ideas? Note, I’m using UTM because it comes with the patched binaries for QEMU on Apple Silicon. It runs normal Debian fine (as I used it for building Whonix).


Try starting a Debian VM first using qemu-system-aarch64 before you try Whonix? Related to Free Support for Whonix ™

Try to build a Debian VM image first using grml-debootstrap, which the Whonix build script uses internally.

Perhaps easier if based on KVM?

There is a libvirt command that can translate (Whonix) KVM xml files into QEMU parameters:

virsh domxml-to-native qemu-argv /path/to/file.xml
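For example, something along these lines (the xml file name and output file are assumptions; Whonix KVM releases ship per-VM xml files, so substitute the one you have):

```shell
# Hypothetical invocation: translate a Whonix KVM domain definition
# into a raw qemu command line for inspection. The xml file name is
# an assumption; use the file from your Whonix KVM download.
virsh -c qemu:///system domxml-to-native qemu-argv \
    Whonix-Gateway-XFCE.xml > qemu-args.txt
```

The resulting argument list will target x86_64 QEMU and Linux-specific devices, so it is a starting point for comparison rather than something to run directly on the Mac.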

For motivation, proof of similar working concept:

  • Kicksecure works on ppc64el (I got a test machine using distro morphing).
  • Community is running Whonix on POWER9, Raptor Talos II.



  • Read about qemu-debootstrap inside that script.
  • Using qemu-debootstrap might be required.
  • The grub boot loader won’t work on arm64 as far as I know. arm64 probably requires a different boot loader. Previous work on Whonix for arm64 / Raspberry Pi (RPi) - duplicate forum topic only helps to a degree, because booting the RPi seems different from booting normal (non-RPi) arm64.

Therefore an important prerequisite exercise would be to make grml-debootstrap create a bootable Debian arm64 VM image.

Thanks for the replies Patrick!

Lots of good places to start, will do so. Note, I am running Debian 10.4 arm64 fine on QEMU on the M1 Mac for now, but it was a prebuilt .qcow2 file. I like the idea of getting my own Debian arm64 image built and running, I’ll start there.

All the links are great resources, thanks again. Will revert as I make progress.


I’m thinking of getting a Mac, but I noticed that support for Whonix (and I guess VMs in general) is limited or nonexistent. There was a thread [1] posted almost a month ago on Whonix support on M1 Macs. I’m not very savvy on virtual machines so I’m not too sure what possible progress has been made, but it looks like some people were working on getting Whonix to work on M1 Macs. Could anyone sum up where the progress is with that? And are there any alternatives I could use temporarily for using a secure and more anonymous VM on Mac?

(P.S. it looks like there are some concerns about VirtualBox regarding security and freedom [2], so I think I’d prefer to use an alternative such as QEMU if possible)

[1] https://forums.whonix.org/t/whonix-on-mac-with-arm-m1/11310
[2] https://www.whonix.org/wiki/KVM#Why_Use_KVM_Over_VirtualBox.3F

I am still working on this, just been very busy with work lately. I’m now using this brew tap as it comes with a patched QEMU for ARM Macs.

I’ve got my head around grml-debootstrap and am currently testing some different setups for a debian.img which I can load using the above patched QEMU. However, it is taking really, really long to build the Debian image (from inside an Ubuntu VM). As in, it has been running for about 5 hours so far… I think I need to use eatmydata to speed it up, although I’m not sure if the Whonix build scripts actually use that. I’ve read there’s a risk to using eatmydata. Anyway, the journey continues. Will report back once I have a working vanilla Debian OS booting.

sudo grml-debootstrap \
            --arch "arm64" \
            --filesystem "ext4" \
            --force \
            --hostname "host" \
            --mirror http://ftp.ch.debian.org/debian \
            --nopassword \
            --release "buster" \
            --keep_src_list \
            --verbose \
            --vmfile \
            --vmsize "25G" \
            --target "./debian.img"

Hi, I think arm64 needs OVMF for booting, AFAICT.

That seems much too long. It’s important to have easy-to-use, quick development tools.
Maybe the download part is slow? Because I think the building part usually doesn’t take that long. At least not with an SSD.

If downloading the packages over and over is slow (and perhaps also producing too much traffic) consider using apt-cacher-ng. An environment variable is required for apt-cacher-ng when used with grml-debootstrap. Do you know how to set environment variables when using sudo?
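For reference, a sketch of how that can look (assumptions: a local apt-cacher-ng on its default port 3142, and that grml-debootstrap honors the standard http_proxy variable via debootstrap/apt; the variable assignment must come after sudo, since sudo's env_reset otherwise strips the caller's environment):

```shell
# Sketch, assuming apt-cacher-ng is running locally on port 3142.
# Placing the assignment after `sudo` passes it into the build's
# environment; a plain `http_proxy=... sudo grml-debootstrap` would
# usually be stripped by sudo's env_reset.
sudo http_proxy="http://127.0.0.1:3142" grml-debootstrap \
    --arch arm64 \
    --release buster \
    --vmfile \
    --vmsize 25G \
    --target ./debian.img
```

With the cache in place, repeated builds download each package from the Debian mirror only once.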

The eatmydata speed-up is not that big, if I remember right. As far as I understand, eatmydata doesn’t lead to data loss unless there’s a power loss before the operating system had a chance to sync. It should be very much alright during experimentation inside a VM. The Whonix build script has an optional --unsafe-io option to enable eatmydata.

I’d suggest using a Debian buster (VM) since Whonix is supposed to be built on Debian buster too. (Soon Debian bullseye, when that is in freeze or released.)

Okay, latest update.

I’ve made some good progress. I’m now running a Debian buster VM on QEMU on the Mac M1.
I ran this to get to the installer:

qemu-system-aarch64 \
         -machine virt,accel=hvf,highmem=off \
         -cpu cortex-a72 -smp 8 -m 4G \
         -device intel-hda -device hda-output \
         -nographic \
         -device usb-ehci \
         -device usb-kbd \
         -device virtio-net-pci,netdev=net \
         -device virtio-mouse-pci \
         -netdev user,id=net,ipv6=off \
         -drive "if=pflash,format=raw,file=./edk2-aarch64-code.fd,readonly=on" \
         -drive "if=virtio,format=raw,file=./hdd.raw,discard=on" \
         -drive "file=../debian/debian-10.9.0-arm64-netinst.iso,id=cdrom,if=none,media=cdrom" \
         -device virtio-scsi-device -device scsi-cd,drive=cdrom \
         -boot order=d

I followed the usual Debian installation steps. Then I can just remove the SCSI cdrom device and use QEMU to boot from the hdd.raw file itself, which now contains a bootable Debian. I did have to manually get the QEMU UEFI to load grubaa64.efi, but that’s a bit beside the point… (I am keeping track of all the commands I am running and what each part does, so I can write a nice post for future M1 users).
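For anyone stuck at the same point, a hedged sketch of what launching GRUB by hand from the edk2 EFI shell can look like (the filesystem label FS0 and the EFI\debian path are assumptions; check what the shell's map output actually shows):

```
Shell> FS0:
FS0:\> cd EFI\debian
FS0:\EFI\debian\> grubaa64.efi
```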

This works much better with the grml-debootstrap package, it now finishes building in a few minutes. Thanks @Patrick! In order to get a successful run, I’ve used the below command:

sudo KERNEL='none' NOKERNEL='true' UPGRADE_SYSTEM='no' GRUB_INSTALL='no' grml-debootstrap \
        --debopt "--verbose --include=initramfs-tools,eatmydata,apt-transport-tor,python3.7,gpg,gpg-agent" \
        --arch arm64 \
        --filesystem ext4 \
        --force \
        --hostname host \
        --nopassword \
        --release buster \
        --keep_src_list \
        --verbose \
        --vmfile \
        --vmsize 25G \
        --packages ./grml_packages \
        --target ./debian.raw

I’ve made this based on these whonix build steps.

This provides me with a debian.raw file. I understand that this has neither grub (if I don’t pass GRUB_INSTALL='no', the build fails) nor a kernel (same here: if I don’t pass KERNEL='none' NOKERNEL='true', the build fails).

The reason for these failures (as far as I can tell) is that grml-debootstrap does not natively support arm64.

I know that I now need to get arm64 grub and an arm64 kernel onto this VM file. So far I’ve tried two things (neither of which worked).

  1. Mount the debian.raw file and try to run grub-install.
sudo mkdir -p "/mnt/debootstrap.grub"
sudo kpartx -asv ./debian.raw
sudo mount -o rw,suid,dev "/dev/mapper/loop0p1" "/mnt/debootstrap.grub"
sudo grub-install /mnt/debootstrap.grub --target arm64-efi
sudo sync
sudo umount /mnt/debootstrap.grub
sudo kpartx -d ./debian.raw >/dev/null
sudo rmdir /mnt/debootstrap.grub
sudo chown user:user ./debian.raw

Unfortunately I didn’t see any relevant grub files in the /mnt/debootstrap.grub/boot folder.

So, then I tried…

  2. Copy the files from my working Debian VM’s /boot folder into the mounted Debian filesystem’s boot folder (i.e. /mnt/debootstrap.grub/boot). This meant my debian.raw file now had (what looked like) the files necessary for booting. However, running this with a similar qemu command as above yields a UEFI error.

I know UEFI generally looks for an EFI partition (I have some experience in hackintoshing). However, I’m not really sure how to go about making this partition in a file. Maybe that’s the reason the QEMU UEFI BIOS cannot find the required grubaa64.efi file?

I’m still continuing my trials. Just wanted to leave another update as I progress.

Note: @HulaHoop, seems I can use edk2-aarch64-code.fd to boot arm64 debian. Working well for me so far with the Debian VM I created using the net installer. I suppose this then should work when I can get a correct build from grml-debootstrap. Thanks though!

EDIT: quick update. I figured out I need to get that new partition set up and also probably make changes to /etc/fstab.
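For reference, a hedged sketch (untested as-is; partition layout, sizes, and paths are all assumptions) of carving an EFI System Partition into a raw image and placing GRUB at the fallback path the firmware probes:

```shell
# Assumption-heavy sketch: add a 512 MB ESP to a raw image (assumes
# unallocated space and no conflicting partition number) and copy the
# arm64 GRUB binary to the removable-media fallback path
# EFI/BOOT/BOOTAA64.EFI, which edk2 firmware searches even without
# NVRAM boot entries.
sgdisk --new=1:0:+512M --typecode=1:ef00 ./debian.raw
loopdev=$(sudo losetup --find --show --partscan ./debian.raw)
sudo mkfs.vfat -F 32 "${loopdev}p1"
sudo mkdir -p /mnt/esp
sudo mount "${loopdev}p1" /mnt/esp
sudo mkdir -p /mnt/esp/EFI/BOOT
sudo cp grubaa64.efi /mnt/esp/EFI/BOOT/BOOTAA64.EFI
sudo umount /mnt/esp
sudo losetup -d "$loopdev"
```

The fallback path sidesteps the need for firmware boot entries, which is convenient when the QEMU UEFI variable store starts out empty.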

Some helpful resources I am going to use next to try get my debian.raw booting are:

Also seems it’s not too dissimilar to what this script does: https://gitlab.com/whonix/Whonix/-/blob/master/build-steps.d/2375_build_rpi_fs#L22

So, will report back once I’ve exhausted my efforts there.

And, finally, I suppose once I do have a way of getting this working, we can feed that back into a new script in the build-steps.d folder.


Quick follow-up, some success!

My assumptions were correct. Once I manually added grub and the kernel, I can boot from my newly created Debian arm64 image file, using QEMU on the M1.

If anyone is curious how I did that, you can follow the gist below.

Note: this is really rough around the edges. I’ll clean it up and add comments, but first I want to get Whonix booting.

So, I guess now I need to go through the rest of the Whonix build steps and adjust as needed in order to get Whonix booting similarly on my Mac host. Thanks for all the help so far!


Or much better: add ARM support to grml-debootstrap.


Ladies and gentlemen, we got somewhere.

I added a new step into the build-steps.d folder, which is basically a cleaned-up / parameterised version of my gist above. I also needed to make a few small changes to other scripts.

Still to do:

  1. Ensure I haven’t broken builds of other archs / flavours. Do you guys have some CI actually?
  2. Get networking working between the Gateway and the Workstation using the vmnet-mac QEMU networking mode.
  3. Clean it all up, contribute back in terms of a GitHub PR and docs.

@Patrick Regarding grml-debootstrap, I will revert on that issue you created and contribute back there also. I think that might take a bit longer though, so I want to first get Whonix working fully on the M1.


Alright, more progress. We have networking from the Workstation! And, also the workstation built perfectly with the changes I made to the build-steps.d scripts.

I have noticed the systemcheck is failing due to the onion-grater service. But, I cannot for the life of me figure out why. When running /usr/lib/onion-grater by itself, it works fine… Is this a known issue by any chance in the latest build of Whonix? I am basing my work off

My QEMU commands look like this now by the way, just in case anyone else is following along. I’ll publish my build script changes soon also.

qemu-system-aarch64 \
         -machine virt,accel=hvf,highmem=off \
         -cpu cortex-a72 -smp 4 -m 2G \
         -device intel-hda -device hda-output \
         -device virtio-gpu-pci \
         -device usb-ehci \
         -device usb-kbd \
         -device usb-tablet \
         -device virtio-net-pci,netdev=external \
         -device virtio-net-pci,netdev=internal \
         -netdev user,id=external,ipv6=off,net= \
         -netdev socket,id=internal,listen=:8010 \
         -display cocoa \
         -drive "if=pflash,format=raw,file=./edk2-aarch64-code.fd,readonly=on" \
         -drive "if=pflash,format=raw,file=./edk2-vars-whonix.fd,discard=on" \
         -drive "if=virtio,format=raw,file=./Whonix-Gateway-XFCE.raw,discard=on"

qemu-system-aarch64 \
         -machine virt,accel=hvf,highmem=off \
         -cpu cortex-a72 -smp 8 -m 4G \
         -device intel-hda -device hda-output \
         -device virtio-gpu-pci \
         -device usb-ehci \
         -device usb-kbd \
         -device usb-tablet \
         -device virtio-net-pci,netdev=internal \
         -netdev socket,id=internal,connect= \
         -display cocoa \
         -drive "if=pflash,format=raw,file=./edk2-aarch64-code.fd,readonly=on" \
         -drive "if=pflash,format=raw,file=./edk2-vars-work.fd,discard=on" \
         -drive "if=virtio,format=raw,file=./Whonix-Workstation-XFCE.raw,discard=on"

Still lots to do overall, but getting there.


Great progress!

Likely caused by sandboxing parameters in onion-grater’s systemd unit file. It is similar to this platform-specific issue:

Try commenting out this line:


After changing that file:

sudo systemctl daemon-reload && sudo systemctl restart onion-grater && sudo systemctl --no-pager status onion-grater

If that doesn’t help, try commenting out all onion-grater systemd sandboxing.
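If editing the packaged unit file proves awkward, a systemd drop-in override is an alternative that survives package upgrades. A sketch (which sandboxing directive needs clearing on arm64 is an assumption here; substitute whichever one turns out to be the culprit):

```shell
# Sketch: clear one sandboxing directive via a drop-in instead of
# editing the shipped unit file. SystemCallArchitectures= is only an
# example directive; an empty assignment resets it.
sudo mkdir -p /etc/systemd/system/onion-grater.service.d
printf '[Service]\nSystemCallArchitectures=\n' | \
    sudo tee /etc/systemd/system/onion-grater.service.d/override.conf
sudo systemctl daemon-reload
sudo systemctl restart onion-grater
```

Directives can be disabled one at a time this way to bisect which sandboxing option breaks on arm64.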

We used to, but Travis CI is shutting down.

But there was never any free (or affordable) CI which supports sudo/root, is Debian based, and supports device-mapper. All of that would be required to even build a VM image, let alone boot a VM and test if it’s functional. That would be difficult anyway, since CIs are themselves based on virtualization, so nested virtualization would be required. Any contribution to improve CI support would be most welcome!

That indeed worked, the onion-grater service now runs fine, thanks! I had the same issue with sdwdate and resolved it the same way.

I see you mentioned this is only happening on certain architectures. Would you recommend I comment out those lines then for the arm64 builds of Whonix? At least this way other users wouldn’t have the issue. Obviously, we should re-enable them once we can.

I see you’ve opened a superuser issue, hopefully someone else can help us debug it, I don’t have any experience there myself.

Regarding CI, I understand the difficulty finding such a provider. We would basically need to run our own CI infrastructure to get all of those features.
