Whonix on Mac M1 (ARM) - User Support (still unsupported at time of writing)

Alright, more progress. We have networking from the Workstation! Also, the Workstation built perfectly with the changes I made to the build-steps.d scripts.

I have noticed that systemcheck is failing due to the onion-grater service, but I cannot for the life of me figure out why. When running /usr/lib/onion-grater by itself it works fine… Is this by any chance a known issue in the latest build of Whonix? I am basing my work off 15.0.1.7.2.

My QEMU commands look like this now by the way, just in case anyone else is following along. I’ll publish my build script changes soon also.

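# Gateway: external NIC via SLIRP user networking, internal NIC listening on a local socket (port 8010) for the Workstation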
qemu-system-aarch64 \
         -machine virt,accel=hvf,highmem=off \
         -cpu cortex-a72 -smp 4 -m 2G \
         -device intel-hda -device hda-output \
         -device virtio-gpu-pci \
         -device usb-ehci \
         -device usb-kbd \
         -device usb-tablet \
         -device virtio-net-pci,netdev=external \
         -device virtio-net-pci,netdev=internal \
         -netdev user,id=external,ipv6=off,net=10.0.2.0/24 \
         -netdev socket,id=internal,listen=:8010 \
         -display cocoa \
         -drive "if=pflash,format=raw,file=./edk2-aarch64-code.fd,readonly=on" \
         -drive "if=pflash,format=raw,file=./edk2-vars-whonix.fd,discard=on" \
         -drive "if=virtio,format=raw,file=./Whonix-Gateway-XFCE.raw,discard=on"

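# Workstation: single internal NIC only, connecting to the Gateway's socket on 127.0.0.1:8010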
qemu-system-aarch64 \
         -machine virt,accel=hvf,highmem=off \
         -cpu cortex-a72 -smp 8 -m 4G \
         -device intel-hda -device hda-output \
         -device virtio-gpu-pci \
         -device usb-ehci \
         -device usb-kbd \
         -device usb-tablet \
         -device virtio-net-pci,netdev=internal \
         -netdev socket,id=internal,connect=127.0.0.1:8010 \
         -display cocoa \
         -drive "if=pflash,format=raw,file=./edk2-aarch64-code.fd,readonly=on" \
         -drive "if=pflash,format=raw,file=./edk2-vars-work.fd,discard=on" \
         -drive "if=virtio,format=raw,file=./Whonix-Workstation-XFCE.raw,discard=on"

Still lots to do overall, but getting there.

1 Like

Great progress!

This is likely caused by sandboxing parameters in onion-grater’s systemd unit file. Similar to this platform-specific issue:

Try commenting out this line:

https://github.com/Whonix/onion-grater/blob/master/lib/systemd/system/onion-grater.service#L54

After changing that file:

sudo systemctl daemon-reload && sudo systemctl restart onion-grater && sudo systemctl --no-pager status onion-grater

If that doesn’t help, try commenting out all of the onion-grater systemd sandboxing.
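For testing, the same can also be tried without editing the packaged file, via a drop-in override. A rough sketch only; ProtectKernelModules= below is just a placeholder for whichever sandboxing directive is being tested:

# sketch: override a single sandboxing directive via a drop-in instead of editing the shipped unit
sudo systemctl edit onion-grater
# in the editor that opens, add for example (adjust the directive to the one being tested):
#   [Service]
#   ProtectKernelModules=no
sudo systemctl restart onion-grater && sudo systemctl --no-pager status onion-grater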

We used to, but Travis CI is shutting down.

But there was never any free (or affordable) CI that supports sudo/root, is Debian based, and supports device-mapper. All of that would be required just to build a VM image, let alone boot a VM and test whether it’s functional. That would be difficult because CIs themselves are based on virtualization, so nested virtualization would be required. Any contribution to improve CI support would be most welcome!

That indeed worked; the onion-grater service now runs fine, thanks! I had the same issue with sdwdate and resolved it the same way.

I see you mentioned this is only happening on certain architectures. Would you recommend I comment out those lines for the arm64 builds of Whonix, then? At least this way other users wouldn’t have the issue. Obviously, we should re-enable them once we can.
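Something like the following is roughly what I have in mind, purely as a sketch (the directive is again just a placeholder, and this isn’t necessarily how the Whonix packaging would do it):

# sketch: relax the sandboxing only on arm64, leaving other architectures untouched
if [ "$(dpkg --print-architecture)" = "arm64" ]; then
    sudo mkdir --parents /etc/systemd/system/onion-grater.service.d
    printf '[Service]\nProtectKernelModules=no\n' | \
        sudo tee /etc/systemd/system/onion-grater.service.d/arm64-workaround.conf
    sudo systemctl daemon-reload
    sudo systemctl restart onion-grater
fi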

I see you’ve opened a superuser issue; hopefully someone else can help us debug it, as I don’t have any experience there myself.

Regarding CI, I understand the difficulty finding such a provider. We would basically need to run our own CI infrastructure to get all of those features.

1 Like

Opened an initial PR: https://github.com/Whonix/Whonix/pull/439 - would appreciate some early feedback, or even if someone wants to trial it already.

I still want to make some more changes:

  1. Ensure the above-mentioned services run out of the box.
  2. Update docs for building and running these images on an M1 Mac.
1 Like

Looks good overall!

It’s OK because it’s highly unlikely to break anything existing.

Seems pretty perfect.


Some nitpicks:

cp $binary_image_raw $orig_img
mkdir --parents $mpoint_efi
rm $orig_img

There are a few more like this.

Not a big deal. I could apply the nitpick fixes on top myself (and then hope the quoting doesn’t break anything, although that’s very unlikely).
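I.e., quoting the variables, roughly along these lines:

cp "$binary_image_raw" "$orig_img"
mkdir --parents "$mpoint_efi"
rm "$orig_img"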


Optional, non-blocker:

That would be Whonix for macOS: Download and Installation?

That would be


(Please disagree if I suggest anything wrong. I am certainly not married to any of these opinions. :))

1 Like

Thanks for the reply and comments! I’ve made another commit which should address those.

Regarding the RPI file, I actually implemented my checks in such a way that that build should not be affected. I wasn’t aware that it wasn’t working anymore. I can remove it in my PR if you want?

Once you’re happy with the PR, feel free to merge it. I may do some follow-up PRs to get the arm64 build into better shape (currently the .qcow2 files it produces don’t work, but at least the .raw ones do).
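In case anyone wants to poke at the broken output in the meantime, the images can at least be inspected with qemu-img (the .qcow2 file name here is assumed):

# see what QEMU makes of the produced image; "check" only applies to formats with metadata, such as qcow2
qemu-img info Whonix-Gateway-XFCE.qcow2
qemu-img check Whonix-Gateway-XFCE.qcow2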

Thanks for the doc links; I’ll get to editing those soon as well.
I really appreciate all your feedback; no problem with disagreeing! :slight_smile:

1 Like

Merged! :slight_smile:

Thank you, this might be great for arm64 support generally!
(I.e. unrelated to Mac hardware)

Btw, now that you’ve found your way around Whonix’s build script to the point of porting it to a new platform, perhaps you’ll have suggestions on how it could be better structured / documented to simplify contributions and make it easier to understand. (Ideally in a separate forum thread.)


https://github.com/Whonix/Whonix/commit/ebcd1dda74ad06c28a094bd28919d40bc8286fed

1 Like

Great, thanks Patrick!

Yes, I think in its current incarnation it should work for any future arm64 machines (provided QEMU arm64 works). I do expect we’ll see arm64 chips in Linux workstations soon seeing as Linus (and others) are pushing for this.

Regarding feedback on the build scripts, I actually found them quite approachable. They are well designed, in that I was able to integrate without needing to understand every one of them; rather, I just needed to know where to slot in. The RPI one definitely helped with that.

I’d be happy to actively contribute and improve M1 support, so as I go about doing that (via the repo itself and the docs), I’ll be sure to provide any further feedback I have. Thanks for your help, and great project! :slight_smile:

1 Like

Alright, I’ve made some changes to the wiki pages:

I’m not too used to this wiki markup language, so please do feel free to clean up any of the formatting. I’m sure I’ll get better at it. While I’m not super happy with the state of the docs, I think it’s better to have something in there for now at least, especially while it’s fresh in my mind and I have some time. I will update them as I go along.

For some reason I don’t see an “Edit” option on this page: Build Configuration - Whonix - I’d especially like to add arm64 to it under “Platforms Choice”. Is this restricted?

Thanks again!

1 Like

Great!

Removed protection from Build Configuration - Whonix. Can now be edited.

Whonix ™ for macOS: Download and Installation could also be split into two different pages.

  • Intel based Mac
  • M1 based Mac

Not sure that would make sense? Might depend on:

Will it be possible / is it planned to make this work with KVM / virt-manager and/or VirtualBox?

1 Like

Oracle hasn’t announced any such plans, and it doesn’t seem at all likely at this time. It could be years away, if ever.

https://forums.virtualbox.org/viewtopic.php?t=98742

https://www.virtualbox.org/ticket/20192


A potential build speed-up that comes to mind for developer builds:

…but that would probably miss out on platform-specific packages:
Existing Ports of and Porting Whonix to other Architectures

1 Like

Thanks Patrick, I’ve now updated the Build Configuration page with some small changes.

Regarding the macOS page, yes, I think it could do with a larger restructure. I’ll probably tackle that next weekend; I didn’t have much time this weekend, so I just got the Apple Silicon steps in quickly so that they aren’t only on my machine / in my head. :wink:

If VirtualBox releases ARM compatibility at some point, I would definitely port it there; it would be much more user friendly than QEMU. I’m also considering https://getutm.app/ - it uses QEMU under the hood, so it should be easier to get Whonix working there, and it’s more user friendly.

KVM is a Linux hypervisor, so I don’t see it ever working on macOS. HVF (Hypervisor.framework) is the macOS equivalent, and the QEMU commands I added to the Whonix wiki already use HVF.
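A quick way to confirm which accelerators a given QEMU binary supports, if anyone wants to double-check on their machine:

# lists the accelerators compiled into this QEMU binary; on macOS this should include hvf
qemu-system-aarch64 -accel help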

1 Like

That’s a handy flag for the build script, will keep it in mind, thanks!

@Patrick would it be possible to add those arm64 images to https://download.whonix.org/ at some point? I think we could then already make an “easier” version of the Apple Silicon setup.

2 Likes

Is the git commit hash part in -XX-<git commit hash>.raw inconvenient? (For intermediate documentation writing?) The only reason I originally implemented that is to prevent users from building from git master (or other arbitrary git commits) and then wondering why their build differs from git tag releases.

File names are configurable, but I guess setting an environment variable for that is also inconvenient.

Instead, to signal to users “caution, not building from a tag”, the git commit hash could be replaced with “untagged”. That would also sufficiently indicate “caution, custom build”.

(There is nothing fundamentally wrong with builds from non-tags, as long as one knows that.)

(Going to answer other parts later.)

That’s an incredible development. Thanks for creating this. It would be a pleasure for me to handle the aarch64 KVM builds. Will you be reachable for bug troubleshooting in case something breaks down the line?

2 Likes

Thanks for the kind words @HulaHoop, it’s been a lot of fun working on it. Yes, of course please feel free to reach out.

Do you guys have Gitter or some other direct messaging platform for quick chats?

2 Likes

It is crucial that the qemu-system-aarch64 command lines for the gateway and workstation are correct. In theory, if they are wrong, they could even produce a leak. How have these been generated / figured out?

Were these created / based on using virsh domxml-to-native qemu-argv? That would be great, because then they would be similar to the Whonix KVM XML files:

https://github.com/Whonix/whonix-libvirt/tree/master/usr/share/whonix-libvirt/xml

A lot of thought on ideal configuration was put into these over the years by @HulaHoop.

No. Development is all in forums.


They have been modelled on the XML files Whonix currently uses; however, I could not map them 1-to-1, as there are some differences with QEMU on macOS.

For example, neither the bridge nor the tap network backends work (at least not easily; apparently there are some hacks to make them work), so I had to use user-space socket connections based on QEMU’s SLIRP.

I’m not too aware of what leaks this could create; maybe @HulaHoop knows more?

Okay, thanks!

2 Likes

Alright, some good news: I was able to generate some libvirt configs using the pre-built Debian OpenStack images. The results should resemble the x86 level of isolation that way. Since only SLIRP is available on the Mac, some leak testing is recommended just in case: Leak Tests


Are we currently getting raw files from the build script? I’m sure KVM can use them too and can even generate snapshots on top of them. However, qcow2 would be ideal for compactness and functionality reasons, if possible. Take your time. The plan is for one image to be able to support different OSes of the same arch.
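For reference, a raw image can be converted to qcow2 with qemu-img; the file names here are simply taken from the commands earlier in the thread:

# convert the raw build output to a (compressed) qcow2 image and inspect the result
qemu-img convert -f raw -O qcow2 -c Whonix-Gateway-XFCE.raw Whonix-Gateway-XFCE.qcow2
qemu-img info Whonix-Gateway-XFCE.qcow2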

2 Likes