Fixing the Desktop Linux Security Model

Whonix is a security, privacy and anonymity focused Linux distribution. Recently, we’ve been focusing a lot on important security hardening measures and fixing architectural security issues within the desktop Linux security model. Any Linux distribution can be affected by these issues.

The Issues

There is a common assumption that Linux is a very secure operating system. This is very far from the truth, for various reasons. Security guides aren't a solution either.

There is no strong sandboxing in the standard desktop. This means all applications have access to each other's data and can snoop on your personal information. Most programs are written in memory-unsafe languages such as C or C++, which have been the cause of the majority of discovered security vulnerabilities, and modern exploit mitigations such as Control-Flow Integrity are not widely used.

The kernel is also very lacking in security. It is a monolithic kernel written entirely in a memory-unsafe language, and hundreds of bugs, many of them security vulnerabilities, are found each month. In fact, so many bugs are being found in the kernel that developers can't keep up, which results in many bugs staying unfixed for a long time. The kernel is also decades behind in exploit mitigations, and many kernel developers simply do not care enough.

On ordinary desktops, a compromised non-root user account that is a member of the sudo group is almost equal to a full root compromise, as there are too many ways for an attacker to retrieve the sudo password. Since the standard user is usually part of the sudo group, this is a massive issue that makes a sudo password almost security theater. For example, the attacker can exploit the plethora of keylogging opportunities such as X's lack of GUI isolation or the many infoleaks in /proc, use LD_PRELOAD to hook into every process, and so much more. Even if we mitigate every single way to log keystrokes, the attacker can just set up their own fake sudo program to grab the user password.
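
As a crude illustration of that last attack (everything here is hypothetical, including the exfiltration path), a compromised account needs only a few lines appended to ~/.bashrc to shadow the real sudo with a shell function:

  # Hypothetical attacker payload appended to ~/.bashrc:
  # capture the password, then forward it to the real sudo.
  sudo() {
      read -r -s -p "[sudo] password for $USER: " pw; echo
      printf '%s\n' "$pw" >> /tmp/.stolen      # illustrative drop point
      printf '%s\n' "$pw" | command sudo -S "$@"
  }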

Due to this, the Whonix project has been investing a lot of time into developing proper security measures to help fix these issues.

The Kernel

The kernel is the core of the operating system and has many security issues as discussed above. The following are details about our efforts to improve kernel security.

hardened-kernel

hardened-kernel consists of hardened configurations and hardening patches for the Linux kernel. There are two kernel configs, hardened-vm-kernel and hardened-host-kernel. hardened-vm-kernel is designed specifically for virtual machines (VMs) and hardened-host-kernel is designed for hosts.

Both configs try to have as many hardening options enabled as possible and have little attack surface. hardened-vm-kernel only has support for VMs and all other hardware options are disabled to reduce attack surface and compile time.

During installation of hardened-vm-kernel, the kernel is compiled on your own machine; a pre-compiled kernel is not used. This ensures the kernel symbols in the compiled image are completely unique, which makes kernel exploits far harder. This is possible because hardened-vm-kernel has only VM config options enabled, which drastically reduces compile time.
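
For illustration, such an on-machine build boils down to roughly the following; the package list and config path are assumptions, and hardened-vm-kernel automates all of this:

  # Install build dependencies, apply the hardened config, build packages.
  apt-get install build-essential bc bison flex libssl-dev libelf-dev
  cp hardened-vm-kernel.config linux-*/.config   # illustrative path
  cd linux-*
  make olddefconfig              # resolve any new config options
  make -j"$(nproc)" bindeb-pkg   # produce installable Debian kernel packages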

A development goal is that during installation of hardened-host-kernel, a pre-compiled kernel is used instead of compiling on your machine. This is because the host kernel needs most hardware options enabled to support most devices, which makes compilation take a very long time.

The VM kernel is more secure than the host kernel due to its smaller attack surface and not being pre-compiled. If you want more security for the host, it is recommended to edit the hardened host config, enable only the hardware options you need and compile the kernel yourself. This makes the security of the host and VM kernels comparable.

These kernels use the linux-hardened kernel patch for further hardening. The advantages of this patch include many ASLR improvements, more read-only kernel structures, writable function pointer detection, stricter sysctl configurations, more sanity checks, slab canaries and a lot more.

We are also contributing to linux-hardened and adding more hardening features. Our contributions include disabling TCP simultaneous connect, restricting module auto-loading to CAP_SYS_MODULE, Trusted Path Execution (TPE), restricting sysfs access to root, restricting perf_event_open() further to deny even root from using it and many more in the future.
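
To give a flavor of the sysctl-level restrictions mentioned above, here are a few real kernel settings in that spirit (an illustrative subset, not the exact configuration these projects ship):

  # /etc/sysctl.d/30-hardening.conf (illustrative subset)
  kernel.kptr_restrict=2             # hide kernel pointers even from root
  kernel.dmesg_restrict=1            # reading dmesg requires CAP_SYSLOG
  kernel.unprivileged_bpf_disabled=1 # remove unprivileged BPF attack surface
  kernel.yama.ptrace_scope=2         # ptrace requires CAP_SYS_PTRACE
  net.ipv4.tcp_syncookies=1          # resist SYN flood attacks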

security-misc

security-misc (wiki) enables miscellaneous security features for better kernel self-protection, attack surface reduction, entropy collection improvements and more. It doesn’t just harden the kernel but also various other parts of the operating system. For example, it disables SUID binaries (experimental, soonish default) and locks down root user access to make root compromises far harder. It also uses stricter mount options for various filesystems and stricter file permissions.
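
For instance, the stricter mount options are of this kind (an illustrative /etc/fstab sketch, not the shipped configuration):

  # nodev: no device files; nosuid: SUID bits ignored; noexec: no execution
  /dev/sda2  /home  ext4   defaults,nodev,nosuid,noexec  0 2
  tmpfs      /tmp   tmpfs  defaults,nodev,nosuid,noexec  0 0
  proc       /proc  proc   defaults,hidepid=2            0 0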

Linux Kernel Runtime Guard (LKRG)

Linux Kernel Runtime Guard (LKRG) is a kernel module which performs runtime integrity checking of the kernel and detection of kernel exploits. It can kill entire classes of kernel exploits and while LKRG is bypassable by design, such bypasses tend to require more complicated and/or less reliable exploits.

tirdad

tirdad is a kernel module that aims to prevent TCP Initial Sequence Number (ISN) based information leaks by randomizing the TCP ISNs. These issues can be potentially catastrophic for anonymity and long-running cryptographic operations.

User space

User space is the code that runs outside of the kernel such as your usual applications.

apparmor-profile-everything

apparmor-profile-everything is an AppArmor policy to confine all user space processes, including even the init. This allows us to implement strict mandatory access control restrictions on all processes and have fine-grained controls over what they can access.

This is implemented by an initramfs hook which loads an AppArmor profile for systemd, the init.

apparmor-profile-everything gives us many advantages by limiting what an attacker can do if they compromise parts of the system. The benefits are not just for user space though. We can also protect the kernel to a great degree with this by blocking access to dangerous capabilities that allow kernel modification such as CAP_SYS_RAWIO, having fine-grained restrictions on kernel interfaces known for information leaks such as /proc or /sys and so much more. apparmor-profile-everything even allows us to deny access to the CAP_NET_ADMIN capability which prevents even the root user from leaking the IP address on the Whonix Gateway (it would now require a kernel compromise).

With apparmor-profile-everything, the only reasonable way to break out of the restrictions is by attacking the kernel which we make much harder as documented above. The root user cannot disable the protections at runtime as we deny access to the required capabilities and files.
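
A heavily simplified sketch of what such rules look like in AppArmor policy syntax (illustrative, not the actual apparmor-profile-everything policy):

  deny capability sys_rawio,       # no raw access to memory and devices
  deny capability sys_module,      # no loading of kernel modules
  deny capability net_admin,       # no network reconfiguration (IP leaks)
  deny @{PROC}/kallsyms r,         # hide kernel symbol addresses
  deny /sys/kernel/debug/** rwklx, # no debugfs access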

sandbox-app-launcher

sandbox-app-launcher is an app launcher that starts all user applications in a restrictive sandbox. It creates a separate user for each application ensuring they cannot access each other’s data, runs the app inside a bubblewrap sandbox and confines the app with a strict AppArmor profile.

Bubblewrap allows us to make use of the kernel sandboxing technologies namespaces and seccomp. Namespaces allow us to isolate certain system resources. All apps are run in mount, PID, cgroup and UTS namespaces. Fine-grained filesystem restrictions are implemented via mount namespaces and AppArmor.

Seccomp blocks certain syscalls, which can greatly reduce kernel attack surface, among other things. By default, all apps use a seccomp blacklist to block dangerous and unused syscalls. Seccomp isn't just used for bluntly blocking syscalls either. It's also used to block unused socket families by inspection of the socket() syscall; to block dangerous ioctls such as TIOCSTI (which can be used in sandbox escapes), TIOCSETD (which can increase kernel attack surface by loading vulnerable line disciplines) and SIOCGIFHWADDR (which can retrieve the user's MAC address, a privacy risk) by inspection of the ioctl() syscall; and even to enforce strict W^X protections by inspection of the mmap(), mprotect() and shmat() syscalls.

AppArmor is used to apply W^X to the filesystem and prevent an attacker from executing arbitrary code. AppArmor also gives fine-grained control over IPC signals, D-Bus, UNIX sockets, ptrace and more.
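
A rough sketch of the kind of bubblewrap invocation this implies; the flags are real bubblewrap options, while the per-app user's home directory and the app itself are illustrative (bwrap always creates a mount namespace, and a real profile would bind more paths):

  bwrap --unshare-pid --unshare-uts --unshare-cgroup \
        --ro-bind /usr /usr --ro-bind /etc /etc \
        --proc /proc --dev /dev --tmpfs /tmp \
        --bind /home/app-torbrowser /home/app-torbrowser \
        --cap-drop ALL --new-session \
        /usr/bin/some-app
  # The seccomp blacklist is passed separately as compiled BPF via --seccomp <fd>.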

It doesn’t just stop there. sandbox-app-launcher implements an Android-like permissions system which allows you to revoke certain permissions such as network access for any application. During installation of new programs, you are asked which permissions you wish to grant the application.

hardened_malloc

hardened_malloc is a hardened memory allocator created by security researcher Daniel Micay, the creator of GrapheneOS (formerly CopperheadOS), linux-hardened and more. It gives substantial protection from memory corruption vulnerabilities. It is heavily based on the OpenBSD malloc design, but with numerous improvements.

Whonix installs hardened_malloc by default but it is not used much yet. In the future, we may preload it globally and use it for every application.
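
Preloading it per application, or globally, looks like this; the library path is an assumption and may differ per distribution:

  # Per-application test run:
  LD_PRELOAD=/usr/lib/libhardened_malloc.so firefox

  # Global preload for every dynamically linked program:
  echo /usr/lib/libhardened_malloc.so >> /etc/ld.so.preload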

VirusForget

VirusForget deactivates malware after a reboot from a non-root compromise by restoring sensitive files. Without this, it's possible for malware to easily create a persistent, system-wide rootkit by, for example, setting LD_PRELOAD in ~/.bashrc to hook into all user applications.
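
A minimal sketch of the underlying idea (the file list and baseline location are illustrative, not VirusForget's actual implementation):

  # Restore sensitive dotfiles from a baseline kept outside the
  # compromised user's write access.
  baseline=/var/lib/virusforget/baseline
  for f in .bashrc .profile .xsessionrc; do
      [ -f "$baseline/$f" ] || continue
      cmp -s "$HOME/$f" "$baseline/$f" || cp "$baseline/$f" "$HOME/$f"
  done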

Verified Boot

Verified boot ensures the integrity of the boot chain by verifying the bootloader, kernel and initrd with a cryptographic signature. It can be extended further and verify the entire base OS, ensuring all executed code has not been tampered with but this extension is unlikely to be implemented due to the layout of traditional Linux distributions.

User Freedom

All of our security features can be reverted by the user if they prefer freedom over security by choosing the necessary boot modes. This is not a security risk and attackers cannot abuse this as it can only be done with local access.

Contributing

There is still a lot more work to be done and we need your help. Contributions would be greatly appreciated. The implementation of sandbox-app-launcher, packaging of hardened-kernel and verified boot are some of our main issues. Qubes support for hardened-kernel, apparmor-profile-everything, LKRG, tirdad and security-misc’s boot parameters are missing. Also see the list of open tasks.



The post I made was posted onto the GrapheneOS Matrix/IRC channel by someone else. Daniel Micay responded and gave some great suggestions.

A summary of what he said:

  • AppArmor can never provide the same kind of security as SELinux.
  • Verified boot only for the kernel is meaningless/security theater/doesn’t help.
  • Debian is a massive issue, doesn’t provide proper security updates and isn’t a viable choice for building anything reasonably secure. It’s a waste of time to start from Debian.
  • We should define a base system, make it read-only and then build verified boot, MAC etc.
  • Musl should be used as the libc instead of glibc. Glibc is buggy and overly complex. Musl is missing some very minor security features present in glibc (the glibc implementations aren’t good anyway) so we should add the missing security features to musl.
  • Clang/LLVM should be used as the toolchain so we can enable modern mitigations like CFI.
  • Systemd is a big problem. Its developers are hostile towards modern security approaches; it adds a huge amount of attack surface/complexity and is developed by incompetent people. PID 1 can be tiny and the minimal requirements are like 500 lines of code.
  • There shouldn’t be anything like apt. It doesn’t fit into a secure OS design. GPG is insecure and shouldn’t be involved in verification.
  • We should provide base OS updates with update_engine. Everything else should be a sandboxed application updated with an application package manager.
  • Handle applications the way Android does APEX, so that verified boot extends to applications. APEX ships components as little filesystem images that are mounted and use dm-verity.
  • So the base OS is a filesystem image, updating with A/B updates using update_engine and applications should be little filesystem images too.

I think this is pretty self-explanatory.

Systemd only supports glibc so to switch libc, we also need to switch init. Daniel didn’t have a suggestion for which init to use though.

Even if that is the case, if SELinux policy is too hard for mortals to write, then that theoretical advantage doesn't help.

Yes. Only a first step. It would require something like an ELF signature check.

Fatalism.

Similarly it can be argued it’s a waste to build on top of Android: [2]

  • Android isn’t “real” Open Source. AOSP may technically be Open Source, but its behavior is not like an Open Source project at all.
  • It is developed by a company, Google, that is one of the biggest violators of privacy ever, among other evil behavior.

But he’s building GrapheneOS on top of Android anyhow.

I don’t think there’s any base Linux distribution suitable for Whonix to build on top of:

Notes here:

Replacing Debian might be worthwhile if there is any distribution that has:

[1] I would like to build a space ship, explore the universe and make peaceful contact with other space-traveling species should they exist. But that currently doesn’t look realistic.

Requires a base distribution which does that.

systemd CVEs don’t look so bad. Judging not by numbers but by whether the issues would have been actual issues for Whonix, it doesn’t look so bad. Even more so when limiting it to the core of systemd.

systemd supports tons of security features in a usable way, such as seccomp, capabilities, limits, private-tmp, private devices, read-only directories, and whatnot.

Unless there’s something really better than systemd (which I doubt) and/or resources to port to it, Whonix is settled on systemd. This is strongly related to the base distribution issue. If there were a more secure base distribution that decided not to use systemd and had something else, then this might be doable.

Same as [1]. Basically, re-base on Android? I am not convinced of Android due to [2].


No, it’s doable. SELinux is just harder to get the hang of.

dm-verity would probably be the best approach.

It’s not fatalism. He’s just suggesting we rebase on something sane.

Nobody suggested to build on top of Android.

That doesn’t make sense. Android is real open source.

That has literally nothing to do with android. It’s to do with other Google apps, not the OS.

That’s not true at all.

https://android.googlesource.com/

Linux and even the protocols used for HTTP/2 and HTTP/3 are developed by Google. Are we going to just move away from Linux and the web too?

Google only violates privacy with some of their services. AOSP itself doesn’t have any issues.

Because AOSP gives a great baseline security model to build on.

That criterion isn’t necessary. If we find a good distribution with proper security updates, we can use it as a template and then build those security features on top of it.

That’s not the same. This is doable and has already been done.

Switching libc does not.

Compare it to other inits.

A single vulnerability in runit in 2006 vs 20 consistent vulnerabilities in systemd.

Measuring only by CVEs also isn’t the best.

Lennart also won a pwnies award for lamest vendor response in 2017 https://pwnies.com/archive/2017/winners/

Those are trivially re-implemented with bubblewrap.

Seccomp: --seccomp
Capabilities: --cap-add --cap-drop
Private-tmp: --tmpfs /tmp
Private devices: --dev /dev
Read-only directories: --ro-bind /dir /dir
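
Put together, a service unit hardened with PrivateTmp=yes, PrivateDevices=yes and an empty CapabilityBoundingSet= would translate to something like the following (the daemon path is illustrative, and a real invocation would bind more paths):

  bwrap --ro-bind /usr /usr --ro-bind /etc /etc \
        --tmpfs /tmp --dev /dev --cap-drop ALL \
        /usr/bin/some-daemon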

Having some sandboxing features can’t make up for the large attack surface systemd adds.

All of systemd/src/core (which I assume is the main init part; it’s hard to tell due to the way systemd is laid out) is 53,044 LOC compared to OpenRC’s 13,423 LOC. All of systemd is over 400,000, but that wouldn’t be a fair comparison.

Other inits such as OpenRC, runit etc. are available in Debian.

That is not what’s being suggested.

madaidan via Whonix Forum:

dm-verity would probably be the best approach.

That is good. Depends which one can be implemented.
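
For reference, the minimal dm-verity flow with veritysetup (shipped with cryptsetup) looks roughly like this; the image names are illustrative:

  # Build a hash tree over a read-only filesystem image; this prints a
  # root hash that must be stored (or signed) out-of-band.
  veritysetup format rootfs.img rootfs.hashtree

  # Map the image as /dev/mapper/vroot; any tampered block fails to read.
  veritysetup open rootfs.img vroot rootfs.hashtree <root hash>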

It’s not fatalism. He’s just suggesting we rebase on something sane.

Which doesn’t exist. That’s why I call it fatalism.

That doesn’t make sense. Android is real open source.

That has literally nothing to do with android. It’s to do with other Google apps, not the OS.

That’s not true at all.

https://android.googlesource.com/

Well, they used to play games:

Linux and even the protocols used for HTTP/2 and HTTP/3 are developed by Google. Are we going to just move away from Linux and the web too?

Google only violates privacy with some of their services. AOSP itself doesn’t have any issues.

Building on top of Google risks building on top of sand. Ask Huawei. See:

https://onezero.medium.com/the-huawei-disaster-reveals-googles-iron-grip-on-android-b1ccee34504d

Business interests outweighed a solid technical design; with one, something like a license withdrawal wouldn’t have led to such a disaster for Huawei.

Linux and even the protocols used for HTTP/2 and HTTP/3 are developed
by Google. Are we going to just move away from Linux and the web too?

It’s colorful, not black and white. Building on top of AOSP is risky unless one is ready to maintain it all oneself (or someone else is likely to do that).

I couldn’t fork/maintain Debian if it in theory went evil, but I think it’s likely that others would be able to do that, and would.

With protocols such as HTTP/2 and HTTP/3 which are implemented by
browsers and web servers there’s now little power that Google could
abuse. Seems very different to me.

Google has already shown that they play games. Therefore I wouldn’t risk rebasing on Android.

That criterion isn’t necessary.

Necessary no, but a justification for the workload.

Compare it to other inits.

Freedesktop Systemd : Security vulnerabilities, CVEs

Runit : Security vulnerabilities, CVEs

A single vulnerability in runit in 2006 vs 20 consistent vulnerabilities in systemd.

It would be more useful to compare runit to systemd core, PID 1 (init).

I am having difficulties finding any vulnerabilities in systemd core.

That list looks huge but for example “systemd-resolved through 233
allows remote attackers to cause a denial of service (daemon crash) via
a crafted DNS response with an empty question section.” That was a bug
in systemd-resolved, not systemd core. systemd-resolved wasn’t / isn’t
needed / used in Whonix.

Measuring only by CVEs also isn’t the best.

Lennart also won a pwnies award for lamest vendor response in 2017 https://pwnies.com/archive/2017/winners/

[1] This goes back to:

As per:

Those are trivially re-implemented with bubblewrap.

Seccomp: --seccomp
Capabilities: --cap-add --cap-drop
Private-tmp: --tmpfs /tmp
Private devices: --dev /dev
Read-only directories: --ro-bind /dir /dir

Having some sandboxing features can’t make up for the large attack surface systemd adds.

systemd has excellent functionality. Whonix source code:

find . -type f -not -iwholename '*.git*' | grep systemd | wc -l shows 143 matches. Re-implementing that… Moving away from systemd isn’t trivial at all and is unrealistic with current project resources.

All of systemd/src/core (which I assume is the main init part; it’s hard to tell due to the way systemd is laid out) is 53,044 LOC compared to OpenRC’s 13,423 LOC. All of systemd is over 400,000, but that wouldn’t be a fair comparison.

Same as [1].

Other inits such as OpenRC, runit etc. are available in Debian.

But they are not well supported by packages because they are not the default init system.


It might exist. We should look for one.

That’s not playing games. They need to develop the thing first. Would you rather they make an empty git repo?

Again, it’s about google’s other apps, not the OS.

The Huawei thing was forced by the US government anyway. Google can’t just ignore the government.

Nobody’s suggesting to.

The init system is not the same as any random package.

madaidan via Whonix Forum:

It might exist. We should look for one.

I’ve looked recently and didn’t see much. If it had reasonable
popularity then it would be listed under

That’s not playing games. They need to develop the thing first. Would you rather they make an empty git repo?

If there’s a git commit - at least one not fixing security-relevant issues - it should be pushed briefly after being committed. Instead there are no public releases until the release date, while partners already get access earlier.

Again, it’s about google’s other apps, not the OS.

The OS gets less and less, being replaced by more and more proprietary stuff, as per the references posted previously.

The Huawei thing was forced by the US government anyway. Google can’t just ignore the government.

A solid (just normal, no stunts) Open Source design could not have been ordered away by the US government. Huawei can still use Linux - because it’s real Open Source. (Overlooking blobs in the source repository.)


Partners didn’t get access earlier. Nobody got access except Google and they had good reason for it:

We felt that open sourcing it at that point would be difficult because people would try to wedge it into phones and create a bad user experience.

i.e. if they open sourced it, people would try to use it before it’s ready so they only open sourced it once finished.

The apps included in AOSP were never complete implementations. They were just templates and were meant to be replaced.

Google’s other proprietary apps are irrelevant to AOSP. AOSP doesn’t even include them.

Huawei could still use AOSP because it’s open source. They just couldn’t use Google’s proprietary apps.

Complain about Google’s apps being proprietary if you want but don’t pretend it’s the same as the OS.

It would take me time to get to the bottom of a labyrinth of “a said b and c said d”…

I actually like the vision Lennart Poettering laid out here:

Also, the functionality of systemd core, the drop-in config functionality and its success (distribution adoption) don’t make it look incompetent at all.

In comparison, on Android, the people who reported http://cloak-and-dagger.org, judging by the timeline, also didn’t seem to be happy with Google’s way of handling the security vulnerability.

There are plenty of Pwnie Awards, as per Pwnie Awards - Wikipedia. Debian was also on the list.

And also Linux

Brown, Bob (July 31, 2009). “Twitter, Linux, Red Hat, Microsoft “honored” with Pwnie Awards”. NetworkWorld. Retrieved January 3, 2013.

Therefore it’s hard to use that as a criterion for judgement.

That makes my point that they don’t publish git master branch as normal Open Source projects do.

If they are worried, they should just request that their trademark be replaced with something not to be confused with Google.

Even these are getting fewer and harder to use, as reported by programmers. It is getting harder and harder to actually make use of AOSP. In comparison, Debian has an ideological commitment to Libre Software; Google, not so much. Therefore if I had to bet, I guess it’s more likely that Debian (or successors) will be usable longer than Android (or successors).

On the AOSP website I don’t see any download links, only source code. Not a single phone is sold with AOSP. There are also no pre-made, downloadable ROMs on the AOSP website. It doesn’t seem like a usual Open Source project.

It’s similar to https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=898259#17 - should Microsoft be an upstream for Debian? Is Android an OK upstream? Technically, “none of this necessarily contradicts Debian’s formal standards / Open Source / Free Software definition”.

I could ignore everything else and go with Android as an upstream, but there are other factors. Concepts such as tivoization, malicious features, antifeatures, tyrant software, treacherous computing or DRM (digital restrictions management) are less popular and less well defined. What could possibly go wrong. Even though these concepts are less well defined, Android seems a rather clear case of them to me, and therefore doesn’t seem like a good choice. The question is: is it sane to totally discard any conflict of interest just because something is Open Source? Some say yes, I say no.


Most distributions don’t care about security, so that’s not a good indicator. We could probably even create our own init with systemd-compatible syntax if you want.

I don’t see why you’re comparing it to android.

Debian’s was because of a mistake. It isn’t the same and we’ve already talked about replacing Debian.

And we know Linux is a mess too hence hardened-kernel.

madaidan via Whonix Forum:

Most distributions don’t care for security so that’s not a good indicator.

And security-focused distributions are also using systemd, or don’t exist.

We could probably even create our own init with compatible syntax with systemd if you want.

The main issue is this one:

Then there are also distributions that went non-systemd, such as Devuan. That doesn’t look easy at all.

And then there were also a lot of people working on that.

I don’t see any reason to pick on the init system, as there are a lot of things which arguably could use a rewrite:

Due to this it’s hard to pick which projects to reinvent. Also, project resources are very scarce. Therefore I cannot take up tons of complex projects.

Issues mentioned in this thread are also ambitious, to say the least. I am also convinced it doesn’t make sense to debate priorities, since these debates are too complex and endless. There’s always an argument which trumps other arguments.

I guess it’s about defining what I can do with the Whonix project. There are things I am good at and things I am not good at. For now, I certainly cannot become upstream for tons of new packages and/or a significant amount of C / assembler code.

  • research and implementation project
  • take existing components available from Debian, rare exceptions
  • reconfigure for anonymity/privacy/security according to research results
  • use things which are already documented elsewhere and feasible to
    implement
  • no huge architectural changes such as recompilation of packages from
    Debian / don’t replace systemd
  • if there’s a more secure base distribution, worthwhile, suitable for
    re-basing, rebase to it

I don’t see why you’re comparing it to android.

Because the suggestions originate from Daniel Micay, who works on GrapheneOS, which is Android-based. He has a point in his analysis and there are lots of other valid points here too, but I still think it’s not feasible to address them all at once, since there’s always another argument around the corner invalidating the whole design.

Debian’s was because of a mistake. It isn’t the same and we’ve already talked about replacing Debian.

Debian, as far as I know, didn’t apply any organization- or policy-level fixes that would prevent such an issue in the future.


Being security-focused doesn’t mean they’re interested in doing big security changes. The only one I can think of is AOSP which uses its own init.

The init is the first process started and one of the most privileged processes. A vulnerability in systemd can be critical.

We should fix as much as possible. We don’t have to rewrite the entire OS just to fix one important part.


If I may comment as an outsider, it seems as though you guys are getting different subjects mixed up.

The subject line is about the system security model. As the introductory post said, at the moment, Whonix uses a traditional Linux model inherited from Debian, with essentially no isolation between applications. And under that model, a lot of the things you’re talking about seem relatively unimportant.

You seem to have gone afield from the question of making a model change, to talk about point hardening things. And maybe that’s right, because I don’t think you can actually improve the model very much under Debian, Android, or anything else that’s really available. But if you’re going to worry about point hardening, I think you should think first about hardening things that are actually going to be under attack.

Threats and protections

Probably the most interesting remote attackers are people at the “other end” of actual Tor connections. Most of the attack surface available to them is in actual applications, like Web browsers and programs that might be used to view or manipulate downloaded files.

You ship the Tor browser, various archive programs, PDF utilities, media players, etc., and you allow users to install almost anything Debian offers. You can reasonably expect users to download Microsoft Word documents and view them in LibreOffice, to view PDFs with random viewers, to play potentially hostile audio and video files, and to do all sorts of other risky things with applications.

Those application programs can leak data over Tor. More relevant to the sorts of issues you’ve been talking about, though, is the fact that if something manages to completely break through one of them, then all of the interesting information inside the workstation becomes available by design (and what’s brilliant about Whonix is that what’s exposed at least does not include the real IP address or computer serial number or whatever).

If you don’t have isolation between the applications, then things like kernel bugs and systemd bugs really don’t matter very much, especially not on the workstation.

The things you’ve talked about that actually harden the applications are things like libc (but I suspect the vast majority of the applications’ bugs are in their own code) and stack protections (which are mostly compiler options that you could perhaps turn on). Changing the model to isolating applications would be a win… but I’m going to argue that that’s too hard.

Kernel hardening and the init process are second order issues. They will only really start to matter after you have application isolation.

On the workstation, there’s almost zero remote kernel attack surface. There is truly zero remote systemd attack surface. If a remote attacker can interact with the workstation kernel or systemd enough to really exploit them, that implies that the attacker already has all the interesting information in the workstation, and has therefore already owned “the user’s data”.

The only reason to try to elevate privilege once you were inside the workstation would be to try to attack the host via hypervisor bugs… which may or may not actually require you to be running in kernel mode at all. The hypervisor presents a really large, really weird attack surface for any code at all running in either the workstation or the gateway VM. And it’s an attack surface that’s easy to forget about.

The gateway VM is a little different from the workstation, but systemd bugs and most kernel bugs still seem relatively uninteresting there.

Both remote attackers and attackers who’ve already owned the workstation will have reasons to target the gateway.

Truly remote attackers, on the other ends of Tor connections, are going to have to get into the workstation first to get access to most of the juiciest targets on the gateway. They have almost no direct access to the gateway’s kernel, only a bit more to the Tor process, and none at all to services like systemd. To even poke the TCP/IP stack, they would have to own the workstation first.

I’m not going to say that nobody at all might attack the gateway from a more local “remote” position… but even an attacker on the LAN only has access to the gateway’s IP stack and relatively limited parts of its Tor process.

So, if you were going to talk about changing OS platforms, I’d suggest you think much more about application hardening and isolation, and much less about stuff that’s far from the attack surface… like the init system.

And now I’m going to argue that you can’t do the isolation.

Isolating in Debian

To isolate applications in Debian or any traditional Linux environment, you’d have to deal with a bunch of stuff that would break applications if you changed it. Long before you had to worry about the kernel or init system, you’d run into probably-insurmountable issues with things like…

  • X11. X doesn’t isolate its clients at all. Any program can mess with the keyboard, clipboard, other applications’ displayed windows, and who knows what else. There’ve been various attempts to fix it, but they’re hard to get working and most of them don’t get maintenance. It was a bad design even by 1985 standards. I had to tell the CIA as much in about 1990.
  • The fact that applications largely expect to share the file system name space with one another, and can get pretty unusable if they don’t share at least a lot of it.
  • D-Bus. This passes around a huge number of messages that ask for who-knows-what, which results in a complicated security policy, much of it defined per-endpoint by developers who may not have much clue.

Basically I don’t think you can do it, period. It’s too big a project.

I also don’t think you could do it in Android.

Isolating in Android

Android does try to isolate applications from one another. An application at least has a chance of keeping a file private. Something properly written to take advantage of the isolation can get something out of it. A nice contained application like a cryptocurrency wallet can get something out of it.

… but you’d be forced to give your users a lot of applications that weren’t written with so much care, and the system is complicated enough that not only will it probably have breaking bugs, but it almost guarantees bugs in how applications use it.

It’s not really true that Android has a model for isolating applications. What it has is a huge collection of shared resources and IPC endpoints, each with its own ad-hoc set of security restrictions. The whole thing is kind of reminiscent of D-Bus, but even more weird and complicated and used by far more programs. Those resources and restrictions change so much that Android has to formally version the API; each Android app actually declares which version it targets.

A lot of Android’s IPC-based APIs let one application ask another, or some part of the system, to do things that might result in network traffic… and the recipients of those requests rarely worry at all about what information that traffic will leak. Lots of services are architected to expect to work with “the cloud” in various ways. You can take that stuff out (which is what I think GrapheneOS tries to do), but you’re fighting the architecture all the way.

Furthermore, Android’s best supported method of sharing data files among applications, the one that’s by far the most commonly used by actual programs your users might want to run, is to dump them all into a big shared directory tree with no meaningful security controls at all. There’s this nifty “provider” API that nothing uses… and then there’s the unstructured shared storage that everything uses. And even when isolation is enforced, it tends to be more like “give application A access to all data in application B”, rather than “give application A access to this particular document”.

So you get limited practically useful application isolation from Android.

I also agree with Patrick that it would be a bad idea to become dependent on Android. In the end, Google will take Android in whatever direction benefits Google, and that’s not likely to include caring at all about whether the core system is in any way useful for anonymity. It may not include caring about whether AOSP without the Google apps and services is even usable at all. And it probably won’t include caring about privacy in general, or at least about privacy from Google.

For that matter, a lot of real third party Android apps are actively hostile to anything resembling anonymity. Dominant apps actively try to circumvent any privacy controls that Google does bother to put in place. Not only are those apps dangerous in themselves, but they still provide the functionality that users need… which makes it hard to generate demand for alternative apps with better privacy, and drives the overall ecosystem away from what you want.

A random Debian program has more access to other apps’ data… but is far less likely to be deliberately trying to thwart the aims of Whonix than a random Android program.

You could end up having to maintain a huge amount of divergent code if you went with anything based on Android.

Upshot on isolation

To be honest, I can’t even think of any realistically usable operating system that has a good isolation model. The closest thing would be something like Genode’s Sculpt, and that’s just not ready. In fact, I think you get more actually useful security by being integrated into Qubes than you’d get by going to, say, Android. At least in Qubes the user can take a document off into an isolated VM and work on it.

You may be able to do some ad-hoc sandboxing with namespace-based stuff like bubblewrap. But be careful; that kind of thing is complicated.

Some random comments on hardening

You’ve talked about syscall filters like AppArmor and SELinux (and the more granular modes of things like bubblewrap), and about anti-buffer-overflow measures like canaries, poisoning, ASLR, “check-before-call”, stack frame reorganizations, etc, etc, etc.

All of these “hardening” measures are hacks. They make assumptions about the behavior of programs that aren’t guaranteed to follow those assumptions. If they work, they work. If they don’t, they don’t. And you have no real way to know whether you’ve gotten them right. The buffer overflow protection ones are especially suspect; I don’t think there are any that don’t have relatively generalizable workarounds.

I’m not saying you shouldn’t use hardening or sandboxing… but if you have a choice between “hardening” a random half-assed application (or library), and finding a good, well-written, well-analyzed, well-tested application written using relatively fail-safe tools, I think you’re nearly always going to get better security with the intrinsically safer application.

I don’t think it’s fair to say that SELinux is harder to set up than AppArmor. There’s nothing intrinsically complicated about what SELinux does. And SELinux comes out of the box with a reasonably nice granular default policy… or at least it does on Red Hat/CentOS/Fedora. I don’t really know about the SELinux policy available under Debian.


You’ve written a lot about how kernel bugs don’t matter much when there’s no application isolation yet we are working on application isolation as mentioned in the post. Read the apparmor-profile-everything and sandbox-app-launcher sections.

We’re aware of X11 and it’s already been discussed to death. We’re likely going to switch to Wayland but the current issue is that XFCE doesn’t yet support it. If we can’t switch, I can add X11 sandboxing to sandbox-app-launcher via a nested X server like xpra.

AppArmor supports DBus mediation.

It’s not that complicated. There are only a few main namespaces: mount, PID, net, IPC, UTS, cgroup and user namespaces. There’s also a time namespace in very recent Linux versions, but nothing uses it yet.
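
For instance, all of these can be entered directly with util-linux's unshare:

  # Start a shell in fresh mount, PID, net, IPC, UTS and cgroup namespaces.
  # --fork is needed so the shell becomes PID 1 of the new PID namespace;
  # run as root, or add --user --map-root-user for an unprivileged demo.
  unshare --mount --pid --net --ipc --uts --cgroup --fork /bin/bash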

The short answer to all of those responses is that I think you’re underestimating how hard it is to do any of that in any useful way, and still keep the system remotely usable even for sophisticated users, let alone naive ones.

The various X security extensions have always failed to get adoption because they break behavior the user expects, and even make programs crash outright. I don’t see how xpra is likely to be very different. Adding D-Bus mediation is trivial, but developing the policy for what messages to actually let through is not trivial. There aren’t very many name spaces in Linux, but deciding what specific items need to be shared into other processes’ name spaces is still a hard problem… assuming that you can even share them at all at the granularity you need.

In all of these things, the sharing you need to enable is very close to the sharing that will destroy a lot of the security. There are a million details to think about, and any little mistake can hose you.

Also, don’t forget that isolating applications in the sense of programs has serious limitations from the beginning. The user is probably going to use a lot of the programs on the system to process information that’s effectively in “security compartment” X… and a lot of the same programs to process information in compartment Y. To deal with that, you need to isolate different instances of the same program, depending on which data they’re handling. You need to do that without confusing the user beyond all sanity, and without being too vulnerable to likely user errors.

That’s all really, really hard if the programs themselves don’t know at least something about what you’re doing… which they don’t.

On traditional Linux, you’re coming in after the fact and trying to isolate a bunch of pre-existing programs that don’t expect to be isolated from each other, that try to cooperate using the very communication channels you have to sever, and that don’t expect to be responsible for maintaining any isolation at all between different things they themselves communicate with.

On Android, you’re dealing with a developer culture that’s, if anything, more concerned with circumventing the system’s isolation than with assisting it… and there’s still no concept of different instances of the same program. What Android’s application isolation is really trying to do is to let users decide which developers they trust with which data, which is different from generalized compartmentation.

I’m not saying you’ll get nothing out of the things you suggest, but you write as if you think you can get a lot with the resources you have, and I’m not seeing how.

How will sandbox-app-launcher handle use cases when you download or create a file, but you want to open it with another app later? Or when an app wants to open another app? I assume you intend this to work without modifying the apps themselves.

There is a world-writable directory, /shared, you can use to share files across sandboxes. Access to this directory can be configured by the permissions (read-only, read-write, or none at all).

That currently won’t work if the other app is sandboxed too. You’d need to open both apps separately.

That’s going to break a lot of apps that like to open other apps, like for example opening help in the web browser. Or apps that execute other programs to work.