Overview of Mobile Projects that focus on security, privacy, anonymity, source-available software, and/or Freedom Software.

This is a quick and dirty page. I don’t want to spend too much effort on it. It’s very low priority. Even fairly categorizing these projects can open a can of worms, and so can keeping the page up to date. Better to add any disclaimer on top as required.

The target audience is developers who are researching what kinds of projects exist so they can decide which ones to work on or fork, as well as advanced users. All of them need to take the information with caution and independently research whether the provided information is (still) accurate or whether they disagree for some reason.

Blanket disclaimer:
All statements are either false or incomplete.


This should really be added to the wiki page. The CopperheadOS section is especially damaging.

Done.

Mobile Operating System Comparison: Difference between revisions - Whonix


I previously didn’t reply since I am not that deep into Android. And it would have taken a long time to form an opinion reviewing your technical points.

Your technical arguments against even opt-in root decreasing security may be true. Likely, they are true. I will assume in my reply, that they are in fact true.

I think our disagreement is prioritization. You seem to prioritize security over user freedom. [1]

Here is the freedom restriction that I am seeing:
The 4 original essential software freedoms as defined by the Free Software movement are granted. However, since the inception of the 4 original essential software freedoms, other issues came up sometimes called tivoization, malicious feature, antifeature, tyrant software, treacherous computing or DRM (digital restrictions management).

Non-root enforcement can be considered a lesser form of tyrant software or antifeature since it doesn’t restrict flashing an alternative, but cripples the system in major ways. You might argue you can use a userdebug version or fork the software and compile a version that doesn’t do this, but then the network effect and scale of a project become so great that rolling one’s own fork has negligible effect and upstream choices are the de-facto state of things.

Non-root enforcement is also similar to DRM. While DRM is about applications which don’t allow users to easily and freely copy data on their own devices, non-root enforcement here leads to users not being able to backup/copy/migrate their application data from one phone to another. Either the application has a backup / data export feature or the data is “trapped” inside the phone. Even with an app-dependent app data backup feature, it’s better if users who choose so can get access to the raw data stored by the app, for convenience (scripting backups instead of using tons of different data export features, since a data export feature may be incomplete, and to allow analysis of app data by the user).

Non-root enforcement also aids DRM-enabled applications. If GrapheneOS gets more popular, perhaps picked up by phone manufacturers, resellers or mobile carriers, it will be easy for application developers to utilize DRM to prevent the user from accessing application data, making the phone work in the interest of the application developer rather than the phone user.

Phone manufacturers, resellers or mobile carriers couldn’t be blamed for refusing root access. That would already be the GrapheneOS default. They could conveniently blame it on “security”. Some power users might be able to flash a root-enabled version, but the effect would be negligible. In practice, this will result in a lot of users having their freedom restricted.

[1] But even the security vs user freedom view is a false dichotomy. Bootloaders allow for the flexibility to boot into a root-enabled mode. There could be a key combination and/or boot menu which allows users to boot into root-enabled mode. There could be timeouts [the user has to wait 5 seconds before proceeding past an anti-root warning] / strong warnings. Booting into root-enabled mode could make subsequent boots into non-root-enabled mode show a warning that the device may be compromised due to the previous boot into root-enabled mode.

The question is rather: how much time/effort/money would be required to grant user freedom (root) in a secure way (such as alternative boot options)? If you were offered 1 million USD and had time, could you implement root access in a secure way? This is unrealistic and just an example to encourage imagination of solutions. Is this really a question of unsolvable security issues vs user freedom? Or is it rather a prioritization of the effort/time/money required to implement user freedom (root) vs other goals (just don’t prioritize user freedom, make something work for novice users, monetize more quickly (understood, we all need to eat))?

That’s not true. If users want root, they can unlock their bootloader and make whatever modifications they want. Users that don’t need root shouldn’t have it exposed at all.

Some manufacturers don’t allow unlocking the bootloader but that’s the fault of the manufacturer, not Android. Would it be the fault of Whonix if someone made a fork which ripped out the superroot boot mode?

You can’t allow root in production if you want security. If you allow unrestricted root, the attacker will just go for that.

madaidan via Whonix Forum:

That’s not true.

I don’t think that statement can be shrunk to a single line and I think
I’ve already elaborated why I came to that conclusion.

Some manufacturers don’t allow unlocking the bootloader but that’s the fault of the manufacturer, not Android. Would it be the fault of Whonix if someone made a fork which ripped out the superroot boot mode?

No, because Whonix would have implemented superroot and someone else would have ripped it out. One can’t be responsible for the actions of a third party, and no actions by Whonix could be used to justify that. It would be different if Whonix didn’t implement superroot by default. Plus, I would criticize that and argue for a superroot boot mode.

You can’t allow root in production if you want security.

I don’t understand that premise. Bootloaders allow for that flexibility.

If users want root, they can unlock their bootloader and make whatever
modifications they want. Users that don’t need root shouldn’t have it
exposed at all.

My previous post describes how bootloaders allow for that flexibility.

It’s a question of usability vs freedom vs security vs priorities vs
economic realities.

About how hard it will be to gain root: unlock the bootloader (more difficult) or a key combination during boot (easier). It’s very similar; the key combination during boot just might not be implemented due to low priority.

If you allow unrestricted root, the attacker will just go for that.

Meaning, it needs to have bad usability, otherwise users will do it?

Android implemented unlocking the bootloader and someone else ripped that out.

https://android.googlesource.com/platform/external/avb/+/master/README.md#Locked-and-Unlocked-mode

It’s not that simple. One of verified boot’s main purposes is protection from physical attacks. If you allow someone to just boot into a different mode with root without unlocking the bootloader, the physical security is nil.


Simple: no
Doable: yes

No, it’s not doable without worsening security:

If you allow someone to just boot into a different mode with root without unlocking the bootloader, the physical security is nil.

It can still have the same physical security.

Looks doable to me in principle, looking at some arbitrary unlock bootloader guide. The procedure:

Steps include, simplified:

  • do something on your computer - skippable [1]
  • boot in normal mode and enable something (USB debugging and OEM unlock) in android settings (usability still good if this is kept)
  • connect USB - skippable [1]
  • do something on your computer - skippable [1]
  • automatic factory reset (I don’t see a need for this except DRM, which is a bad reason)

I don’t see why this procedure couldn’t be simplified in principle.

  1. boot in normal mode and enable something
  2. reboot
  3. bootloader now allows to boot into root mode

It’s a usability enhancement and unrelated to security.

Also, for physical security, root mode could require a (special) PIN code or whatever. (What’s good enough physical security for normal boot is also good enough for root-enabled boot.)


[1] This can certainly be skipped; it doesn’t provide security (rubber ducky).

It’s necessary to prevent the attacker from getting your data. It’s a security feature and has nothing to do with DRM. Would you rather an attacker unlock your bootloader, modify the system and make a rootkit to steal your data once it’s decrypted?

It is related to security. A physical attacker can easily gain root and access everything.

With the way it is now, the attacker has to compromise the device remotely first to enable OEM unlocking and then have physical access to the device to unlock the bootloader which then wipes all user data so they can’t access anything.


Local or remote attacker?

If the user chooses to enable root mode and chooses root boot in the bootloader, I don’t see how any attacker has any advantage.

A local attacker can try to boot into either non-root or root mode. Either way, there are encryption / access controls. I don’t see why in one case the encryption / access controls would be weaker.

I am not convinced this dilemma exists.

I don’t see why Android couldn’t do something similar to what is planned in multiple boot modes for better security: persistent user | live user | persistent secureadmin | persistent superadmin | persistent recovery mode - #32 by Patrick. (That concept is generic. It works for both hosts and VMs.) It seems like saying “when booting into superadmin, delete /home/user first”. When using full disk encryption (FDE), it doesn’t matter which boot mode is used. If a local attacker doesn’t know the password, it’s considered secure.

I don’t see how Debian-based vs Android-based changes anything conceptually.

Current model:

  1. “the attacker has to compromise the device remotely first to enable OEM unlocking”
  2. “then have physical access to the device to unlock the bootloader which then wipes all user data so they can’t access anything.”

Comments:

  1. If an attacker can remotely enable OEM unlocking, don’t they already have root access? Otherwise, how could a remote attacker enable OEM unlocking?
  2. Is irrelevant due to 1).

Only a local attacker can unlock the bootloader.

Encryption doesn’t cover everything such as the bootloader. If you allow the attacker to unlock the bootloader without wiping user data, the attacker can just replace the bootloader with a malicious one that uploads all of your data the next time you decrypt it.

Encryption + verified boot is necessary for any meaningful physical security.

We don’t have local attackers in our threat model. Android does.

They don’t need the password to modify unencrypted parts.

The purpose of verified boot is to prevent the attacker from persisting as root. It doesn’t matter if the attacker has unlimited capabilities, verified boot will still revert their changes.


I see. For phones, that’s because the “BIOS” root of trust is read-only in hardware; it verifies the first stage, the bootloader, which then verifies the kernel and so forth in a chain.
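For illustration, here is a minimal sketch of such a chain of trust, assuming each stage simply carries an expected SHA-256 hash of the next stage. Real implementations such as Android Verified Boot use signatures and hash trees (see the avb README linked earlier in this thread); this only shows the chaining idea:

```python
# Minimal sketch: each already-verified stage checks the hash of the next
# stage before handing over control. Real verified boot uses signatures
# and hash trees rather than bare hashes.
import hashlib

def verify_chain(stages, expected_hashes):
    # stages[0] is the read-only hardware root of trust and is trusted
    # implicitly; stage i ships the expected hash of stage i+1.
    for blob, expected in zip(stages[1:], expected_hashes):
        if hashlib.sha256(blob).hexdigest() != expected:
            return False  # refuse to boot: the image was tampered with
    return True

bootloader, kernel = b"bootloader image", b"kernel image"
expected = [hashlib.sha256(bootloader).hexdigest(),
            hashlib.sha256(kernel).hexdigest()]
print(verify_chain([b"rom", bootloader, kernel], expected))          # True
print(verify_chain([b"rom", b"evil bootloader", kernel], expected))  # False
```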

Verified boot can in principle also work on computers using SecureBoot and/or heads. heads uses a TPM and measured boot. It doesn’t extend to user space, but that’s just a lack of implementation rather than a conceptual impossibility.

GitHub - linuxboot/heads: A minimal Linux that runs as a coreboot or LinuxBoot ROM payload to provide a secure, flexible boot environment for laptops and servers.

Example implementation (see “boot integrity”):
https://insurgo.ca
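For comparison, a minimal sketch of the measured boot idea used by heads, assuming the standard TPM PCR extend rule (new PCR value = hash of old value concatenated with the measurement). Unlike verified boot, nothing is blocked; the final PCR value simply proves what was booted:

```python
# Minimal sketch of measured boot: every component is hashed into a PCR
# before it runs. The final PCR value cannot be forged, so it attests to
# exactly which components were booted.
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    # TPM-style extend: PCR_new = H(PCR_old || H(component))
    measurement = hashlib.sha256(component).digest()
    return hashlib.sha256(pcr + measurement).digest()

pcr = bytes(32)  # PCRs start out zeroed
for component in (b"bootloader", b"kernel", b"initrd"):
    pcr = extend(pcr, component)
print(pcr.hex())  # any change to a measured component changes this value
```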

I am writing this to show that I don’t see a conceptual difference between computer hardware and phone hardware which would allow a secure root-enabled boot mode without user data wiping in one case (computers) but make it impossible in the other case (Android).

Indeed but there’s also no strong reason why bootloader contents would have to remain secret.

That’s a rather complex attack. Similar to installing a hardware keylogger, microphone (can guess keystrokes) and/or miniature camera into/near a computer during the absence of the victim.

Also, “just replace the bootloader” sounds simple, but there are many phones where people would like to unlock the bootloader and that freedom is refused by the device vendor. For many phones, mortals cannot replace the bootloader.

That feature might have a point. But there should be an opt-in or opt-out. (Probably opt-out, since it’s the default already.) If it’s possible to securely implement booting into normal boot mode and enabling OEM unlocking in Android settings, then it must conceptually also be possible to add an option to disable “wipe user data when the bootloader gets unlocked”.

Let’s consider local attackers. If it is figured out that this is infeasible, it can be ignored, but it would be good to at least consider it and describe why that is. heads / Insurgo make covering some local threat models seem realistic.

For that purpose, I’ve just now refined (better written down) the concept of multiple boot modes.

Agreed but I don’t see how that’s related.
Modified unencrypted parts would hopefully be caught by the verified boot implementation.

That is independent of, let’s say, whether app X or app Y is installed. I.e. it doesn’t matter if the fully booted Android allows root or not.

The only thing required in step 1) is to enable OEM unlocking. A remote root compromise (which, suppose, would be undone by verified boot) to enable OEM unlocking would be the only thing the attacker would care about. The attack would have succeeded so far.

My premise is:
remote exploit that gained root on android → can enable OEM unlocking in android settings

If that is the only thing the attacker does… and we accept the premise that verified boot will undo the attacker’s root… then still, after reboot, OEM unlocking remains activated. Now back to what you said…

And again my summary / rewrite of the current model into steps.

  1. in this threat model suppose a root exploit gained root temporarily and enabled OEM unlocking
  2. then have physical access

Again, the current model:

As I wrote before:

I don’t see why this procedure couldn’t be simplified in principle. Slightly refined:

  • boot in normal mode and enable something
  • reboot
  • bootloader now allows to boot into root mode (require physical button press)

What you said (“With the way it is now, the attacker has to compromise the device remotely first to enable OEM unlocking and then have physical access to the device to unlock the bootloader which then wipes all user data so they can’t access anything.”) is still the same under my proposed model. It just has better usability, because fewer steps are required.

Not really. Even if you think modifying the bootloader is too complex, you can modify something simpler like the base system. Encryption on android only covers the userdata partition because there’s no point in encrypting the others when they’re covered by verified boot and hold no sensitive information.

How do you make it opt-out while not allowing the attacker to opt-out?

Local attackers are infeasible in the VMs which is what I meant since we can’t secure the host from a VM. If we’re talking about Whonix Host/Kicksecure on bare metal, then we can have physical security with encryption and verified boot.

We’re talking about allowing the user to unlock the bootloader without it wiping your data. If you unlock the bootloader, verified boot is disabled.

It’s not the same because it doesn’t wipe user data. If the attacker has compromised the device remotely to enable something and then has physical access, they can boot into root mode and gain root.

madaidan via Whonix Forum:

Not really. Even if you think modifying the bootloader is too complex, you can modify something simpler like the base system.

Same difficulty. The internal storage drive of an android device cannot be easily removed. Without booting the device and going through permitted APIs, no file can be modified. In comparison, computer hard drives can be unplugged, plugged into another computer and modified there. Maybe phone storage can be de-soldered?

This is btw also a data recovery and security disadvantage of phones: the internal storage drive cannot be removed by mortals. Hence it cannot be mounted in a device (a computer or so) considered secure. Therefore it is hard to do data recovery or to check for malware.

I’ve also read somewhere that the internal storage drive of an android device is “married” to the hardware by the disk encryption. I.e. when desoldered and mounted elsewhere, it cannot be decrypted because the disk encryption key is practically impossible for mortals to extract (similar to smartcard chips). At most the system partition could be read, but data in the user partition is inaccessible.
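As an aside, a minimal sketch of how such “marrying” of storage to hardware can work, assuming the disk key is derived from a hardware-unique secret that never leaves the SoC plus the user credential (Android’s real scheme is more involved, using a hardware keystore and per-file keys):

```python
# Minimal sketch: the disk key mixes a hardware-unique secret (fused into
# the SoC, never readable from outside) with the user credential, so the
# flash contents alone are useless off-device.
import hashlib, hmac

HW_UNIQUE_KEY = b"hypothetical-secret-fused-into-the-soc"

def disk_key(user_credential: bytes) -> bytes:
    # Only the original device can compute this: HW_UNIQUE_KEY never
    # leaves the hardware, so a desoldered chip cannot be decrypted.
    return hmac.new(HW_UNIQUE_KEY, user_credential, hashlib.sha256).digest()

print(disk_key(b"123456").hex())
```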

Encryption on android only covers the userdata partition because there’s no point in encrypting the others when they’re covered by verified boot and hold no sensitive information.

Mostly yes.

(I guess an argument could be made for encrypting the system partition and /boot (same as for Linux desktops), but maybe not a strong one; it is overruled by other considerations and unrelated.)

How do you make it opt-out while not allowing the attacker to opt-out?

Same way you make an opt-in to USB debugging while not allowing the
attacker to opt-in USB debugging.

Same way you make an opt-in to OEM unlocking while not allowing the
attacker to opt-in OEM unlocking.

OEM unlocking / USB debugging are options where one has to boot into
normal mode first and then go to settings. More settings such as “do not
wipe user data when unlocking bootloader” could be added there.

Local attackers are infeasible in the VMs which is what I meant since we can’t secure the host from a VM.

Alright, sure.

If we’re talking about Whonix Host/Kicksecure on bare metal, then we can have physical security with encryption and verified boot.

Great.

We’re talking about allowing the user to unlock the bootloader without it wiping your data. If you unlock the bootloader, verified boot is disabled.

There are two subjects here.

a) A simplified root opt-in.

b) A simplified root opt-in without wiping user data.

Subject a) is fully independent from subject b). But subject b) depends
on subject a). And certainly related.

For subject a), I don’t see why an opt-in root-enabled boot mode necessitates unlocking the bootloader or disabling verified boot.

Android might currently disable verified boot when unlocking the bootloader, but current implementation legacy must not be a hindrance for future design proposals.

Consider the many arbitrary, proprietary phones with Google Android. For a lot of these phones there are - manufacturer-unwanted - root unlock procedures available. In essence, these run a known exploit to gain root temporarily, which allows installing something to gain root permanently. At no stage do the bootloader or verified boot enter this concept.

On the contrary, there are many phones where rooting is possible while a bootloader unlock has not been accomplished by the modding community.

This is to show that bootloader / verified boot / rooting are all somewhat independent. The only thing that is different in my proposal is the way to accomplish the goal.

Either run an (often closed source) exploit from an unknown source somewhere on the internet to enable some setting in Android, or provide simpler steps the user can use (boot into normal mode, opt in to some settings, reboot, some key combination at boot).

It’s not the same because it doesn’t wipe user data.

If the attacker has compromised the device remotely to enable something and then has physical access, they can boot into root mode and gain root.

Suppose the attacker has (temporarily, until reboot) root-compromised the device remotely and then has physical access: why is root even a consideration once physical access is established? Most data by then is accessible anyhow, even without root.

Well, there might be one barrier: FDE and/or the lockscreen. But a root compromise would also leak FDE and/or lockscreen credentials?

But I don’t see much issue anyhow. When we start with “suppose the attacker has (temporarily until reboot) root compromised”, well, then most is lost already anyhow. After such a (temporary until reboot) root compromise, most data can be extracted remotely by the attacker anyhow. Why one would bother with local access after a remote root exploit is unclear to me.

I sent this via email and it didn’t work so I’m copy-pasting that directly to here with whatever mangling happened due to email / MIME nonsense. Don’t feel like going through it all again and correcting it. A bunch of further posts were made here since the one that tagged me and I’m not particularly interested in reading through those or replying to them either. Hopefully whatever is pasted below is readable.

It seems like you haven’t really read through what I’ve written previously.
You don’t understand the security model and you’re unaware of the basic
functionality of the OS including backup support. You disregard and dismiss
that there’s an officially supported userdebug variant of the OS offering
root access in a sensible way. You make it seem as if it’s something which
has to be invented when it already exists and is officially supported. I
don’t understand why you have a problem with GrapheneOS offering both user
and userdebug builds. Most people in the GrapheneOS community want user
builds and it’s what fits the goals of the project. You say you believe in
software freedom but you don’t support people’s freedom to deliberately
choose to use an OS without root support. This improves their security at
the expense of not having a feature that a tiny minority of people would
ever use. For those people who want to use it, userdebug builds are
available as an option. It’s regularly tested and is officially supported
by us. Any issues specific to userdebug builds will be treated as a high
priority since it’s important to developers, although there have never been
any of those issues discovered. It isn’t something that’s neglected or not
properly supported. We fully support both user and userdebug builds. We do
not officially support eng builds, but that’s not relevant to the topic of
root access since userdebug builds provide it. The official tagged releases
have full support for userdebug builds and in fact it’s tested for every
release. It’s not something second class.

We would happily publish official userdebug builds using a different set of
signing keys. It would require doubling the space available on the update
server and more importantly doubling the time needed to make official
releases. The project’s resources are already stretched very thin both in
terms of money and development time. If you want us to make official
userdebug builds available, you’re welcome to donate the money to cover
upgrading the storage on the update server and a powerful local workstation
/ server to build the releases. I can cover the electricity costs out of
pocket and I’m even willing to spend the time building/maintaining the
setup and making the releases. I’ve talked about this multiple times and
there has never been interest from the community in making official
userdebug builds available. Providing the resources to do this is the
responsibility of people who want it. If you want it, it’s on you to make
it possible to do it. It’s your choice to spend your time attacking the
project by spreading misinformation and spin about it instead of helping us
get the resources to provide something that we’ve wanted to provide for
years. It would be helpful to offer userdebug builds so that people could
help us more with debugging issues without building the OS. People who want
root access for reasons aside from debugging could also use them. I don’t
see another use case for it, but if people want to use it they’re welcome
to do that. Our tagged releases already have fully tested official support
for userdebug builds. I test it for every single official release. The only
thing we don’t do is publish them particularly since that would mean
building for each device. GrapheneOS hardening results in more specialized
builds than AOSP so we aren’t able to reuse builds across similar devices
or use generic kernels / system images like AOSP. The only option is
building and publishing another release per supported device, doubling the
overall release engineering work. I think you’re misinterpreting our lack
of resources to do this as a decision not to provide these releases. That’s
not the case. Provide the funding and I’ll start making these releases as
soon as the parts for a new workstation arrive. I’ve already set things up
so that it can be done.

On Sat, 25 Apr 2020 at 08:35, Patrick via Whonix Forum <discourse@whonix.org> wrote:

April 25

thestinger:

It should be possible to implement this in a way so this won’t be degrading the security of users who would not use that option.

It’s fundamentally not possible to implement it in a way that doesn’t degrade the security of regular users.

It should be possible to implement this in a way so this won’t be degrading the security of users who would not use that option.

I previously didn’t reply since I am not that deep into Android. And it would have taken a long time to form an opinion reviewing your technical points.

Your technical arguments against even opt-in root decreasing security may be true. Likely, they are true. I will assume in my reply, that they are in fact true.

I think our disagreement is prioritization. You seem to prioritize
security over user freedom.

AOSP and GrapheneOS already support root access with the minimal possible
impact to security via userdebug builds with ro.adb.secure=1. A userdebug
build exists primarily to provide root access. It makes root available via
ADB, which requires physical access. If ro.adb.secure is enabled in the
build configuration, it makes it more like a user build by preserving the
physical security model for ADB by requiring that it be enabled within the
owner account along with requiring that the owner account approves the key
for a host to be able to use ADB access. These keys automatically expire on
a regular basis, so there is no such thing as truly persistent long-term
ADB access. I think the default expiry is something like 30 days but I
don’t feel like checking right now. We do not priority security over user
freedom. We have the best of both worlds via these build variants already.
A user build provides maximum security, a userdebug build with
ro.adb.secure=1 is insecure with control over the owner account OR
persistent state + physical access, a regular userdebug build is insecure
with physical access and an eng build enables further debug features for
development which are not desirable in production. A userdebug build with
ro.adb.secure=1 is pretty much a production build other than having root
access and other debugging features available via ADB. The other debugging
features are not particularly relevant to security since root access could
be used to do the same things.
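To illustrate the authorization model described above, here is a minimal sketch of ADB key approval with automatic expiry. The fingerprint values and the 30-day figure are illustrative assumptions taken from the post itself, not Android’s exact implementation:

```python
# Minimal sketch of the described ADB authorization model: the owner
# approves a host key on-device, and approvals expire automatically.
import time

EXPIRY_SECONDS = 30 * 24 * 3600  # assumed 30-day default, per the post
approved = {}  # host key fingerprint -> time of owner approval

def owner_approves(fingerprint: str) -> None:
    # In the real model this requires both physical access (plugging in
    # over USB) and confirming a prompt inside the owner account.
    approved[fingerprint] = time.time()

def adb_allowed(fingerprint: str) -> bool:
    ts = approved.get(fingerprint)
    return ts is not None and time.time() - ts < EXPIRY_SECONDS

owner_approves("ab:cd:ef")
print(adb_allowed("ab:cd:ef"))  # True, until the approval expires
print(adb_allowed("00:11:22"))  # False: never approved by the owner
```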

Root access via control of the owner account OR physical access + an
exploit to enable ADB + whitelist the key causes substantial damage to the
security model of the OS and harms users by making them more vulnerable to
real world threats. It makes them far more vulnerable to compromises by
people they trust or through coercion. Some examples are an abusive
romantic partner or friend, law enforcement, violent criminals, etc. who
are now able to get root access via temporary control of the owner account.
That temporary control now allows them to obtain lots of data they
otherwise could not obtain, including bypassing app lock features which
either do not use encryption (such as Signal’s app lock) or which are not
currently at rest (i.e. decryption has happened and the app currently has
access to the data). They can give the device back to the user and keep
root access until the device is rebooted. Even if the user is aware of the
benefit of rebooting, they could put software on the device to trick people
into thinking that the device has rebooted by mimicking the boot sequence.
Consider an abusive romantic partner using 1 minute of access to the owner
account to get root access until reboot, which they can now use to spy on
their partner in a way that they cannot detect even if they are suspicious
about it. The owner can look through all the installed apps, app
permissions and check for device management or accessibility services which
will give them a false sense of security but the device has been deeply
compromised and they are being watched. The only way to deal with this is
forcing an actual reboot by making sure to hold the power button even past
the point that the device appears to turn off since that could be the
attacker faking it. Even after doing that, there can be all kinds of nasty
things left persistent in persistent state compromising the security of
future usage of the device unless the user triggers a reset of persistent
state in the OS either via Settings or recovery. You can choose to ignore
these kinds of threat models, but they are not ignored by GrapheneOS. These
kinds of attacks are a much more realistic threat to the vast majority of
people than being remotely compromised via zero day exploits. It’s important
that we not only improve the security for these less interesting forms of
compromises but also work to improve it by providing more visibility and
review of installed apps and granted permissions / special permissions /
access. Root access ruins the security model on a deep level.

Persistent app-accessible root access is a completely different beast
completely ruining verified boot and adding a massive amount of attack
surface and I’m not going to go into that. It’s what people usually mean
when they talk about the OS being “rooted”. It isn’t what userdebug builds
provide and unlike userdebug is not something that we will ever officially
support or make available. A variant of GrapheneOS with that kind of root
access does not and will not exist. A variant with root access does exist:
userdebug builds of GrapheneOS have stable releases and are fully
supported. It isn’t what most of our users want and only a few people are
actually using userdebug builds on their devices, but you cannot pretend
that it doesn’t exist. I have multiple devices running userdebug builds of
the most recent stable tag or of the development branch. My personal device
for normal usage runs a user build and does not / will not have ADB access
or other development options enabled either (ADB access is available in
user builds, just not root access or bypassing whether apps have the debug
flag set).

The build type wanted by the vast majority of GrapheneOS users on
their devices (including myself) is a user build. The vast majority of
users (99.9%+) do not have any use case for a userdebug build and would
never take advantage of it. In reality, it has very little use case aside
from OS development. It isn’t needed for app development and would rarely
ever be useful for that. People using userdebug builds would only serve to
harm them, with very few exceptions. They are available as an option to
people working on OS development or who want root access for some other
reason including their religious beliefs about software and licensing, but
those users are not typically interested in GrapheneOS in the first place
and have largely worked to cause harm to GrapheneOS and the community by
spreading misinformation and attacking it. I think it should be obvious why
our official builds are the build type wanted by the vast majority of the
GrapheneOS community. They would be upset if we only provided user builds.
If we did provide userdebug builds, they would need to be signed with a
different set of signing keys and those new verified boot keys would need
to be added to Auditor and AttestationServer with ‘GrapheneOS insecure
debug build’ shown as the OS instead of ‘GrapheneOS’. Root access could be
used to bypass the attestation security model and the assurance provided by
Auditor / AttestationServer would be lower without making sure there is no
attached USB device and then forcing a hard reboot by holding the power
button until the device forcibly shuts down. It’s important that people
hold it long enough and do not get tricked by an attacker pretending to
reboot, which is a serious problem with trying to use rebooting as a
workaround for it. For example if you need to hold it for 10 seconds, the
attacker can fake a reboot at 8 seconds and most people aren’t going to
notice. Requiring a reboot for Auditor to work properly makes it
substantially less usable and there isn’t a comparable workaround for
AttestationServer since the whole point is opting into automatic scheduled
verification instead of manual verification.

The 4 original essential software freedoms as defined by the Free
Software movement are granted. However, since the inception of the 4
original essential software freedoms, other issues came up sometimes called
tivoization, malicious feature, antifeature, tyrant software, treacherous
computing or DRM (digital restrictions management).

I don’t have any iota of interest in a religion/ideology built around
software and software licensing. My experience with Free Software
ideologues is that they’re dishonest, manipulative and have gone out of the
way to cause harm to myself and GrapheneOS through spreading
misinformation. You folks go out of the way to cause harm and do not think
about things in a rational or reasonable way. The FSF is fine with
proprietary firmware/hardware as long as it cannot be updated. If you
prevent updating proprietary firmware, it doesn’t violate their rules. The
entire OS could be treated as firmware that cannot be updated too,
providing a completely locked down appliance with no security updates and
which conforms to the FSF rules. That’s exactly the kind of path being
taken by certain people to create a FSF approved mobile device. Sorry, but
none of this makes any sense to me and is just a bunch of silly semantic
games rather than anything to do with privacy, security or even user
freedom. It’s not relevant to the real world and I’m not interested in a
discussion based on irrational religious beliefs.

We officially support userdebug builds of GrapheneOS and you should not be
claiming that we don’t. We do not support persistent or app-accessible
root and never will do that. The only form of root access that will ever be
supported is temporary root access by the owner of the device until reboot
on userdebug builds. We already provide this, and don’t pretend that it
doesn’t have severe consequences. It makes no sense for people to pay the
cost of those consequences when they aren’t ever going to use it and have
no use case for it, which is why it’s limited to userdebug builds. We
officially support it and it’s available + tested for each stable release.
What you are claiming doesn’t add up at all. If you want us to provide
official userdebug builds, provide the necessary funding for a workstation
and server storage. You cannot reasonably complain that we do not offer
them when you folks do not make it possible for us to offer them despite us
wanting to do it. It is not my fault that the few people who want this are
unwilling to support it. The vast majority of people do not want it and we
cannot reasonably use their donations to provide it. It would not be what
they donated to support. The tiny niche of people who want official
userdebug builds would need to put together the resources for us to build
and host the releases or stick to doing it themselves which a couple people
are already doing. They are free to publish their builds for others. I’ve
put in a huge amount of work to have an extremely well documented build
process along with it being incredibly easy to deploy over-the-air updates
including delta updates to minimize bandwidth usage. I’ve put a lot of work
into writing and publishing scripts to make everything easier including
managing signing keys encrypted with scrypt + AES, fully signing releases
with those, etc. along with making sure builds are reproducible and fixing
issues with that.
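As an illustration of the “signing keys encrypted with scrypt + AES” approach mentioned here, a minimal sketch using Python’s hashlib.scrypt and the third-party cryptography package; the parameters and on-disk layout are illustrative assumptions, not GrapheneOS’s actual scripts:

```python
# Minimal sketch: derive a key-encryption key from a passphrase with
# scrypt, then wrap the signing key with AES-GCM.
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def _kek(passphrase: bytes, salt: bytes) -> bytes:
    return hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)

def encrypt_key(signing_key: bytes, passphrase: bytes) -> bytes:
    salt, nonce = os.urandom(16), os.urandom(12)
    ciphertext = AESGCM(_kek(passphrase, salt)).encrypt(nonce, signing_key, None)
    return salt + nonce + ciphertext  # stored together in one blob

def decrypt_key(blob: bytes, passphrase: bytes) -> bytes:
    salt, nonce, ciphertext = blob[:16], blob[16:28], blob[28:]
    return AESGCM(_kek(passphrase, salt)).decrypt(nonce, ciphertext, None)

blob = encrypt_key(b"fake signing key", b"passphrase")
assert decrypt_key(blob, b"passphrase") == b"fake signing key"
```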

Non-root enforcement can be considered a lesser form of tyrant software or antifeature since it doesn’t restrict flashing an alternative, but cripples the system in major ways. You might argue you can use a userdebug version or fork the software and compile a version that doesn’t do this, but then the network effect and scale of a project become so great that rolling one’s own fork has negligible effect and upstream choices are the de-facto state of things.

You’re trying to misrepresent a userdebug build as comparable to a fork or modification of the OS. It’s an officially supported build variant of GrapheneOS. It uses exactly the same sources without any modification and does not have any negative network effect. The scale of the ‘project’ is non-existent since it already exists and is officially supported. You just
don’t like that it isn’t what most people want and don’t want to admit that
it has serious downsides. We’re already having to consider dropping devices
and scaling back aspects of the project even without doubling the amounts
of builds. If the few people who want this won’t provide funding, how do
you expect it to happen? Either people need to provide funding for us to do
it or they need to do it themselves. It’s already officially supported and
the only thing that needs to be done is building a debug variant of the
entire OS for each supported device. I’d need a whole new local workstation/server to support that without disrupting development, and even
then I’d still need to invest my time in building and maintaining another
workstation and dealing with making these builds on it. I am willing to
invest my time in that but I’m not spending a bunch of money building a
powerful server to build a dozen extra builds of the OS in a reasonable
amount of time.

Non-root enforcement is also similar to DRM. While DRM is about applications which don’t allow users to easily and freely copy data on their own devices, non-root enforcement here leads to users not being able to backup/copy/migrate their application data from one phone to another. Either the application has a backup / data export feature or the data is “trapped” inside the phone. Even with an app-dependent app data backup feature, it’s better if users who choose so can get access to the raw data stored by the app, for convenience (scripting backups instead of using tons of different data export features, since a data export feature may be incomplete, and to allow analysis of app data by the user).

Maybe you should educate yourself about Android including backup services,
ADB and userdebug builds before making outlandish claims about it. AOSP and
GrapheneOS have an official OS backup mechanism available via both ADB and
backup services integrated into the OS. GrapheneOS includes Seedvault, but
the backup functionality is available via ADB either way.
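For reference, the ADB path is `adb backup -all -f backup.ab`, and a minimal sketch of unpacking an unencrypted, compressed backup into a plain tar could look like this, assuming the commonly described .ab layout (four text lines followed by a zlib-compressed tar stream):

```python
# Minimal sketch of turning an unencrypted "adb backup" archive into a
# plain tar. Header: "ANDROID BACKUP", version, compression flag,
# encryption algorithm; then the (possibly compressed) tar payload.
import zlib

def ab_to_tar(ab_path: str, tar_path: str) -> None:
    with open(ab_path, "rb") as f:
        assert f.readline().strip() == b"ANDROID BACKUP", "not an .ab file"
        f.readline()                                  # format version
        compressed = f.readline().strip() == b"1"     # compression flag
        assert f.readline().strip() == b"none", "encrypted backup"
        payload = f.read()
    with open(tar_path, "wb") as out:
        out.write(zlib.decompress(payload) if compressed else payload)

# Usage: first `adb backup -all -f backup.ab`, then:
# ab_to_tar("backup.ab", "backup.tar")
```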

Non-root enforcement also aids DRM-enabled applications. If GrapheneOS gets more popular, perhaps picked up by phone manufacturers, resellers or mobile carriers, it will be easy for application developers to utilize DRM to prevent the user from accessing application data, making the phone work in the interest of the application developer rather than the phone user.

A fork of GrapheneOS signed with different keys is not GrapheneOS and is
not something we can control short of not using the Free Software licenses
that you hold so dear. The only solution to people forking the project and
using it in ways that we don’t want is disallowing it in the licensing
which you would be against. Do you want us to forbid commercial usage of
the OS again?

It sounds like you have a problem with the consequences of truly free
software which includes being able to turn it into proprietary software or
using it to create locked down systems. That’s unlike non-free licenses
like GPLv3 which forbid certain kinds of products. GPLv2-only licensing
forbids mixing it with GPLv3 code. The license is incompatible with ITSELF.
Linux kernel code cannot be included in GPLv3 projects. GPLv3 code cannot
be included in the Linux kernel. That is a severe restriction of freedom with
serious real world consequences. The restrictions on freedom by the GPLv2
and GPLv3 also prevent including the code in projects which cannot or will
not make those concessions, so code that is locked up as GPLv2/GPLv3 cannot
be sent back upstream to projects like OpenBSD. Forking permissively
licensed code and locking it down as GPLv3 is something that I consider bad
behavior and it’s certainly in opposition to wanting to be a good citizen
and contributing whatever we can back upstream. OpenBSD considers GPL to be
a non-free license and will not include GPLv2 code whenever they can avoid
it. They absolutely will not ever include GPLv3 code as they consider it
completely non-free. These definitions of freedom are very subjective and
most of the world does not agree with your extreme ideological views.
Freedom includes the freedom to make a locked down, highly secure device
significantly more resistant to compromise. It includes the freedom to
support and use features like verified boot, which is also counter to that
ideology.

Software that moved to GPLv3 was entirely replaced nearly everywhere and
most GPLv2 software other than Linux has better alternatives. Linux itself
has horrible safety/robustness/security with no realistic way of fixing it
and a trajectory headed for making it far worse over time, so incrementally
or outright replacing it is important. That will be the end of the normal
GPL licenses in most places. What’s the justification for the GPL
restricting freedom when it hasn’t actually worked and has deterred people
from using that software and driven them towards freer licenses like MIT?
It sounds like you want an even more restricted license if you expect DRM
to be forbidden. The only way of forbidding those things is restricting the
freedom to do it like GPLv3 but actually beyond GPLv3 with clauses that
would be incompatible with it since they’d have to go further… maybe that
is exactly what the GPLv4 will do and even fewer people will use it, and it
will drive people away from using GPLv2/GPLv3 even further just like GPLv3
did to GPLv2.

Phone manufacturers, resellers or mobile carriers couldn’t be blamed for refusing root access. That would already be the GrapheneOS default. They could conveniently blame it on “security”. Some power users might be able to flash a root-enabled version, but the effect would be negligible. In practice, this will result in a lot of users having their freedom restricted.

A fork of GrapheneOS signed with different keys is not GrapheneOS so you
are not talking about GrapheneOS anymore but rather something else not
relevant to the discussion. I love the manipulative scare quotes around
security even though the security issues it causes are very real and very
impactful. It’s far more relevant than zero day exploitation for most users.

[1] But even the security vs user freedom view is a false dichotomy. Bootloaders allow for the flexibility to boot into a root-enabled mode. There could be a key combination and/or boot menu which allows users to boot into root-enabled mode. There could be timeouts [the user has to wait 5 seconds before proceeding past an anti-root warning] / strong warnings. Booting into root-enabled mode could make subsequent boots into non-root-enabled mode show a warning that the device may be compromised due to the previous boot into root-enabled mode.

It isn’t a false dichotomy. I suggest reading through what I’ve written,
doing your research and actually putting in some effort to understand the
security model. There are already userdebug builds, i.e. a root enabled
mode requiring enabling an option within the owner account AND having
physical access to the device at the time that it’s used. It is ‘temporary’
in that the option to use it remains available but the access itself goes
away on reboot - but not whatever changes were made to persistent state
using that access.

The question is rather: how much time/effort/money would be required to grant user freedom (root) in a secure way (such as alternative boot options)? If you were offered 1 million USD and had time, could you implement root access in a secure way? This is unrealistic and just an example to encourage imagination of solutions. Is this really a question of unsolvable security issues vs user freedom? Or is it rather a prioritization of the effort/time/money required to implement user freedom (root) vs other goals (just don’t prioritize user freedom, make something work for novice users, monetize more quickly (understood, we all need to eat))?

There is already official support for userdebug builds in GrapheneOS.
People are free to use those but it isn’t what the vast majority of the
GrapheneOS community wants to use. It has hardly any use case aside from OS
development. Give me 20k USD in funding and I will publish official
userdebug builds signed with different keys alongside the regular releases
for 2 years after I get the parts to build a new workstation/server, etc. I
cannot do it without build hardware and server space for it. It could also
just go on a separate update server.

Calm down, no need to get worked up.

It’s your choice to spend your time attacking the project by spreading misinformation and spin about it

You’re trying to misrepresent

etc.

Newsflash: different people, different opinions.

Are you going to apologize? No.

The argument got heated unfortunately and the mobile section is outdated.

Although it is hosted at Kicksecure now, the thread is here, so maybe it is better to continue here.

I know this is an old thread, but that page needs some reviews.

I could be a mediator (I use GrapheneOS, but am not a dev)… anyway, after two years nobody is angry anymore and possibly some stances have changed.

I’m not an Android or mobile OS developer, just sharing what I know.

A lot of the points below are just simplifications of the post above, to avoid repeating the same text and to serve more as a summary.

I will compare the Whonix design to GOS to clarify some points for Patrick. Although they are distinct systems with distinct security features, I will use this method so it can be better understood; not that the options presented are equivalent.

So let’s see:

Sections: separate into recommended and not recommended OSes.

Madaidan agrees, for clarity.
Patrick disagrees because it is just a page for listing mobile projects.
I agree, because I don’t think just mentioning OSes and some sources helps in making a good decision; it needs more detailed information.
If GrapheneOS gives easy-to-use root, then attackers will have the same.

I would leave GrapheneOS, Android and iPhone on the recommended list for security. One can only be private if they are secure.

The rest of the projects I’d put in the “avoid” section, as they do not implement modern security features, and sometimes even disable them.

Root

  • Argues that allowing users to gain root (superuser) access would inevitably break the security model and that there is no conceivable solution that can uphold both user security and freedom.
  • Potential conflict of interest: if GrapheneOS wouldn’t disable easy-to-use technical ways that most laymen users can use to gain root and/or to keep control over the software running on their devices, then GrapheneOS’s chances of ever getting a highly profitable hardware producer partnership would be severely diminished.

Users don’t need root, and it is very much recommended not to use it. Legacy applications required root to do things that lesser-privileged permissions can now do.
If root is allowed on the distributed build, then devices would be prone to a great variety of attacks.

Also, regarding the userdebug builds mentioned by Daniel above: they take development time and anyone can build them. It would be the same thing as asking Whonix to ship an I2P workstation and gateway.

Just an example, let’s do “s/root/netvm/g”:
AOSP users should use applications without having root.
Whonix users should use applications in the WS, without having control of the netvm.

AOSP users should use the user space for applications, and without root, they cannot do nasty things, or even nastier ones.
Whonix users should use the workstation space for applications, and without access to the gateway they are safe.

AOSP user needs to build to get root.
Whonix user needs access to the gateway to change the gateway.

AOSP and Whonix are not restricting freedom, they are enforcing security: AOSP by not shipping root, and Whonix by isolating the machines.

I understand that a Whonix user can edit the GW easily, while GOS root is not easy (it needs a build). But that is a security stance.

Verified boot

Full verified boot which would be great if the key would be held by users and encouraged through a first start process or similar instead of held by the developer.

The user would need to build and sign the releases.
But here is the thing: this is not a GOS issue, nor an AOSP issue; this is a feature that can be used by anyone who builds the program. GOS is using verified boot, but it needs a signing key and a repo distributor, and those are the project contributors.

On a Linux desktop such as Debian, we only need to download a signing key and change a sources list entry; in the same way that this is easy, it is much worse for security.

So it is not restricting the user; they have build instructions: Build | GrapheneOS.
Also, for AOSP, having an entry in the settings to add a new repo would be insufficient; someone has to sign the firmware for verified boot to work.
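To illustrate that last point, a minimal sketch of why a repo entry alone cannot satisfy verified boot: the bootloader only accepts images signed by a key it already trusts. This uses Ed25519 from the Python cryptography package as a stand-in for AVB’s actual scheme:

```python
# Minimal sketch: the bootloader trusts one baked-in public key, so images
# from any other source fail verification regardless of which "repo" they
# came from.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vendor_key = Ed25519PrivateKey.generate()
TRUSTED_PUBKEY = vendor_key.public_key()  # embedded in the bootloader

def bootloader_accepts(image: bytes, signature: bytes) -> bool:
    try:
        TRUSTED_PUBKEY.verify(signature, image)
        return True
    except InvalidSignature:
        return False

image = b"os image"
print(bootloader_accepts(image, vendor_key.sign(image)))  # True
print(bootloader_accepts(image, bytes(64)))               # False: untrusted
```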