Overview of Mobile Projects - that focus on security, privacy, anonymity, source-available code, and/or Freedom Software.

Android implemented bootloader unlocking, and someone else ripped that out.

https://android.googlesource.com/platform/external/avb/+/master/README.md#Locked-and-Unlocked-mode

It’s not that simple. One of verified boot’s main purposes is protection from physical attacks. If you allow someone to just boot into a different mode with root without unlocking the bootloader, the physical security is nil.

Simple: no
Doable: yes

No, it’s not doable without worsening security:

If you allow someone to just boot into a different mode with root without unlocking the bootloader, the physical security is nil.

You can still have the same physical security.

Looks doable to me in principle, looking at an arbitrary bootloader unlock guide. The steps, simplified:

  • do something on your computer - skippable [1]
  • boot in normal mode and enable something (USB debugging and OEM unlock) in Android settings (usability is still good if this is kept)
  • connect USB - skippable [1]
  • do something on your computer - skippable [1]
  • automatic factory reset (I don't see a need for this except DRM, which is a bad reason.)

I don’t see why this procedure couldn’t be simplified in principle.

  1. boot in normal mode and enable something
  2. reboot
  3. bootloader now allows booting into root mode

It's a usability enhancement and unrelated to security.

Also, for physical security, root mode could require (a special) PIN code or whatever. (What's good enough physical security for normal boot is also good enough for root-enabled boot.)


[1] This can certainly be skipped; it doesn't provide security (rubber ducky).

It’s necessary to prevent the attacker from getting your data. It’s a security feature and has nothing to do with DRM. Would you rather an attacker unlock your bootloader, modify the system and make a rootkit to steal your data once it’s decrypted?

It is related to security. A physical attacker can easily gain root and access everything.

With the way it is now, the attacker has to compromise the device remotely first to enable OEM unlocking and then have physical access to the device to unlock the bootloader which then wipes all user data so they can’t access anything.

Local or remote attacker?

If the user chooses to enable root mode and chooses root boot in the bootloader, I don't see how any attacker gains any advantage.

A local attacker can try to boot into either non-root or root mode. Either way, there are encryption / access controls. I don't see why encryption / access controls would be weaker in one case.

I am not convinced this dilemma exists.

I don't see why Android couldn't do something similar to what is planned in multiple boot modes for better security: persistent user | live user | persistent secureadmin | persistent superadmin | persistent recovery mode - #32 by Patrick. (That concept is generic. It works for both hosts and VMs.) It seems like saying "when booting into superadmin, delete /home/user first". When using full disk encryption (FDE) it doesn't matter which boot mode is used. If a local attacker doesn't know the password, it's considered secure.
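
To make the boot-modes idea concrete, here is a minimal sketch in Python (mode names and pre-boot actions are illustrative assumptions taken from the linked concept, not an existing implementation):

```python
# Hypothetical sketch of the multiple-boot-modes concept (mode names and
# pre-boot actions are illustrative assumptions, not an existing design).
BOOT_MODES = {
    "persistent-user": {"root": False, "pre_boot": []},
    "live-user":       {"root": False, "pre_boot": ["mount user home as tmpfs overlay"]},
    "superadmin":      {"root": True,  "pre_boot": ["wipe /home/user"]},
}

def prepare_boot(mode: str) -> bool:
    """Apply the mode's pre-boot policy; return whether root is allowed."""
    policy = BOOT_MODES[mode]
    for action in policy["pre_boot"]:
        print(f"pre-boot action: {action}")
    return policy["root"]

# Booting into superadmin wipes the regular user's data first, so with FDE
# and an unknown password, the chosen mode gives a local attacker nothing.
assert prepare_boot("superadmin") is True
```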

I don't see how Debian-based vs Android-based changes anything conceptually.

Current model:

  1. “the attacker has to compromise the device remotely first to enable OEM unlocking”
  2. “then have physical access to the device to unlock the bootloader which then wipes all user data so they can’t access anything.”

Comments:

  1. If an attacker can remotely enable OEM unlocking, don't they already have root access? Otherwise, how could a remote attacker enable OEM unlocking?
  2. This is irrelevant due to 1).

Only a local attacker can unlock the bootloader.

Encryption doesn’t cover everything such as the bootloader. If you allow the attacker to unlock the bootloader without wiping user data, the attacker can just replace the bootloader with a malicious one that uploads all of your data the next time you decrypt it.

Encryption + verified boot is necessary for any meaningful physical security.

We don’t have local attackers in our threat model. Android does.

They don’t need the password to modify unencrypted parts.

The purpose of verified boot is to prevent the attacker from persisting as root. It doesn’t matter if the attacker has unlimited capabilities, verified boot will still revert their changes.

I see. For phones, that's because the "BIOS", the root of trust, is read-only in hardware; it verifies the first stage, the bootloader, which then verifies the kernel in a chain, and so forth.

Verified boot can in principle also work on computers using SecureBoot and/or heads, which uses a TPM and measured boot. It doesn't extend to user space, but that's just a lack of implementation rather than a conceptual impossibility.

GitHub - linuxboot/heads: A minimal Linux that runs as a coreboot or LinuxBoot ROM payload to provide a secure, flexible boot environment for laptops and servers.

Example implementation (see “boot integrity”):
https://insurgo.ca

I am writing this to show that I don't see a conceptual difference between computer hardware and phone hardware that would allow a secure root-enabled boot mode without user data wiping in one case (computers) but make it impossible in the other (Android).
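
To illustrate the shared chain-of-trust concept, here is a minimal sketch (placeholder stage contents and a read-only pinned first hash; not any real bootloader code):

```python
# Minimal sketch of a verified boot hash chain: a read-only root of trust
# pins the hash of the first stage, and each stage pins the hash of the
# next. Stage contents are placeholder byte strings, not real images.
import hashlib

def digest(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

stages = [
    ("bootloader", b"bootloader image"),
    ("kernel",     b"kernel image"),
    ("initrd",     b"initrd image"),
]

# Computed at signing time; the first entry lives in read-only hardware.
expected = [digest(image) for _, image in stages]

def boot() -> None:
    for (name, image), want in zip(stages, expected):
        if digest(image) != want:
            raise SystemExit(f"{name}: hash mismatch, refusing to boot")
        print(f"{name}: verified, handing over control")

boot()
```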

Indeed but there’s also no strong reason why bootloader contents would have to remain secret.

That's a rather complex attack. Similar to installing a hardware keylogger, microphone (which can guess keystrokes) and/or miniature camera into/near a computer during the absence of the victim.

Also, "just replace the bootloader" sounds simple, but there are many phones where people would like to unlock the bootloader and that freedom is refused by the device vendor. For many phones, mortals cannot replace the bootloader.

That feature might have a point. But it should have an opt-in or opt-out. (Probably opt-out, since it's the default already.) If it's possible to securely implement booting into normal boot mode and enabling OEM unlocking in Android settings, then it must conceptually also be possible to add an option to disable "wipe user data when the bootloader gets unlocked".

Let's consider local attackers. If it's figured out that this is infeasible, it can be ignored, but it would be good to at least consider it and describe why that is. heads / Insurgo make it seem realistic to cover some local threat models.

For that purpose, I've just now refined (better written down) the concept of multiple boot modes.

Agreed, but I don't see how that's related.
Modified unencrypted parts would hopefully be caught by the verified boot implementation.

That is independent of, let's say, whether app X vs app Y is installed. I.e., it doesn't matter if the fully booted Android allows root or not.

The only thing required in step 1) is enabling OEM unlocking. A remote root compromise (which, suppose, would be undone by verified boot) to enable OEM unlocking would be the only thing the attacker would care about. The attack would succeed so far.

My premise is:
a remote exploit that gained root on Android → can enable OEM unlocking in Android settings

If that is the only thing the attacker does… and accepting the premise that verified boot will undo the attacker's root… then still, after reboot, OEM unlocking remains activated. Now back to what you said…

And again my summary / rewrite of the current model into steps.

  1. in this threat model suppose a root exploit gained root temporarily and enabled OEM unlocking
  2. then have physical access

As I wrote before:

I don’t see why this procedure couldn’t be simplified in principle. Slightly refined:

  • boot in normal mode and enable something
  • reboot
  • bootloader now allows booting into root mode (requires a physical button press)

What you said - "With the way it is now, the attacker has to compromise the device remotely first to enable OEM unlocking and then have physical access to the device to unlock the bootloader which then wipes all user data so they can't access anything." - is still the same under my proposed model. It just has better usability, because fewer steps are required.
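
A minimal sketch of that proposed flow, assuming a persistent opt-in flag plus a physical button press and an optional PIN (all names are hypothetical; nothing like this exists in Android today):

```python
# Hypothetical sketch of the proposed simplified root opt-in: the bootloader
# offers root mode only if the owner opted in from a normal boot AND a
# physical button is pressed (optionally plus a PIN), so a purely remote
# attacker can never select it. All names here are illustrative.
persistent_state = {"root_boot_opt_in": False, "root_boot_pin": None}

def settings_opt_in(pin: str) -> None:
    # Reachable only from a fully booted normal session (like OEM unlocking).
    persistent_state["root_boot_opt_in"] = True
    persistent_state["root_boot_pin"] = pin

def bootloader_select(button_pressed: bool, entered_pin: "str | None") -> str:
    if (persistent_state["root_boot_opt_in"]
            and button_pressed
            and entered_pin == persistent_state["root_boot_pin"]):
        return "root-mode"
    return "normal-mode"

settings_opt_in("7391")
assert bootloader_select(button_pressed=False, entered_pin="7391") == "normal-mode"
assert bootloader_select(button_pressed=True, entered_pin="7391") == "root-mode"
```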

Not really. Even if you think modifying the bootloader is too complex, you can modify something simpler like the base system. Encryption on Android only covers the userdata partition because there's no point in encrypting the others when they're covered by verified boot and hold no sensitive information.

How do you make it opt-out while not allowing the attacker to opt-out?

Local attackers are infeasible in the VMs which is what I meant since we can’t secure the host from a VM. If we’re talking about Whonix Host/Kicksecure on bare metal, then we can have physical security with encryption and verified boot.

We’re talking about allowing the user to unlock the bootloader without it wiping your data. If you unlock the bootloader, verified boot is disabled.

It’s not the same because it doesn’t wipe user data. If the attacker has compromised the device remotely to enable something and then has physical access, they can boot into root mode and gain root.

madaidan via Whonix Forum:

Not really. Even if you think modifying the bootloader is too complex, you can modify something simpler like the base system.

Same difficulty. The internal storage drive of an Android device cannot be easily removed. Without booting the device and going through permitted APIs, no file can be modified. In comparison, computer hard drives can be unplugged, plugged into another computer, and modified there. Maybe it can be de-soldered?

This is btw also a data recovery and security disadvantage of phones: the internal storage drive cannot be removed by mortals. Hence it cannot be mounted in a device (a computer or so) considered secure. Therefore it is hard to do data recovery and check for malware.

I've also read somewhere that the internal storage drive of an Android device is "married" to the hardware for disk encryption. I.e., when desoldered and mounted elsewhere it cannot be decrypted, because the disk encryption key is practically impossible for mortals to extract (similar to smartcard chips). At most the system partition could be read, but data in the user partition is inaccessible.

Encryption on android only covers the userdata partition because there’s no point in encrypting the others when they’re covered by verified boot and hold no sensitive information.

Mostly yes.

(I guess an argument could be made for encrypting the system partition and /boot (same as for Linux desktop), but maybe not a strong one; it is overruled by other considerations and unrelated.)

How do you make it opt-out while not allowing the attacker to opt-out?

Same way you make an opt-in to USB debugging while not allowing the attacker to opt in to USB debugging.

Same way you make an opt-in to OEM unlocking while not allowing the attacker to opt in to OEM unlocking.

OEM unlocking / USB debugging are options where one has to boot into normal mode first and then go to settings. More settings, such as "do not wipe user data when unlocking bootloader", could be added there.
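
A minimal sketch of that claimed symmetry, assuming a hypothetical "keep user data on bootloader unlock" toggle guarded the same way as OEM unlocking (no such Android setting exists today):

```python
# Sketch of the claimed symmetry: a "keep user data on bootloader unlock"
# toggle could be protected exactly like OEM unlocking / USB debugging,
# i.e. changeable only by the authenticated owner in a normal boot.
# Purely illustrative; no such Android setting exists today.
class ProtectedSettings:
    def __init__(self) -> None:
        self.oem_unlocking = False
        self.keep_data_on_unlock = False

    def toggle(self, name: str, value: bool, owner_authenticated: bool) -> None:
        if not owner_authenticated:
            raise PermissionError("only the owner in a normal boot may change this")
        setattr(self, name, value)

s = ProtectedSettings()
s.toggle("keep_data_on_unlock", True, owner_authenticated=True)
```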

Local attackers are infeasible in the VMs which is what I meant since we can’t secure the host from a VM.

Alright, sure.

If we’re talking about Whonix Host/Kicksecure on bare metal, then we can have physical security with encryption and verified boot.

Great.

We’re talking about allowing the user to unlock the bootloader without it wiping your data. If you unlock the bootloader, verified boot is disabled.

There are two subjects here.

a) A simplified root opt-in.

b) A simplified root opt-in without wiping user data.

Subject a) is fully independent from subject b). But subject b) depends on subject a). And they are certainly related.

For subject a), I don't see why an opt-in root-enabled boot mode necessitates unlocking the bootloader or disabling verified boot.

Android might currently disable verified boot when unlocking the bootloader, but the current implementation legacy must not be a hindrance for future design proposals.

Consider a lot of arbitrary, proprietary phones with Google Android. For a lot of these phones there are - manufacturer-unwanted - root unlock procedures available. In essence, these run a known exploit to gain root temporarily, which allows installing something to gain root permanently. At no stage does the bootloader or verified boot enter this concept.

On the contrary, there are many phones where rooting is possible while a bootloader unlock has not been accomplished by the modding community.

This is to show that bootloader / verified boot / rooting are all somewhat independent. The only thing that is different in my proposal is the way to accomplish the goal.

Either run an (often closed-source) exploit from an unknown source somewhere on the internet to enable some setting in Android, or provide simpler steps the user can use (boot into normal mode, opt in to some settings, reboot, some key combination at boot).

It’s not the same because it doesn’t wipe user data.

If the attacker has compromised the device remotely to enable something and then has physical access, they can boot into root mode and gain root.

Suppose the attacker has (temporarily, until reboot) root-compromised the device remotely and then has physical access: why is root even a consideration once physical access is established? Most data by then is accessible anyhow, even without root.

Well, there might be one barrier: FDE and/or the lockscreen. But a root compromise would also leak FDE and/or lockscreen credentials?

But I don't see much issue anyhow. When we start with "suppose the attacker has (temporarily, until reboot) root-compromised", well, then most is lost already anyhow. After such a (temporary, until reboot) root compromise, most data can be extracted remotely by the attacker anyhow. Why one would bother with local access after a remote root exploit is unclear to me.

I sent this via email and it didn’t work so I’m copy-pasting that directly to here with whatever mangling happened due to email / MIME nonsense. Don’t feel like going through it all again and correcting it. A bunch of further posts were made here since the one that tagged me and I’m not particularly interested in reading through those or replying to them either. Hopefully whatever is pasted below is readable.

It seems like you haven’t really read through what I’ve written previously.
You don’t understand the security model and you’re unaware of the basic
functionality of the OS including backup support. You disregard and dismiss
that there’s an officially supported userdebug variant of the OS offering
root access in a sensible way. You make it seem as if it’s something which
has to be invented when it already exists and is officially supported. I
don’t understand why you have a problem with GrapheneOS offering both user
and userdebug builds. Most people in the GrapheneOS community want user
builds and it’s what fits the goals of the project. You say you believe in
software freedom but you don’t support people’s freedom to deliberately
choose to use an OS without root support. This improves their security at
the expense of not having a feature that a tiny minority of people would
ever use. For those people who want to use it, userdebug builds are
available as an option. It’s regularly tested and is officially supported
by us. Any issues specific to userdebug builds will be treated as a high
priority since it’s important to developers, although there have never been
any of those issues discovered. It isn’t something that’s neglected or not
properly supported. We fully support both user and userdebug builds. We do
not officially support eng builds, but that’s not relevant to the topic of
root access since userdebug builds provide it. The official tagged releases
have full support for userdebug builds and in fact it’s tested for every
release. It’s not something second class.

We would happily publish official userdebug builds using a different set of
signing keys. It would require doubling the space available on the update
server and more importantly doubling the time needed to make official
releases. The project’s resources are already stretched very thin both in
terms of money and development time. If you want us to make official
userdebug builds available, you’re welcome to donate the money to cover
upgrading the storage on the update server and a powerful local workstation
/ server to build the releases. I can cover the electricity costs out of
pocket and I’m even willing to spend the time building/maintaining the
setup and making the releases. I’ve talked about this multiple times and
there has never been interest from the community in making official
userdebug builds available. Providing the resources to do this is the
responsibility of people who want it. If you want it, it’s on you to make
it possible to do it. It’s your choice to spend your time attacking the
project by spreading misinformation and spin about it instead of helping us
get the resources to provide something that we’ve wanted to provide for
years. It would be helpful to offer userdebug builds so that people could
help us more with debugging issues without building the OS. People who want
root access for reasons aside from debugging could also use them. I don’t
see another use case for it, but if people want to use it they’re welcome
to do that. Our tagged releases already have fully tested official support
for userdebug builds. I test it for every single official release. The only
thing we don’t do is publish them particularly since that would mean
building for each device. GrapheneOS hardening results in more specialized
builds than AOSP so we aren’t able to reuse builds across similar devices
or use generic kernels / system images like AOSP. The only option is
building and publishing another release per supported device, doubling the
overall release engineering work. I think you’re misinterpreting our lack
of resources to do this as a decision not to provide these releases. That’s
not the case. Provide the funding and I’ll start making these releases as
soon as the parts for a new workstation arrive. I’ve already set things up
so that it can be done.

On Sat, 25 Apr 2020 at 08:35, Patrick via Whonix Forum <discourse@whonix.org> wrote:

April 25

thestinger:

It should be possible to implement this in a way so this won't be degrading the security of users who would not use that option.
would not use that option.

It's fundamentally not possible to implement it in a way that doesn't degrade the security of regular users.

It should be possible to implement this in a way so this won't be degrading the security of users who would not use that option.

I previously didn't reply since I am not that deep into Android. And it would have taken a long time to form an opinion reviewing your technical points.

Your technical arguments against even opt-in root decreasing security may be true. Likely, they are true. I will assume in my reply that they are in fact true.

I think our disagreement is prioritization. You seem to prioritize security over user freedom.

AOSP and GrapheneOS already support root access with the minimal possible impact to security via userdebug builds with ro.adb.secure=1. A userdebug build exists primarily to provide root access. It makes root available via ADB, which requires physical access. If ro.adb.secure is enabled in the build configuration, it makes it more like a user build by preserving the physical security model for ADB: it requires that ADB be enabled within the owner account, along with requiring that the owner account approves the key for a host to be able to use ADB access. These keys automatically expire on a regular basis, so there is no such thing as truly persistent long-term ADB access. I think the default expiry is something like 30 days but I don't feel like checking right now. We do not prioritize security over user freedom. We have the best of both worlds via these build variants already. A user build provides maximum security; a userdebug build with ro.adb.secure=1 is insecure with control over the owner account OR persistent state + physical access; a regular userdebug build is insecure with physical access; and an eng build enables further debug features for development which are not desirable in production. A userdebug build with ro.adb.secure=1 is pretty much a production build other than having root access and other debugging features available via ADB. The other debugging features are not particularly relevant to security since root access could be used to do the same things.
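
A rough model of the ADB authorization behaviour described above (approval by the owner plus automatic expiry); the 30-day figure mirrors the hedged guess in the text and is an assumption here, not a confirmed default:

```python
# Rough model of the described ADB authorization: a host key must be
# approved by the owner on-device and expires automatically. The 30-day
# expiry is an assumption taken from the post, not a verified constant.
import time

EXPIRY_SECONDS = 30 * 24 * 3600
approved_keys: "dict[str, float]" = {}  # host public key -> approval time

def approve_key(pubkey: str, owner_confirmed: bool) -> None:
    if not owner_confirmed:
        raise PermissionError("owner must approve the host key on-device")
    approved_keys[pubkey] = time.time()

def adb_allowed(pubkey: str) -> bool:
    approved_at = approved_keys.get(pubkey)
    return approved_at is not None and time.time() - approved_at < EXPIRY_SECONDS

approve_key("host-laptop", owner_confirmed=True)
assert adb_allowed("host-laptop")
```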

Root access via control of the owner account OR physical access + an exploit to enable ADB + whitelist the key causes substantial damage to the security model of the OS and harms users by making them more vulnerable to real world threats. It makes them far more vulnerable to compromises by people they trust or through coercion. Some examples are an abusive romantic partner or friend, law enforcement, violent criminals, etc. who are now able to get root access via temporary control of the owner account. That temporary control now allows them to obtain lots of data they otherwise could not obtain, including bypassing app lock features which either do not use encryption (such as Signal's app lock) or which are not currently at rest (i.e. decryption has happened and the app currently has access to the data). They can give the device back to the user and keep root access until the device is rebooted. Even if the user is aware of the benefit of rebooting, they could put software on the device to trick people into thinking that the device has rebooted by mimicking the boot sequence. Consider an abusive romantic partner using 1 minute of access to the owner account to get root access until reboot, which they can now use to spy on their partner in a way that they cannot detect even if they are suspicious about it. The owner can look through all the installed apps, app permissions and check for device management or accessibility services, which will give them a false sense of security, but the device has been deeply compromised and they are being watched. The only way to deal with this is forcing an actual reboot by making sure to hold the power button even past the point that the device appears to turn off, since that could be the attacker faking it. Even after doing that, there can be all kinds of nasty things left in persistent state compromising the security of future usage of the device unless the user triggers a reset of persistent state in the OS either via Settings or recovery. You can choose to ignore these kinds of threat models, but they are not ignored by GrapheneOS. These kinds of attacks are a much more realistic threat to the vast majority of people than being remotely compromised via zero day exploits. It's important that we not only improve the security against these less interesting forms of compromise but also work to improve it by providing more visibility and review of installed apps and granted permissions / special permissions / access. Root access ruins the security model on a deep level.

Persistent app-accessible root access is a completely different beast, completely ruining verified boot and adding a massive amount of attack surface, and I'm not going to go into that. It's what people usually mean when they talk about the OS being "rooted". It isn't what userdebug builds provide and, unlike userdebug, is not something that we will ever officially support or make available. A variant of GrapheneOS with that kind of root access does not and will not exist. A variant with root access does exist: userdebug builds of GrapheneOS have stable releases and are fully supported. It isn't what most of our users want and only a few people are actually using userdebug builds on their devices, but you cannot pretend that it doesn't exist. I have multiple devices running userdebug builds of the most recent stable tag or of the development branch. My personal device for normal usage runs a user build and does not / will not have ADB access or other development options enabled either (ADB access is available in user builds, just not root access or bypassing whether apps have the debug flag set).

The build type wanted by the vast majority of GrapheneOS users on
their devices (including myself) is a user build. The vast majority of
users (99.9%+) do not have any use case for a userdebug build and would
never take advantage of it. In reality, it has very little use case aside
from OS development. It isn’t needed for app development and would rarely
ever be useful for that. People using userdebug builds would only serve to
harm them, with very few exceptions. They are available as an option to
people working on OS development or who want root access for some other
reason including their religious beliefs about software and licensing, but
those users are not typically interested in GrapheneOS in the first place
and have largely worked to cause harm to GrapheneOS and the community by
spreading misinformation and attacking it. I think it should be obvious why
our official builds are the build type wanted by the vast majority of the
GrapheneOS community. They would be upset if we only provided user builds.
If we did provide userdebug builds, they would need to be signed with a
different set of signing keys and those new verified boot keys would need
to be added to Auditor and AttestationServer with ‘GrapheneOS insecure
debug build’ shown as the OS instead of ‘GrapheneOS’. Root access could be
used to bypass the attestation security model and the assurance provided by
Auditor / AttestationServer would be lower without making sure there is no
attached USB device and then forcing a hard reboot by holding the power
button until the device forcibly shuts down. It’s important that people
hold it long enough and do not get tricked by an attacker pretending to
reboot, which is a serious problem with trying to use rebooting as a
workaround for it. For example if you need to hold it for 10 seconds, the
attacker can fake a reboot at 8 seconds and most people aren’t going to
notice. Requiring a reboot for Auditor to work properly makes it
substantially less usable and there isn’t a comparable workaround for
AttestationServer since the whole point is opting into automatic scheduled
verification instead of manual verification.

The 4 original essential software freedoms as defined by the Free
Software movement are granted. However, since the inception of the 4
original essential software freedoms, other issues came up sometimes called
tivoization, malicious feature, antifeature, tyrant software, treacherous
computing or DRM (digital restrictions management).

I don’t have any iota of interest in a religion/ideology built around
software and software licensing. My experience with Free Software
ideologues is that they’re dishonest, manipulative and have gone out of the
way to cause harm to myself and GrapheneOS through spreading
misinformation. You folks go out of the way to cause harm and do not think
about things in a rational or reasonable way. The FSF is fine with
proprietary firmware/hardware as long as it cannot be updated. If you
prevent updating proprietary firmware, it doesn’t violate their rules. The
entire OS could be treated as firmware that cannot be updated too,
providing a completely locked down appliance with no security updates and
which conforms to the FSF rules. That’s exactly the kind of path being
taken by certain people to create a FSF approved mobile device. Sorry, but
none of this makes any sense to me and is just a bunch of silly semantic
games rather than anything to do with privacy, security or even user
freedom. It’s not relevant to the real world and I’m not interested in a
discussion based on irrational religious beliefs.

We officially support userdebug builds of GrapheneOS and you should not be
claiming that we don't. We do not support persistent or app-accessible
root and never will do that. The only form of root access that will ever be
supported is temporary root access by the owner of the device until reboot
on userdebug builds. We already provide this, and don’t pretend that it
doesn’t have severe consequences. It makes no sense for people to pay the
cost of those consequences when they aren’t ever going to use it and have
no use case for it, which is why it’s limited to userdebug builds. We
officially support it and it’s available + tested for each stable release.
What you are claiming doesn’t add up at all. If you want us to provide
official userdebug builds, provide the necessary funding for a workstation
and server storage. You cannot reasonably complain that we do not offer
them when you folks do not make it possible for us to offer them despite us
wanting to do it. It is not my fault that the few people who want this are
unwilling to support it. The vast majority of people do not want it and we
cannot reasonably use their donations to provide it. It would not be what
they donated to support. The tiny niche of people who want official
userdebug builds would need to put together the resources for us to build
and host the releases or stick to doing it themselves which a couple people
are already doing. They are free to publish their builds for others. I’ve
put in a huge amount of work to have an extremely well documented build
process along with it being incredibly easy to deploy over-the-air updates
including delta updates to minimize bandwidth usage. I’ve put a lot of work
into writing and publishing scripts to make everything easier including
managing signing keys encrypted with scrypt + AES, fully signing releases
with those, etc. along with making sure builds are reproducible and fixing
issues with that.

Non-root enforcement can be considered a lesser form of tyrant software or antifeature since it doesn't restrict flashing an alternative, but it cripples the system in major ways. You might argue you can use a userdebug version or a software fork and compile a version that doesn't do this, but then the networking effect and scale of a project becomes so great that rolling one's own fork has negligible effects and upstream choices are the de-facto state of things.

You're trying to misrepresent a userdebug build as comparable to a fork or modification of the OS. It's an officially supported build variant of GrapheneOS. It uses exactly the same sources without any modification and does not have any negative network effect. The scale of the 'project' is non-existent since it already exists and is officially supported. You just don't like that it isn't what most people want, and don't want to admit that it has serious downsides. We're already having to consider dropping devices and scaling back aspects of the project even without doubling the amounts of builds. If the few people who want this won't provide funding, how do you expect it to happen? Either people need to provide funding for us to do it or they need to do it themselves. It's already officially supported and the only thing that needs to be done is building a debug variant of the entire OS for each supported device. I'd need a whole new local workstation/server to support that without disrupting development, and even then I'd still need to invest my time in building and maintaining another workstation and dealing with making these builds on it. I am willing to invest my time in that, but I'm not spending a bunch of money building a powerful server to build a dozen extra builds of the OS in a reasonable amount of time.

Non-root enforcement is also similar to DRM. While DRM is about applications which don't allow users to easily, freely copy data on their own devices, non-root enforcement here leads to users not being able to backup/copy/migrate their application data from one phone to another. Either the application has a backup / data export feature or data is "trapped" inside the phone. Even with an app-dependent app data backup feature, it's better if users who choose so can get access to the raw data stored by the app, for convenience (not using tons of different data export features rather than scripting backups; a data export feature may be incomplete; analysis of app data by the user).

Maybe you should educate yourself about Android including backup services,
ADB and userdebug builds before making outlandish claims about it. AOSP and
GrapheneOS have an official OS backup mechanism available via both ADB and
backup services integrated into the OS. GrapheneOS includes Seedvault, but
the backup functionality is available via ADB either way.

Non-root enforcement also aids DRM-enabled applications. If GrapheneOS gets more popular, perhaps picked up by phone manufacturers or resellers, mobile carriers, it will be easy for application developers to utilize DRM to prevent the user from accessing application data, making the phone work in the interest of the application developer rather than the phone user.

A fork of GrapheneOS signed with different keys is not GrapheneOS and is
not something we can control short of not using the Free Software licenses
that you hold so dear. The only solution to people forking the project and
using it in ways that we don’t want is disallowing it in the licensing
which you would be against. Do you want us to forbid commercial usage of
the OS again?

It sounds like you have a problem with the consequences of truly free
software which includes being able to turn it into proprietary software or
using it to create locked down systems. That’s unlike non-free licenses
like GPLv3 which forbid certain kinds of products. GPLv2-only licensing
forbids mixing it with GPLv3 code. The license is incompatible with ITSELF.
Linux kernel code cannot be included in GPLv3 projects. GPLv3 code cannot
be included in the Linux kernel. That is a severe restriction of freedom with
serious real world consequences. The restrictions on freedom by the GPLv2
and GPLv3 also prevent including the code in projects which cannot or will
not make those concessions, so code that is locked up as GPLv2/GPLv3 cannot
be sent back upstream to projects like OpenBSD. Forking permissively
licensed code and locking it down as GPLv3 is something that I consider bad
behavior and it’s certainly in opposition to wanting to be a good citizen
and contributing whatever we can back upstream. OpenBSD considers GPL to be
a non-free license and will not include GPLv2 code whenever they can avoid
it. They absolutely will not ever include GPLv3 code as they consider it
completely non-free. These definitions of freedom are very subjective and
most of the world does not agree with your extreme ideological views.
Freedom includes the freedom to make a locked down, highly secure device
significantly more resistant to compromise. It includes the freedom to
support and use features like verified boot, which is also counter to that
ideology.

Software that moved to GPLv3 was entirely replaced nearly everywhere and
most GPLv2 software other than Linux has better alternatives. Linux itself
has horrible safety/robustness/security with no realistic way of fixing it
and a trajectory headed for making it far worse over time, so incrementally
or outright replacing it is important. That will be the end of the normal
GPL licenses in most places. What’s the justification for the GPL
restricting freedom when it hasn’t actually worked and has deterred people
from using that software and driven them towards freer licenses like MIT?
It sounds like you want an even more restricted license if you expect DRM
to be forbidden. The only way of forbidding those things is restricting the
freedom to do it like GPLv3 but actually beyond GPLv3 with clauses that
would be incompatible with it since they’d have to go further… maybe that
is exactly what the GPLv4 will do and even fewer people will use it, and it
will drive people away from using GPLv2/GPLv3 even further just like GPLv3
did to GPLv2.

Phone manufacturers or resellers, mobile carriers couldn't be blamed for refusing root access. That would already be the GrapheneOS default. They could conveniently blame it on "security". Some power users might be able to flash a root-enabled version but the effect would be negligible. In practice, this will result in a lot of users having their freedom restricted.

A fork of GrapheneOS signed with different keys is not GrapheneOS, so you are not talking about GrapheneOS anymore but rather something else not relevant to the discussion. I love the manipulative scare quotes around security even though the security issues it causes are very real and very impactful. It's far more relevant than zero day exploitation for most users.

[1] But even the security vs user freedom view is a false dichotomy. Bootloaders allow for flexibility to boot into a root-enabled mode. There could be a key combination and/or boot menu which allows users to boot into root-enabled mode. There could be timeouts [the user has to wait 5 seconds before proceeding past an anti-root warning] / strong warnings. Booting into root-enabled mode could make subsequent boots into non-root-enabled mode show a warning that the device may be compromised due to a previous boot into root-enabled mode.

It isn’t a false dichotomy. I suggest reading through what I’ve written,
doing your research and actually putting in some effort to understand the
security model. There are already userdebug builds, i.e. a root enabled
mode requiring enabling an option within the owner account AND having
physical access to the device at the time that it’s used. It is ‘temporary’
in that the option to use it remains available but the access itself goes
away on reboot - but not whatever changes were made to persistent state
using that access.

The question is rather, how much time/effort/money would be required to grant user freedom (root) in a secure way (such as alternative boot options)? If you were offered 1 million USD and had time, could you implement root access in a secure way? This is unrealistic and just an example to encourage imagination of solutions. Is this really a question of unsolvable security issues vs user freedom? Or is it rather prioritization of the effort/time/money required to implement user freedom (root) vs other goals (just don't prioritize user freedom, make something work for novice users, monetize more quickly (understood, we all need to eat))?

There is already official support for userdebug builds in GrapheneOS.
People are free to use those but it isn’t what the vast majority of the
GrapheneOS community wants to use. It has hardly any use case aside from OS
development. Give me 20k USD in funding and I will publish official
userdebug builds signed with different keys alongside the regular releases
for 2 years after I get the parts to build a new workstation/server, etc. I
cannot do it without build hardware and server space for it. It could also
just go on a separate update server.

Calm down, no need to get worked up.

It’s your choice to spend your time attacking the project by spreading misinformation and spin about it

You’re trying to misrepresent

etc.

Newsflash: different people, different opinions.

Are you going to apologize? No.

The argument got heated unfortunately and the mobile section is outdated.

Although it is hosted at Kicksecure now, the thread is here, so maybe it is better to continue here.

I know this is an old thread, but that page needs some reviews.

I could be a mediator (I use GrapheneOS, but am not a dev)… anyway, after two years nobody is angry anymore and possibly some stances have changed.

I'm not an Android or mobile OS developer, just sharing what I know.

A lot of the points below are just simplifications of the posts above, so as to not repeat the same text and serve more as a summary.

I will compare the Whonix design to GrapheneOS to clarify some points for Patrick. Although they are distinct systems with distinct security features, I will use this method so it can be better understood; not that the options presented are equivalent.

So let’s see:

Sections: separate into recommended and not recommended OSes.

Madaidan agrees, for clarity.
Patrick disagrees because it is just a page for listing mobile projects.
I agree because I don't think just mentioning OSes and some sources helps in making a good decision; more detailed information is needed.
If GrapheneOS gives easy-to-use root, then attackers will have the same.

I would leave GrapheneOS, Android and iPhone on the recommended list for security. One can only be private if they are secure.

The rest of the projects I'd put in the "avoid" section, as they do not implement modern security features, and sometimes even disable them.

Root

  • Argues that allowing users to gain root (superuser) access would inevitably break the security model and that there is no conceivable solution that can uphold both user security and freedom.
  • Potential Conflict of Interest. If GrapheneOS wouldn't disable easy-to-use technical ways that most layman users can use to gain root and/or to keep control over the software running on their devices, then GrapheneOS's chances of ever getting a highly profitable hardware producer partnership would be severely diminished.

Users don't need root and are very much recommended not to use it. Legacy applications required root to do things that lesser-privileged permissions can now do.
If root is allowed on the distributed build, then devices would be prone to a great variety of attacks.

Also, regarding the userdebug builds mentioned by Daniel above: they take development time, and anyone can build them. It would be the same thing as asking to ship an I2P workstation and gateway.

Just as an example, let's do "s/root/netvm/g":
An AOSP user should use applications without having root.
A Whonix user should use applications in the workstation, without having control of the NetVM.

An AOSP user should use the user space for applications, and without root, they cannot do nasty things, or even nastier ones.
A Whonix user should use the workstation space for applications, and without access to the gateway they are safe.

An AOSP user needs to build to get root.
A Whonix user needs access to the gateway to change the gateway.

AOSP and Whonix are not restricting freedom; they are enforcing security: on AOSP by not adding root, and on Whonix by isolating the machines.

I understand that a Whonix user can edit the gateway easily, while GrapheneOS root is not easy and needs a build. But that is a security stance.

Verified boot

Full verified boot which would be great if the key would be held by users and encouraged through a first start process or similar instead of held by the developer.

The user would need to build and sign the releases.
But here is the thing: this is not a GrapheneOS issue, nor an AOSP issue; this is a feature that can be used by anyone who builds the program. GrapheneOS is using verified boot, but it needs a signing key and a repo distributor, and that is the project's contributors.

On a Linux desktop such as Debian, we only need to download a signing key and change a sources list entry; in the same way that this is easy, it is much worse for security.

So it is not restricting the user; they have build instructions: Build | GrapheneOS.
Also, for AOSP, having an entry in the settings to add a new repo would be insufficient; someone has to sign the firmware for verified boot to work.
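
A minimal sketch of the "user holds the signing key" idea; HMAC stands in for a real public-key signature scheme purely to keep the example self-contained (real verified boot uses asymmetric signatures):

```python
# Sketch of "the user holds the verified boot key": the user signs their own
# build and the device accepts only images carrying that signature. HMAC is
# a stand-in for an asymmetric signature scheme, purely for self-containment.
import hashlib
import hmac

user_key = b"key generated and kept by the user"

def sign(image: bytes) -> bytes:
    return hmac.new(user_key, image, hashlib.sha256).digest()

def device_accepts(image: bytes, signature: bytes, enrolled_key: bytes) -> bool:
    expected = hmac.new(enrolled_key, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

image = b"user-built OS image"
assert device_accepts(image, sign(image), enrolled_key=user_key)
assert not device_accepts(b"tampered image", sign(image), enrolled_key=user_key)
```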

Where is such a list? At time of writing, there is no recommendation on that page.

It’s possible to implement this in a secure way. When there’s a will, there’s a way. But if there’s no will, there’s certainly reasons.

At the very least this could be implemented as a boot option. Here's the design plan for how this user/admin/superadmin isolation could be implemented in Kicksecure / Whonix: Multiple Boot Modes for Better Security: an Implementation of Untrusted Root

But it’s not wanted by GrapheneOS lead developer:

Quote GrapheneOS lead developer

GrapheneOS is not aimed at power users or hobbyists aiming to tinker with their devices more than they can via the stock OS or AOSP.

Refusing root rights without a user data wipe has many repercussions. From the iPhone and Android table:

  • Internal storage can reasonably easily be removed and mounted elsewhere for the purpose of data recovery or hunting malware / rootkits. - No.
  • Internal storage can reasonably easily be decrypted once transferred to a different device if password is known. - No.
  • Can reasonably easily boot from an external hard drive, ignoring the internal hard drive, for the purpose of data recovery or hunting malware / rootkits. - No. But any Android phone currently has this issue.
  • Can reasonably easily create a full data backup. - No.
  • Applications cannot refuse data backup (for purpose of malware, spyware analysis or backup and restore). - No.
  • No culture of users having to ask the device (code) for permission, with the device (code) deciding to grant or refuse the request. - No.

Without root, investigation of any compromise is hindered. It is not possible to create a full raw backup, boot, create another full raw backup, and then compare the changes on the disk.

Without such essential freedoms easily accessible, I consider this a platform-inherent security risk, as a (high-profile) user suspecting they've been compromised cannot hand their device to anyone capable of malware investigation. The data isn't accessible. The device locks the user out from their own data with no recourse.

And that was asked. And someone is free to point out “there’s no I2P on Whonix-Gateway”, “there’s no I2P inside Whonix-Workstation”.

In Installation and Fix of i2p inside Whonix-Workstation by Default I have been answering to the maximum of my ability and created config options to simplify this (the part about redirecting custom workstation ports to the gateway using anon-ws-disable-stacked-tor).

Such modding, even if not implemented in the default Whonix builds, is however very much welcomed.

Indeed. Undeniable. It’s a question of whether the projects wants that or not. Plans the feature, prioritizes or rejects the feature.

I guess it boils down to:

Does one oppose the War on General Purpose Computing?

Or asked in a different way…

Does one support the right to general computing?

The easiness of providing freedom is the critical point. Is the ease of general computing a development goal or not? Should the user be in ultimate control of all the programs running on their device, or should developers control users through Device Attestation such as SafetyNet? That's a point that I want to be included in the mobile project comparison.

Should the user be in control or should the app vendors be in control?

Not providing easy access to root rights and supporting device attestation means that the app vendors should be in control.

There is none.
What I am proposing is creating new sections in the MobileOS page.
Recommended and Avoid sections.
Madaidan said to separate here

No, it is not possible.
It requires unlocking the bootloader and disabling verified boot.

As explained above, this would remove security features.
Also, multiple boot modes on mobile are for debugging purposes, not for secure usage.
On computers, multiple boot options are easily available because there is not even Secure Boot for the majority of PCs.
I also think that, as this removes verified boot, there will be malware persistence across reboots.
https://madaidans-insecurities.github.io/android.html#rooting
In addition, root fundamentally breaks verified boot and other security features by placing excessive trust in persistent state.

No, but I believe that is because the decryption keys are in the chip.

Not sure what this means exactly, but I can back up my profile/user, though probably not all the partitions. So it is not a "full" backup, but a full "user" backup.

Isn't that the security of it, not being able to run any root commands?

Not easy to do that on mobile; yes, there are few vendors, but with OEM unlocking, building and signing, it is possible, just not documented on the GrapheneOS site.

Regarding root: locking it down vs. not documenting it and advising against it are different things.

Regarding attestation, GrapheneOS uses their own attestation server. On the Auditor app you cannot change the server; you would need to build it yourself with your own server configured.
But you don't even need their servers: you can download the Auditor app onto another Android device and verify the first device with it.


In summary, you (Patrick)

  • wish for root to be easily available, as a boot mode or a distributed build.
  • wish to have full control of the device, but as of now that requires you to build and sign, and that is how mobile systems work.

On the other hand, GrapheneOS and AOSP:

  • will not provide easy root because of many security issues, including breaking verified boot
  • breaking verified boot leads to malware persistence
  • full control of the device should not be implemented for distributed user builds

Making any user have root, in the name of opposing the war on general purpose computing, breaks the security model of AOSP.

Having control over everything is great, but that is for advanced users; there is no vendor lock-in from GrapheneOS, it is just not distributed with root, as that is not suitable for users.

Also, the manual malware check of the device is not needed because of verified boot.
So in the end, what remains is a full system backup for some unknown reason, because malware changes are reverted on boot via verified boot, and a user data backup is enough to recreate the same user.

  • /root (system, vendor, oem) - cleaned on boot
  • /user_data (user space applications) - can be backed up

I don't know how malware inspection works on mobile, but verified boot greatly reduces the attack surface.

No. There is no hard technical requirement for that. Lower levels (such as firmware) verify and hand over control to the next level (such as the bootloader). If any stage is broken, then verified boot is broken. The levels are approximately: hardware trust root → firmware → (shim →) (grub) bootloader → (linux) kernel → kernel modules → initrd → (systemd) init.

For example, you can have verified boot with SecureBoot (actually RestrictedBoot) on the Intel/AMD64 platform. Windows uses it.

As for Linux desktop distributions, I don't think any offers full verified boot support yet. But that partial verified boot process is interesting to use as an example of how this could be implemented. The firmware has a built-in key (by Microsoft). It verifies shim and, if the signature checks out, it hands over control to shim. Next, shim verifies grub and hands over control to it. Next, grub verifies the kernel. Then it should verify the initrd, but that's not implemented in any Linux desktop operating system as far as I know. (Chromium OS …) That's where the verified boot implementation stops for now.

So for most Linux desktop distributions, verified boot currently isn't very useful. It's a start of development progress, but it's incomplete. The takeaway message is that any level during the boot process can implement its own key management.

For example, on Linux systems using verified boot, users have to manually sign kernel modules so these can be loaded. But what does that tell us? That users can add their own keys which the Linux kernel accepts to verify kernel modules. There's no need to add a user's custom key to the firmware just to load a custom installed kernel module (such as the VirtualBox kernel module or LKRG).
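
A minimal sketch of that takeaway, modeling per-stage key stores: enrolling a user key at the kernel-module stage (similar in spirit to a Machine Owner Key) without touching the firmware's keys. Illustrative only, not any real keyring API:

```python
# Each boot stage keeps its own set of trusted keys, so a user key enrolled
# at the kernel-module stage never touches the firmware's key store.
keystores = {
    "firmware":      {"vendor-key"},
    "bootloader":    {"distro-key"},
    "kernel-module": {"distro-key"},
}

def enroll(stage: str, key: str) -> None:
    keystores[stage].add(key)

def module_load_allowed(signing_key: str) -> bool:
    return signing_key in keystores["kernel-module"]

enroll("kernel-module", "user-mok")             # e.g. a VirtualBox/LKRG module key
assert module_load_allowed("user-mok")
assert "user-mok" not in keystores["firmware"]  # firmware trust unchanged
```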

Consider a design where the current way to implement verified boot is called "the unbreakable stage1 verified boot".

This stage1 verified boot (immutable, verity-protected, signed file system) could verify a stage2 image. If it's modified, it could report that and refuse to boot (unless some kernel parameter is set by the user, but this needs more description).

Similar to the Android A/B (Seamless) System Updates concept, where the upgrade is supposedly atomic. If it fails, it reverts to the previous one. Two images: the old one and the new one. And these don't break verified boot either.
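
A minimal sketch of that A/B idea, using illustrative slot metadata (not the real Android boot-control interface): the bootloader boots the new slot while retries remain and falls back if it never marks itself successful:

```python
# Sketch of A/B slot selection with rollback: the updater writes the new
# image to the inactive slot; the bootloader boots it while retries remain
# and falls back if it never marks itself successful. Field names are
# illustrative, not the real Android boot-control HAL.
slots = {
    "a": {"image": "v1", "verified": True, "tries_left": 0, "successful": True},
    "b": {"image": "v2", "verified": True, "tries_left": 3, "successful": False},
}

def select_slot(active: str) -> str:
    slot = slots[active]
    if slot["verified"] and (slot["successful"] or slot["tries_left"] > 0):
        slot["tries_left"] = max(0, slot["tries_left"] - 1)
        return active
    fallback = "a" if active == "b" else "b"
    print(f"slot {active} unhealthy, falling back to {fallback}")
    return fallback

print(select_slot("b"))  # boots the new slot; reverts to "a" once retries run out
```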

This blog post goes into that direction: Fitting Everything Together
But while probably well intentioned, the “Developer Mode” described in that blog post could also lead to further user freedom restrictions.

At the moment the big corporate surveillance complex cannot yet lobby governments to mandate that only big-corporate-signed full verified boot operating systems can boot. But if most devices use verified boot anyway, and only a few Linux desktop computers remain, while they're one day also only available with full verified boot… At some point there are not many devices left where general computing is possible without the approval (signature) of a third party. And then a law mandating that only corporate-signed full verified boot operating systems can boot becomes more likely.

Indeed. It’s mentioned in the Android vs iPhone table footnotes.

I am referring to the following. It’s in the table, footnotes…

Quote <application>  |  Android Developers

android:allowBackup

Whether to allow the application to participate in the backup and restore infrastructure. If this attribute is set to false, no backup or restore of the application will ever be performed, even by a full-system backup that would otherwise cause all application data to be saved via adb. The default value of this attribute is true.

Currently on most Androids (which have verified boot + refuse root rights to the user), if an application is configured by the vendor with allowBackup=false, then no backups are possible unless verified boot is broken and/or the device has been rooted.

If no backups are possible, then it’s also not possible to inspect the data. In essence, the app vendor gets a bit of private storage on the user’s device which the user cannot access.
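
A tiny model of that effect; the app names and data are made up:

```python
# Model of the android:allowBackup effect quoted above: a full backup simply
# skips apps whose manifest sets allowBackup="false", so their private data
# stays out of the user's reach on a locked device. Illustrative data only.
apps = {
    "messenger": {"allow_backup": False, "data": "chat history"},
    "notes":     {"allow_backup": True,  "data": "memos"},
}

def full_backup() -> dict:
    return {name: app["data"] for name, app in apps.items() if app["allow_backup"]}

print(full_backup())  # {'notes': 'memos'} - the messenger data is "trapped"
```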

Operating system and app vendors dictating such restrictions to users is a bad direction to go, specifically as more and more devices already lock out the user in that way (most phones used by most users, smart TVs, tablets and whatnot) and keep developing in that direction, while the few remaining ones, desktop computers, are already in decline as well as going in the same direction. Referring to the Intel/AMD64 platform with RestrictedBoot.

The question boils down to which one you support: power to the hardware/software vendors, or power to the users.

No. Verified boot unfortunately isn't a cure for all security threats. If verified boot is broken, then it's broken. Meaning, malware can then persist.

Any locked Android device where the modding community managed to find a vulnerability and exploit it to free/unlock the bootloader is evidence that verified boot has been broken in the past. To illustrate:

  • When the modding community broke the bootloader, they did it to gain control over their hardware.
  • When Advanced Mobile Phone Spyware breaks verified boot / the bootloader, it does so to spy on the user while at the same time keeping the user locked out from auditing. Malware uses technically similar methods to how the modding community sometimes achieves unlocking of the bootloader. This then invalidates any security advantages of verified boot. And that is the worst of both worlds: malware has the ability to spy on the user, while the user still has no ability to perform an audit of their device.

User freedom and security auditing are crucial in security. Verified boot plus locking the user out amounts to "trust us".

It mostly doesn't. Security researchers have to invent complex hacks to get around the obfuscation and locks. This requires a bootloader unlock. But if a bootloader unlock isn't possible without a user data wipe, then auditing is prevented. Here's a research paper that briefly mentions research hurdles versus user-locked-out bootloaders, as well as the research methods.

Quote:

Reverse Engineering

A fairly substantial amount of non-trivial reverse engineering is generally required in order to decrypt messages and to at least partially decode the binary plaintext. 1) Handset Rooting: The first step is to gain a shell on the handset with elevated privileges, i.e. in the case of Android to root the handset. This allows us then to (i) obtain copies of the system apps and their data, (ii) use a debugger to instrument and modify running apps (e.g. to extract encryption keys from memory and bypass security checks), and (iii) install a trusted SSL root certificate to allow HTTPS decryption, as we explain below. Rooting typically requires unlocking the bootloader to facilitate access to the so-called fastboot mode, disabling boot image verification and patching the system image. Unlocking the bootloader is often the hardest of these steps, since many handset manufacturers discourage bootloader unlocking. Some, such as Oppo, go so far as to entirely remove fastboot mode (the relevant code is not compiled into the bootloader). The importance of this is that it effectively places a constraint on the handset manufacturers / mobile OSes that we can analyse.

Unbreakable verified boot + user locked out = no auditing possible for users. They are then only users, guests, non-administrators on their own devices.

Expanded a lot on bootloader locking, verified boot, non-root enforcement versus root rights, the ideological conflict of user-freedom prioritization versus app developer prioritization, user controlled keys, verified boot + locked bootloader + root compatibility.


Added:

The section about GrapheneOS is misleading/inaccurate and should be fixed/removed.
Disclosure: I am a moderator on GrapheneOS’s Matrix and Telegram channels (not an actual developer).

Comes with numerous anti-features. Some of the same anti-features as Google Android Anti-Features

Without arguing whether the features mentioned are anti-features or not (though most of them are security features, not anti-features imo), they are the same on every Android distribution, including CalyxOS, /e/ OS, LineageOS, and CopperHeadOS. It is unfair to say this specifically about GrapheneOS but not about the other operating systems.

Argues that allowing users to gain root (superuser) access would inevitably break the security model and that there is no conceivable solution that can uphold both user security and freedom.

Only LineageOS and /e/ OS ship userdebug builds. I know that CalyxOS does not, and I assume CopperHeadOS does not either. Again, it is unfair to say this about GrapheneOS and not the other operating systems.

Worth noting:

  • Neither LineageOS nor /e/ OS supports verified boot
  • Both of them come with significant security regressions beyond just not having verified boot, including not shipping firmware updates. /e/ bundles in years-old versions of Orbot and calls it their “IP scrambler”. This is specific to those 2 operating systems and not mentioned anywhere on the wiki.
  • Unlike on Linux, apps are designed to work without root, so there really isn’t any significant reduction in freedom.

Sometimes when they use the word “security” in connection with GrapheneOS, they do not mean what one would normally understand by that word: protecting your machine from things you do not want. They mean upholding the much-praised “Android Security Model”, which includes providing guarantees to app developers that the operating system will behave in a certain way, at the expense of user freedom (anti-features).

Not sure where this even comes from. GrapheneOS provides significant user controls over what apps can and cannot do beyond just the “Android Security Model”.

See some of its user-facing features:

  • Network permission toggle
  • Sensor permission toggle
  • Storage Scopes
  • SUPL control
  • Sandboxed Play Services (which runs Play Services unprivileged and forces it to play by the permission system)

GrapheneOS already provides much better control and guarantees regarding what apps can and cannot access compared to “rooted” Android phones, whether rooted via adb or via Magisk.

  • Denied access to the device’s hosts file (“/etc/hosts”) which can be used to block advertisements.

This is from the Android anti-features section, but I want to point out that DNS-based blocking can still be done with a VPN/custom DNS server. Regardless, either solution is privacy and security theatre and is trivially bypassable.
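For the record, the VPN-based approach mentioned above looks roughly like this (a hedged sketch only; the resolver address is a placeholder, and a real blocker would also need to read packets from the tun device and answer or forward the DNS queries itself):

```kotlin
import android.content.Intent
import android.net.VpnService

// Sketch of DNS-based blocking without root or /etc/hosts edits:
// a VpnService directs DNS queries at a filtering resolver.
class DnsFilterService : VpnService() {
    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        Builder()
            .setSession("dns-filter")
            .addAddress("10.0.0.2", 32)   // virtual interface address
            .addDnsServer("10.0.0.1")     // placeholder: the filtering resolver
            .addRoute("10.0.0.1", 32)     // only DNS traffic enters the tunnel
            .establish()                  // returns null if VPN consent was revoked
        return START_STICKY
    }
}
```

The service still has to be declared in the manifest with the BIND_VPN_SERVICE permission and started only after VpnService.prepare() obtained user consent. And as said above, an app that hardcodes its own resolver bypasses this entirely.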

More and more businesses communicate over proprietary messengers such as WhatsApp and WhatsApp cannot be used on rooted devices or with custom ROMs.

WhatsApp works just fine on GrapheneOS.

More and more government services require the same. For example, an Android or iPhone with Google Maps location history enabled and Skype is mandatory for entering Japan. Google Maps is produced by Google and Skype by Microsoft, which are among the most privacy-intrusive companies.

This is not GrapheneOS’s problem. If the government wants that information, then you have to give them said information. It doesn’t even matter if it is Android or a traditional Linux desktop operating system.

Many people would lose their job if they decided not to use, for example, WhatsApp, since many companies internally use WhatsApp.

Again, WhatsApp works perfectly fine on GrapheneOS.

There are still 2 billion unbanked people, people who do not even have access to the most basic financial services such as a bank account. For unbanked people it would be unreasonable, and should not be expected of them, to refuse their first chance to use a mobile banking app over such restrictions.

A significant number of banking applications do work on GrapheneOS. I have a crowd-sourced list of them on my website. Even if an app does not work, there is nothing stopping them from logging in using a web browser, just like on a computer.

Supports DRM (Digital Restrictions Management / walled garden / anti-freedom / Google SafetyNet-style hardware attestation), where developers can configure their applications to only run on devices with certified firmware; these are technologies that are part of the War on General Purpose Computing.

Given what is written in the wiki - people not being able to use WhatsApp for their jobs, banking apps not working, etc. because of DRM - would you prefer it if GrapheneOS did not support DRM at all? Because this will not result in those apps changing - it will just result in people not being able to use them, which is the problem at hand.

Besides, how is this GrapheneOS’s problem? Things like SafetyNet are a common issue with custom OSes, not GrapheneOS-specific. Why is this not mentioned against CalyxOS, CopperHeadOS, LineageOS, and /e/ OS?
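For readers unfamiliar with what “hardware attestation” means mechanically, here is a hedged sketch (the key alias and challenge are placeholders) of Android key attestation, the primitive underneath SafetyNet-style checks: the hardware keystore embeds a signed statement about the device’s verified boot state into a key’s certificate chain, which an app vendor’s server can then verify against Google’s roots.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator
import java.security.KeyStore
import java.security.cert.Certificate

// Sketch of hardware key attestation. "attest-demo" and the
// challenge are placeholders chosen for illustration.
fun attestationChain(challenge: ByteArray): Array<Certificate> {
    val spec = KeyGenParameterSpec.Builder("attest-demo", KeyProperties.PURPOSE_SIGN)
        .setDigests(KeyProperties.DIGEST_SHA256)
        // Asks the secure hardware to embed a signed attestation record
        // (including verified boot state) into the certificate chain.
        .setAttestationChallenge(challenge)
        .build()
    KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
        .apply { initialize(spec) }
        .generateKeyPair()
    val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    // The app vendor's server walks this chain up to Google's attestation roots.
    return keyStore.getCertificateChain("attest-demo")
}
```

This is why the dispute is ideological: the same mechanism that could let a user verify their own device also lets an app vendor refuse to run on anything but vendor-certified firmware.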

Potential Conflict of Interest. If GrapheneOS didn’t disable easy-to-use technical ways that most laymen users could use to gain root and/or to keep control over the software running on their devices, then GrapheneOS’s chances of ever getting a highly profitable hardware producer partnership would be severely diminished.

The supported “ways” to gain root are not in the stock OS; GrapheneOS can’t “disable” them when they don’t exist to begin with. Also, as mentioned above, the apps are designed to work unprivileged, so what control are you even losing? If anything, I would argue that designing a system with root and then catering to apps which insist on having it is anti-freedom, because it will make it significantly harder, if not impossible, to control what those apps can do. With that being said, the same thing can also be said about every other Android-based OS on the list, so why is it only said about GrapheneOS?

Full verified boot, which would be great if the key were held by users and encouraged through a first-start process or similar, instead of held by the developer.

The user can make their own build and sign it with their own keys. The same thing can be said about every other Android-based operating system. I don’t see how it could work any other way.

Other approaches like Heads are akin to downloading random binaries from the developers, then blindly signing them and gaining 0 security/freedom in the process. If anything, such an approach is quite bad because you cannot even do automatic updates anymore. If you use automatic updates and see a warning, you would have no idea if it is because of an update or because of actual tampering/corruption.

Ironically, in order to purchase a device compatible with GrapheneOS, one has to buy a supported Google Pixel device and therefore, with the purchase, support one of the biggest anti-privacy, most data-harvesting and user-freedom-prohibiting companies in the world, Google.

Currently only the Pixels meet the hardware requirements, one of which is support for verified boot with third-party operating systems. GrapheneOS is about providing actual security and privacy for the end user, not about being anti-Google.

There are also other inaccuracies, such as:

Location information: IP address, GPS, and other sensors providing information on nearby services such as Wi-Fi access points and cell towers. It was recently discovered Google continues to track users even after they opt-out of Location History.

This is just plain wrong. What’s going on here is that the user disabled Location History in the Google account settings (entirely policy based) as opposed to using the location toggle in the OS (which is OS enforced). This is a messed-up configuration on the user’s part, not a problem with Android, even on the “Google Android” phones.
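To make the policy-versus-OS-enforcement distinction concrete, a small sketch (assuming API level 28+ for isLocationEnabled): the OS toggle is a state the system enforces and any app can only observe, while Location History is an account setting applied on Google’s servers and invisible to this API.

```kotlin
import android.content.Context
import android.location.LocationManager

// Sketch: query the OS-enforced location toggle (API 28+).
// Account-side settings like Location History are not visible here.
fun osLocationToggleEnabled(context: Context): Boolean {
    val lm = context.getSystemService(LocationManager::class.java)
    return lm.isLocationEnabled
}
```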

Local storage: Storing personal information locally with local browser storage (like HTML5) and application data caches.

This is every-operating-system-and-non-Tor-browser ever.

Regarding /e/:

Open source as much as possible.

Nope. This is just marketing. See DivestOS’s notes on /e/ as an example (I am unable to post links so I will just quote part of it here):

  • Includes the proprietary Mapbox library
  • With a tracker
  • Includes proprietary Google Widevine DRM on nearly all devices
  • Includes the proprietary Magic Earth app for navigation, despite user-friendly FOSS alternatives existing such as OSMAnd and Organic Maps
  • Enables Safetynet checks by default which downloads and executes obfuscated proprietary code from Google

Regarding Lineage:

Google services can be optionally installed as an add-on

Which is as privileged as on the stock OS. I don’t see how this fits into the whole freedom/privacy/open source/anonymity thing. The Sandboxed Play Services that’s available on GrapheneOS is what’s actually providing privacy/security/freedom, not this.

Regarding Fairphone:

Hardware: Now the third iteration Fairphone 3 is available and is a testament to the success of the prior models.

They are on the fourth generation now, and both the 3rd and 4th generation have botched verified boot because they use the AVB test keys.

Built for easy hardware repairs and upgrades to combat planned obsolescence.

This is just marketing - they ship software updates late, and the SoC is already a year old or so when the phone comes out. Effectively, a Fairphone only has around 2 years of firmware security updates since the release date, as opposed to 5 years on the Pixel.

Regarding OnePlus:

  • Hardware that grants users the “right to flash”

Nope. It is extremely broken. See CalyxOS’s blog post on the OnePlus 8+9 firmware issue and DivestOS’s issue tracker for the OnePlus 7 series.

1 Like

Thank you for taking time and posting this here!

I am not sure this could be regarded as a revision request from a project representative, but either way: until this wiki chapter has been improved [1], I added the following change.

This wiki chapter about GrapheneOS is currently under revision, being discussed here: https://forums.whonix.org/t/overview-of-mobile-projects-that-focus-on-either-and-or-security-privacy-anonymity-source-available-freedom-software/4557/48

In other words, that chapter is offline for now so this can be discussed and edited without time pressure.

[1] Some points are valid. If any statements apply to GrapheneOS and other distributions, then these need to be moved to separate wiki chapters. For other points, it’s just non-ideal wording. And maybe more. Will see later.

It will take some time to research and address all of this. Sometimes I might quote and discuss some parts. Some stuff I might not address for now. (But that wiki chapter will stay offline, or at least that part will stay removed, until that is done.) In that case, I won’t be trying to avoid/ignore the point. It will be addressed later. Once your post has been fully addressed, I will say something like:

“Post Overview of Mobile Projects - That focus on either/and/or security, privacy, anonymity, source-available, Freedom Software. - #48 by TommyTran732 has now been fully processed (points raised researched, discussed, improved on the wiki page). Should some points have been forgotten or still appear to have some issue, please bring this up again.”

So much for the meta reply. The first reply addressing the content is coming soonish.

I am addressing these points rather “randomly”, starting with things that are easy (for me) and don’t require lots of research.

For all points regarding distributions other than GrapheneOS (quoted now or not), you’re welcome to post references. That would really help. Ideally original/good sources.

If it’s marketing but not actually true, I am eager to improve the wiki.

Where this is coming from… Here’s a quote; I wouldn’t know how to summarize and word it better for now:

Security is very important. Why? In order to not be exploited by strangers (criminals, spies…) against my interests. If security enables exploitation against my interests (by whomever, be it the OS vendor, the movie industry, or the government), it is not the security I want.

Here are some more citations from people who hold that viewpoint:

There are basically two groups.

  • A) The traditional, normal definition of computer security:

It’s about the security of the person who’s talking, i.e. the user’s security. “Freedom security”. The user can remove the microphone permission from the app? That’s “user security”, i.e. just “security”. An app is adamant about location permission or reading hardware serials and otherwise refuses to work, but the OS or some other app provides a way to supply fake information? That’s also “user security”.

  • B) The re-definition of “security” by corporations such as Google, distributions such as GrapheneOS, and others.

These are often security features that vendors want. These vendors want power to control how code is executed / data is processed on users’ devices. This includes DRM and SafetyNet.


An Android app can configure features such as allowBackup=false. This will prevent the user from backing up application data (or at least make it super difficult). That’s an anti-feature. That’s probably part of the “Android Security Model”. I am part of group A). I don’t care what the Android Security Model has to say about this. Let me look at my data on my device. Let me do the backup.

If I cannot look at all data on my device through some normal, easy mechanism, then I am not really the person with the highest user rights, i.e. the device administrator. Then I am just a user, not an administrator. I am not secure if I cannot view that data. It’s part of malware analysis. It’s a basic feature of “normal” Linux distributions (such as Debian) that at least root has some way to view all application data and network traffic. Being able to analyze an application, to see what data it sends over the network or stores (either because the app stores it intentionally or because a third party compromised the app), is essential for malware analysis. An operating system having anti-features that prevent this is considered a user freedom restriction, and that part is considered anti-security / insecure.

I didn’t test the functionality / check the security of the implementation of GitHub - chriswoope/resign-android-image: Resign Android OS (esp. GrapheneOS) images with your signing keys and add ADB root and other modifications. But that’s irrelevant for explaining the different uses of the word “security”. As far as resign-android-image’s readme goes, it goes exactly in this direction, having the goal of “real” security and user freedom, where the user and not someone else is in full control of their own devices.

Does this explain what this is about?

I am happy to elaborate about this in the wiki. Could probably use parts of this forum post of mine.

And yes, this wouldn’t only apply to GrapheneOS but also to other distributions. So this will go into its own chapter.