Currently a CopperheadOS (and part-time CyanogenMod/LineageOS) user – sadly, the Tor Project Copperhead version mentioned in the overview is hardly suitable for daily usage because of the painful updating process – I have been watching:

It doesn’t make much sense to claim that not regressing the standard Android Open Source Project security model is restricting freedom. It already worked that way in the first place, and an OS focused on improving privacy / security is in no position to be regressing security. Exposing root to the Android Debug Bridge like a userdebug build adds some attack surface and hurts the security model (SELinux policy). For users to leverage it, they need to enable developer options and connect via ADB. Even the basic userdebug ADB-accessible su breaks the security expectations of things like 2-factor authentication apps, which are not supposed to have seeds that can be phished, and which may disable backups. The ADB-accessible root is primarily useful for debugging the base OS, since debug builds of apps can already be debugged without root.

The freedom is an unlockable bootloader and, very importantly, full support for locking the bootloader with a 3rd party OS, with verified boot / rollback protection available to the 3rd party OS. On a Nexus or Pixel device, the security of verified boot (and on a Pixel 2, direct key enforcement and rollback protection) is available to a 3rd party OS, and CopperheadOS may be the only case of that being used, since everyone else is focused on rolling back security rather than matching the baseline and then substantially improving upon it.

What are users without the experience to build and flash going to do with ADB-accessible su? They can’t modify the OS partitions because there’s verified boot and updates are block-based. They would need to unlock the bootloader (disabling boot.img verified boot and allowing flashing) and disable verified boot for system/vendor with a modified boot.img. Since there are delta updates and updates are block-based (for verified boot), over-the-air updates would no longer work. They’d need to sideload updates via the full update package (to avoid deltas), and that ends up restoring a pristine set of OS and firmware partitions since it’s writing out blocks, so any modifications would need to be redone.

Exposing root to requests from apps isn’t something that’s available in the Android Open Source Project; it adds enormous attack surface and drastically impacts the security model even if it’s never enabled by the user. There’s no security-aware implementation of that either. All of them have had very obvious privilege escalation issues, and this was a small part of why CopperheadOS moved from CyanogenMod to AOSP: CyanogenMod had massive security regressions from AOSP, which is still true today for LineageOS. If the user ever actually uses it, they’re turning the entire application layer and the app itself into root attack surface too. A vulnerability in the application layer or the app is now a root exploit. Implementing features by having apps request root is an awful hack and isn’t the right approach in an OS with a basic level of security. There’s almost nothing running with those privileges in the base OS either, only very core processes like init/vold.

Similarly, you’re stating there that Pixel, Pixel XL, HiKey and HiKey 960 support is customer-only, but that’s not true. HiKey / HiKey 960 support is for developers, with no product available based on it. Official releases for the Pixel and Pixel XL can’t be purchased. They’re not published for non-internal use at all. The convenience of getting a device with the OS preloaded on it and receiving over-the-air updates can be purchased, but not releases to flash on devices. The Nexus official releases were public, but that prevented selling more than a handful of devices, so the convenience gap was made larger by making the non-preloaded OS a source-based distribution instead of a binary-based one. The scripting to run an update server is public and it’s extremely easy to set one up, so there are people running their own for their own builds. The server side is just a static web server, and the update client is open source like the rest of the OS. The builds themselves are what take time, and occasionally some small effort to debug a build environment issue, like when curl 7.56 had bugs breaking AOSP builds, which impacted people on rolling releases adopting each new stable release.

You should probably just remove CopperheadOS from that page, since it’s explicitly not ‘Libre’ per the usual definition and the title states that. The published sources do permit usage, modification and redistribution. However, doing those for profit hasn’t been permitted since the Marshmallow-based releases that were GPL3-licensed and the earlier releases that were more permissively licensed. It was going to come to an end anyway because it wasn’t at all sustainable; instead, it switched to a sustainable licensing model where commercial usage requires paying for it, and now it’s a sustainable business able to grow and hire more developers to accomplish the technical goals. Everything that can be landed in the upstream projects is also still submitted / landed there. GPL3 didn’t make that any different, since the upstream projects wouldn’t accept it like that; either way, code is relicensed for submitting changes to AOSP, etc. No one contributed before, and the sources were just used to build competing businesses without giving anything back, unlike the partnership we have with Google where we give back to the project we’re building on.

Yes, it isn’t easy at all. Perhaps something like this: »su is only available in userdebug builds, so very few users are actually capable of obtaining root on their own devices« – and without the exclamation mark?

The Whonix wiki, and more so the /Dev pages, sometimes act as my personal notepad. There has been a discussion about the feasibility of a mobile version of Whonix, among other things. In such a discussion, CopperheadOS inevitably comes up. Therefore I wrote down the major pros and cons of the mobile platforms that come up, which is useful for such research:

- that somehow release some source code
- that somehow are focused on FLOSS
- claims about security

thestinger:

It doesn’t make much sense to claim that not regressing the standard Android Open Source Project security model is restricting freedom.

I find the standard Android Open Source Project security model freedom-restricting. Any derivative not changing this inherits this freedom restriction. To gain root, one has to go through a rain dance, i.e. flashing a new image.

I believe there could be a non-internet-facing settings app, which could show an appropriate warning and would then let the user enable root access. It should be possible to implement this in a way that won’t degrade the security of users who don’t use that option. Not having such an option is, in my opinion, freedom restricting. Closing the ticket where the matter was discussed (https://github.com/copperhead/bugtracker/issues/236), as well as its tonality, further strengthened my perception that the freedom of granting root is not on your priority list. Goes without saying, it’s your freedom to act like this and my freedom to point that out.

Should there be any incorrect statements, these of course shall be corrected.

thestinger:

The freedom is an unlockable bootloader and also very importantly full support for locking the bootloader with a 3rd party OS […]

That is very, very nice; however, it is not a substitute for optional root.

thestinger:

You should probably just remove CopperheadOS from that page, since it’s explicitly not ‘Libre’ per the usual definition and the title states that.

Changed the page title to “Overview of Mobile Projects” with the subtext “That focus on either/and/or security, privacy, anonymity, source-available, Libre Software”.

I was over-interpreting the Copperhead website’s statement “Open-source and free of proprietary services. Uses alternatives to Google apps/services like F-Droid.”

Added a bullet point about it being nonfree software and source-available.

rob1:

su is only available in userdebug builds, so very few users are actually capable of obtaining root on their own devices

It should be possible to implement this in a way that won’t degrade the security of users who don’t use that option.

It’s fundamentally not possible to implement it in a way that doesn’t degrade the security of regular users. That’s wishful thinking. It can be implemented in a way that doesn’t have a substantial negative impact, which is what userdebug builds already do, and they are available and work just as well for regular use. It’s no more convenient to make a user build of CopperheadOS than a userdebug build of CopperheadOS, so for someone using it on their own, neither of them is more within reach than the other. The user build is definitely more secure because it doesn’t permit root, debugging the base OS, etc., so it eliminates a fair bit of attack surface. If there wasn’t a difference, we would just ship something closer to userdebug builds, because the only people suffering any real inconvenience from it are ourselves, the developers, not users. It’s not good for much more than debugging the base OS. Users can debug apps either way, and verified boot prevents modifying the OS without flashing, with or without root.

You also haven’t explained what you mean by root access and what you think is gained by having it. What don’t people have with ADB access that they would have with ADB-accessible su as in a userdebug build? Tell me what freedom it gives them that they don’t already have today, and how it makes it any easier to have that freedom than the existing options. Either way, they are attaching a USB cable, enabling control over the device from the attached computer in the OS and then leveraging it. The adb and fastboot tools come together too. The hardest part is dealing with using a shell.

You’re uncritically repeating things like the claim that a baseband on a separate chip is somehow more isolated than one on the same die as the CPU. In reality, IOMMU containment for SoC components tends to be much better than for peripheral components, because it doesn’t rely on each phone vendor, and the vendors of the peripherals they include, deciding to care about IOMMU security and writing proper drivers / configuration.

CopperheadOS doesn’t fit on that page at all anyway. It’s the only OS you listed that’s doing hardening to improve privacy / security via technical means. The others want to bring people different sets of non-security features / frills, with a focus on power users who want more fine-grained control over their OS. They roll back lots of security features as part of doing that and as part of porting AOSP to a broader set of devices. That’s the opposite of the goals of CopperheadOS. It takes the completely opposite direction, which starts from the baseline of preserving all existing security features, including verified boot and standard SELinux policy, along with sticking to devices with proper maintenance of firmware / drivers. The full development effort is spent on implementing hardening, as opposed to not spending any time on that and rolling back security features. It makes complete sense that there are operating systems focused on meeting different needs with varying goals, and there’s nothing wrong with that. I don’t understand why you think it’s a problem that people have different needs, or how it would make sense for us to hurt our users by adding attack surface.

The part that I find hilarious about this whole thing, and that I see repeated time and time again, is that for some reason we keep spending our time upstreaming hardening features into AOSP and, to a lesser extent, Linux, LLVM, etc., and the thanks we get back from the FOSS community is endless misinformation spread about us. I don’t know why you folks can’t just leave us in peace.

Patrick:

source-available

It’s more than source-available; it can be modified and redistributed.

Patrick:

Goes without saying, it’s your freedom to act like this and my freedom to point that out.

It’s your freedom to spread misleading information and spin about us and it’s our freedom to write our own post on our site calling you out on it. Unlike you, I’m going to stick to accurate, objective statements and I won’t try to mislead people with disingenuous spin.

There are devices where we only distribute userdebug builds (HiKey, HiKey 960) because it’s what the clients paid to have. There are no official user builds for HiKey and HiKey 960. How does that fit into the claims published there?

A userdebug build is also needed to avoid having kernel.modules_disabled=1 set in early boot; otherwise, even with su there isn’t control over the kernel. It doesn’t simply add a su binary usable by the ADB shell user to escalate privileges. It sets up a whole SELinux domain for su where everything is permitted, with a domain transition to it, along with a bunch of changes to other domains to support debugging them, etc. It has a pretty big impact on the security of the SELinux policy, since that is very locked down without this. Usually, there’s barely anything running close to real root: primarily init and vold, and even those don’t necessarily have much control over the kernel.

Exposing root to apps is a whole different story. That significantly reduces security by having complex state and security checks leading to full root access from the application layer. We found multiple privilege escalation vulnerabilities in the CyanogenMod (now LineageOS) su implementation when we lightly audited it. It’s much more complex than the userdebug su, which checks for the shell user (AID_SHELL) from a setuid binary. Simply having state controlling whether root is enabled / accessible to apps, and state for which apps have access to root, destroys the security model. It opens up a huge attack surface for escalating to root, including completely bypassing verified boot. There’s not much point to verified boot if you have state that malware can use to persist with root / kernel level privileges… the main purpose of verified boot is to make it more difficult for an attacker to gain root and keep it across a reboot. A more extreme take on verified boot can extend that to any code execution at all, which is essentially the goal that’s being worked towards, starting with higher privileged code execution. An attacker should need a verified boot exploit to persist with a high level of privilege, or they should need to persist with low privileges and exploit the OS again. It doesn’t just make persistence harder; it means factory resets have very useful security guarantees, and so do updates.
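For contrast, the access check in the userdebug su is about as simple as an access check can be. A minimal Python sketch of that logic (the real AOSP binary is written in C; the uid values are the standard Android ones, but the function name here is illustrative):

```python
AID_ROOT = 0      # uid of root
AID_SHELL = 2000  # uid assigned to the ADB shell user on Android

def su_allowed(caller_uid: int) -> bool:
    """The userdebug su only escalates for root itself and the ADB
    shell user; there is no persistent state and no app-facing UI."""
    return caller_uid in (AID_ROOT, AID_SHELL)

# App uids start at 10000, so any ordinary app is simply rejected:
assert su_allowed(AID_SHELL) and not su_allowed(10057)
```

With no stored allow-list and no prompt flow, there is nothing for malware to toggle or persist; the check is a pure function of the caller’s uid.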

When we put effort into not only leveraging the existing verified boot feature but making substantial improvements to it, including removing dependence on state far less trusted than something like root access controls would be, it would be ridiculous to throw it all away and completely break the feature by fully trusting state with root access. It’s not going to be easy to make progress towards eliminating trust in state, and there are going to be bigger sacrifices than a niche feature that wasn’t present before we started. Eliminating usage of /data/dalvik-cache is largely about avoiding persistent state with non-system_server system app level privileges, by not loading any code from state (/data) in the base OS. There are still issues like fairly trusted package manager state, etc. that need to be eliminated. The last thing we’re going to do is undo all of that security work to add features that aren’t wanted by the niche CopperheadOS is aimed at in the first place.

CopperheadOS makes substantial sacrifices unrelated to power user control to implement security features. Lots of exploit mitigations have a significant performance cost, and some have a compatibility cost in terms of breaking apps that have latent bugs uncovered by them. There are sacrifices to app compatibility to improve the permission model by tightening up SELinux policy and the high-level Android permission model too. For example, a network monitor is bundled with the OS and placed in a special SELinux domain with access to network statistics, and regular apps have their access to network statistics removed. Elsewhere, apps can use /proc/net/tcp to monitor all connections made by other processes on the system. The hidepid=2 feature we integrated and then upstreamed is very similar, as are other usability/flexibility tradeoffs made by Android and the changes CopperheadOS makes to take that further.
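To illustrate what /proc/net/tcp exposes, here is a short sketch decoding one row of that file (assuming a little-endian device, which covers typical Android hardware; the sample row is made up, but it follows the kernel’s format):

```python
import socket
import struct

def parse_tcp_row(line: str):
    """Decode the local and remote endpoints from one /proc/net/tcp row.
    The kernel prints IPv4 addresses as host-byte-order hex, ports as hex."""
    fields = line.split()
    def endpoint(field):
        addr_hex, port_hex = field.split(":")
        # Repack the host-order (little-endian here) integer into dotted quad.
        ip = socket.inet_ntoa(struct.pack("<I", int(addr_hex, 16)))
        return ip, int(port_hex, 16)
    return endpoint(fields[1]), endpoint(fields[2])

# A made-up row in the kernel's format (header line stripped):
row = "0: 0100007F:1F90 00000000:0000 0A 00000000:00000000 00:00000000 00000000 1000 0 12345"
local, remote = parse_tcp_row(row)
# local decodes to ('127.0.0.1', 8080): any app able to read this file
# can watch every other process's connections this way.
```

This is exactly the kind of passive monitoring that removing regular apps’ access to network statistics shuts off.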

Improving security is not easy and comes with sacrifices. It has always been clear to us that we weren’t going to be able to do that while pleasing power users, and it was never our goal. The sources are published and allow people with the knowledge / skills to make their own decisions about these tradeoffs, without needing to negatively impact our customers. There isn’t any code whatsoever that isn’t published, or any internal build documentation. Our employees build with the same build documentation that everyone else can use.

CopperheadOS is also intended to eventually be targeted at much less technical users, where the main threats are the user being tricked into doing something like installing a malicious app or granting it dangerous privileges. The main dangers for them are being able to sideload apps, grant dangerous privileges without a constant reminder, turn apps into device managers, turn apps into accessibility services or grant scary special privileges like drawing over other apps (which was crippled to not draw over the system UI and to show a warning). Having dangerous features within reach is very bad for those users. Most people aren’t as technical as anyone reading this, and they need someone to look out for them. The answer is not expecting everyone to become a technical expert, but rather people using devices as appliances / tools, with those devices making it very difficult to shoot themselves in the foot.

The same empty feel-good statement about “freedom” could be made about safe programming languages… which remove flexibility and the ability to shoot yourself in the foot, and drastically improve security by eliminating classes of vulnerabilities like memory corruption / type confusion / dynamic code execution bugs, which are the top sources of software vulnerabilities. The ability to do whatever you want is still there, but rather than a shotgun that’s always within reach, it’s locked away in a safe. That’s how root access works for CopperheadOS. It is available because users have ultimate control over their devices: as long as they have the password, they can unlock the device, access developer settings, enable OEM unlocking, unlock the bootloader with physical access and make whatever changes they want. That’s how CopperheadOS can run on those devices in the first place…