The Guardian Project (https://guardianproject.info)
People, Apps and Code You Can Trust

Orfox: Aspiring to bring Tor Browser to Android
https://guardianproject.info/2015/06/30/orfox-aspiring-to-bring-tor-browser-to-android/
30 Jun 2015

In the summer of 2014 (https://lists.mayfirst.org/pipermail/guardian-dev/2014-August/003717.html), we announced that the work of Amogh Pradeep (https://github.com/amoghbl1), our 2014 Google Summer of Code student, had proven we could build Firefox for Android with some of the settings and configurations from the Tor Browser desktop software. We called this app Orfox, in homage to Orbot and our current Orweb browser. This was a good first step, but we were building on Mozilla’s Firefox code repository and then retrofitting pieces from Tor Browser’s code, which, honestly, wasn’t the right way to do things.

This summer (2015!), with fantastic continued effort by Amogh, we have switched to building the Orfox mobile app directly from the Tor Browser code repository, successfully working through the mobile OS incompatibilities in the security hardening patches added by the Tor Browser team. We also had the additional task of reviewing the Android application code in Firefox that is not part of Tor Browser, in order to modify and patch it to work in line with the Tor Browser requirements and design document.

As of today, we have a stable alpha release ready for testing, and are rapidly moving towards a public beta in a few weeks. Our plan is to actively encourage users to move from Orweb to Orfox, stop active development of Orweb, and eventually remove it from the Google Play Store. If users really want to continue using a WebView-based solution and do not need all of the capabilities that Orfox/Tor Browser provides, they can use Lightning Browser (https://github.com/anthonycr/Lightning-Browser), a lightweight, open-source app that offers automatic Orbot (SOCKS) proxying on start-up.

Orfox is built from the same source code as Tor Browser (which is built upon Firefox), but with a few minor modifications to the privacy-enhancing features to make them compatible with Firefox for Android and the Android operating system. In as many ways as possible, we will adhere to the design goals of Tor Browser (https://www.torproject.org/projects/torbrowser/design/) by reusing as much of their actual code as possible, and extending their work into the additional Android components of Firefox for Android.

Orfox does not currently include the mobile versions of HTTPS Everywhere, NoScript and the Tor Browser Button, but these will be added shortly, now that we have discovered how to properly support automatic installation of extensions on Android (https://dev.guardianproject.info/issues/5360).

Orfox includes a “Request Mobile Site” option that allows you to change the user-agent from the standard Tor Browser agent to a modified Android-specific one: “Mozilla/5.0 (Android; Mobile; rv:31.0) Gecko/20100101 Firefox/31.0” (https://dev.guardianproject.info/issues/5404). This is useful for seeing the mobile version of a website, but it does reduce how well your browser blends in with other browsers.

Orfox currently allows users to bookmark sites, and may have additional data written to disk beyond what the core Gecko browser component does. We are still auditing all disk write code, and determining how to appropriately disable or harden it. (https://dev.guardianproject.info/issues/5437)

Orfox removes the Android permissions for Contacts, Camera, Microphone, Location and NFC (https://dev.guardianproject.info/issues/3822), since these capabilities are not in line with the spirit of Tor Browser.

Orweb is our current default browser for Orbot/Tor mobile users (https://guardianproject.info/apps/orweb), and it has been downloaded over 2 million times. It is VERY VERY SIMPLE, as it only has one tab, no bookmark capability, and an extremely minimal user experience.

Orweb is built upon the bundled WebView (Webkit) browser component inside of the Android operating system. This has proven to be problematic because we cannot control the version of that component, and cannot upgrade it directly when bugs are found. In addition, Google has made it very difficult to effectively control the network proxy settings of all aspects of this component, making it difficult to guarantee that traffic will not leak on all devices and OS versions.

Orweb also provides only a very limited subset of Tor Browser’s capabilities, primarily those related to reducing browser fingerprinting, minimizing disk writes, and cookie and history management. It tries to mimic some of the settings of Tor Browser, but doesn’t actually use any of the actual code written for Tor Browser security hardening.

Orweb does have one advantage: it is less than 2MB, while Orfox is in the 25 to 30MB range. This is primarily because Orweb relies on many components built into Android, so it does not need to bundle them. Orfox contains the full stack of code necessary for a complete browser, and is thus more secure and dependable, but also larger. The Mozilla Mobile team is working on reducing the size of their binaries, and the Orfox team is focused on this as well, since we are disabling some of the components that have contributed to browser bloat.

Building a trustworthy app store that respects privacy
https://guardianproject.info/2015/06/02/building-a-trustworthy-app-store-that-respects-privacy/
02 Jun 2015

One core piece of our approach is thinking about very high risk situations, like Ai Weiwei or Edward Snowden, then making the tools for operating under that pressure as easy to use as possible. That means that we might occasionally come across as a little paranoid. It is important to dive into the depths of what might be possible: that is an essential step in evaluating what the risks and defenses are, and how to prioritize them. Making usable software is not just making things easy, but rather making tools for real world situations that are as simple as possible.

We recently received some vindication of our paranoia: we have been resistant to putting all of our trust into the Google Play app store, despite many obvious advantages. Even though Google Play is probably the most secure of the big app stores, its security approach is rather thin, relying mainly on HTTPS with no signature verification on first install, and the Five Eyes partnership (NSA, GCHQ, etc.) noticed this and worked to exploit it.

The Android/Google Play security model is relatively simple, and that is mostly a good thing. There are two essential pieces: the signature on the APK file itself and the TLS connection to Google that provides the APK file. Once an app is installed, all APK files used to update an app must have a matching signing key. That provides a reasonably strong mechanism to defend against malware that wants to install over existing apps.

Unlike package systems like Debian’s, there is no path to verify the APK signing key. That means Google Play relies heavily on the TLS transport encryption to protect the APK file when an Android app is installed for the first time. The first time an app is installed, the signing key in that app’s APK file is blindly trusted (this is called “Trust On First Use” or TOFU). It turns out that TOFU has a solid track record for security in the real world. One key aspect of implementing a good TOFU system is to make the first use indistinguishable from any other use, so that it is difficult to target only first uses while ignoring repeat uses. Intercepting repeat uses is very likely to trigger a warning and alert the user that something is wrong.
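
To make the update rule concrete, here is a rough sketch, using plain Android SDK calls rather than Google Play’s actual implementation, of how an installer can enforce signing-key continuity by comparing the certificates of the installed package against those of a candidate update APK:

import android.content.Context;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.content.pm.Signature;
import java.util.Arrays;

public class SigningKeyContinuity {
    /**
     * Returns true if the candidate update APK at apkPath is signed with the
     * same certificate(s) as the currently installed package of the same name.
     */
    public static boolean updateMatchesInstalled(Context context, String packageName, String apkPath)
            throws PackageManager.NameNotFoundException {
        PackageManager pm = context.getPackageManager();
        // Signing certificates of the app as installed on the device.
        Signature[] installed =
                pm.getPackageInfo(packageName, PackageManager.GET_SIGNATURES).signatures;
        // Signing certificates parsed out of the downloaded update APK.
        // Note: on some Android versions getPackageArchiveInfo() does not fill
        // in signatures, so a robust implementation needs a fallback parser.
        PackageInfo update = pm.getPackageArchiveInfo(apkPath, PackageManager.GET_SIGNATURES);
        if (update == null || update.signatures == null) {
            return false; // unparsable APK: reject
        }
        return Arrays.equals(installed, update.signatures);
    }
}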

Now let’s put together the pieces based on what the Chinese government can do. A few TLS certificate authorities have been caught issuing fake certificates. A company affiliated with CNNIC was caught issuing certificates for Google domains. A trusted certificate authority can issue usable certificates for any domain, so any computer that trusts CNNIC would trust their fake certificates for Google. That lets the Chinese government transparently Man-in-the-Middle traffic to Google servers. China could then use the Great Firewall to generate targeted malware on the fly, seeing the user credentials that Google Play requires, seeing the list of apps that each user has installed, etc. Then when the targeted user goes to install a new app, the APK file is intercepted, malware is added, then it is re-signed and transparently sent off to the user.

This targeted malware can be designed to avoid the malware scanners in Google Play, Lookout, etc., since it would be a direct addition of code rather than delivery via an exploit: just adding Java classes to the APK. Alternatively, in combination with some of the signing exploits that have been discovered in Android, like Master Key, the Great Firewall could inject malware into the real APK itself without changing the signature.

Of course, if Google Play’s TLS connection included X.509 certificate pinning, then the above attack would not be possible, since the client would have a whitelist of certificate authorities that it trusts for play.google.com, and CNNIC would probably not be on that whitelist. This highlights the importance of pinning certificate authorities in apps that need good security over TLS or HTTPS. All TLS connections support pinning at the system level starting in Android 4.2. We are crazy enough to support down to Android 2.3, since there are lots of older Android devices in use, and even new devices still being sold with Android 2.3.3. That means we think about making apps self-contained in terms of security improvements like pinning.
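
As an illustration of the technique, here is a minimal sketch of pin checking in a custom X509TrustManager. The pinned value is a hypothetical placeholder; a real implementation would pin a set of keys, include backup pins, and handle key rotation:

import android.util.Base64;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.X509TrustManager;

public class PinningTrustManager implements X509TrustManager {
    // SHA-256 of the server's SubjectPublicKeyInfo; hypothetical placeholder value.
    private static final String PINNED_SPKI_SHA256 = "REPLACE_WITH_REAL_PIN";
    private final X509TrustManager systemTrustManager; // the platform's normal CA checks

    public PinningTrustManager(X509TrustManager systemTrustManager) {
        this.systemTrustManager = systemTrustManager;
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        // First run the standard CA validation...
        systemTrustManager.checkServerTrusted(chain, authType);
        // ...then require that the server's public key matches the embedded pin,
        // so a rogue CA-issued certificate is rejected even though it "validates".
        try {
            byte[] spki = chain[0].getPublicKey().getEncoded();
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(spki);
            String pin = Base64.encodeToString(digest, Base64.NO_WRAP);
            if (!PINNED_SPKI_SHA256.equals(pin)) {
                throw new CertificateException("Server public key does not match pin");
            }
        } catch (NoSuchAlgorithmException e) {
            throw new CertificateException(e);
        }
    }

    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        systemTrustManager.checkClientTrusted(chain, authType);
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return systemTrustManager.getAcceptedIssuers();
    }
}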

It gets worse

Many indigenous app stores, like Cafe Bazaar and Xiaomi’s MiMarket, lack basic protections like TLS, making targeted attacks trivial for governments, or even for anyone who gains control of a piece of the network path. These days that is actually easy to do by exploiting home routers, which are generally easy to exploit. A botnet of such routers could easily start looking for app installs in the network traffic, then add exploits accordingly. As long as the first install is easy to detect and the user easy to track, an attacker can transparently inject malware designed to be difficult for both malware scanners and people to detect.

The Alternative

F-Droid also has the key advantage of being designed from the beginning to avoid tracking users, and to use proven methods of delivering software, following the signed repository model of Debian, Ubuntu, etc., but served over a solid HTTPS channel for increased privacy and a backup layer of security. It is also possible to use privacy proxies like Tor or I2P via the proxy settings. No user credentials are needed, and it is all free software, so F-Droid users can even hide themselves from the server delivering the apps, as well as from any network observers. Since all APKs are delivered via signed metadata that is verified using a key built into the F-Droid client app, there is no risk of being served malware even if the HTTPS connection is completely and transparently broken.
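
For a feel of how that works, here is a simplified sketch (not F-Droid’s actual code) of verifying a signed repository index delivered as a jar file. Reading a signed entry to the end forces the JarFile machinery to verify it, after which the signer certificates can be compared against the repository key shipped inside the client:

import java.io.InputStream;
import java.security.cert.Certificate;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class SignedIndexCheck {
    /**
     * Verifies the signature on the repository index (e.g. index.xml inside
     * index.jar) and returns the signer certificates, which must then be
     * matched against the repository key embedded in the client.
     */
    public static Certificate[] verifySignedIndex(String indexJarPath) throws Exception {
        try (JarFile jar = new JarFile(indexJarPath, true)) { // true = verify signatures
            JarEntry entry = jar.getJarEntry("index.xml");
            if (entry == null) {
                throw new SecurityException("Index entry missing from repository jar");
            }
            byte[] buf = new byte[8192];
            try (InputStream in = jar.getInputStream(entry)) {
                // Reading the entry to the end is what triggers verification;
                // a tampered entry throws a SecurityException here.
                while (in.read(buf) != -1) { /* discard contents for this check */ }
            }
            return entry.getCertificates(); // only valid after a full read
        }
    }
}

Only after the index passes this check are the APK hashes it lists trusted and compared against each downloaded APK before install.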

As part of our Bazaar Project, we have been putting more and more effort into the F-Droid project, and working to make it much easier to use. All Guardian Project apps are available in F-Droid, as well as all the core apps that you might need, like Firefox, a Twitter client, K-9 email, etc. Tech journalist Dan Gillmor agrees: free software that respects privacy is not only for the über-geek anymore.

Hiding Apps in Plain Sight
https://guardianproject.info/2015/05/07/hiding-apps-in-plain-sight/
07 May 2015

Beyond just thinking about encryption of data over the wire, or at rest on your mobile device, we also consider physical access to your mobile device as one of the threats we need to defend against. Some of our apps, such as Courier, our secure news reader, include a Panic feature, enabling a user to quickly delete data or remove the app if they fear their device will be taken from them, whether by a friend, family member, criminal or an authority figure. Most recently, with our work on CameraV, our secure evidence camera app, we have implemented a few more features that help hide the app and its data, in order to block an unintended person from seeing the photos and videos captured by it.

First, it should be said that the app utilizes IOCipher, CacheWord and the CameraCipher library to store all media files it captures in an encrypted format, managed by a well-implemented service that handles key generation and life-cycle properly. This means that no photos and videos show up in the device’s built-in gallery or photos app, and no pixels are ever written in plain text to any storage space, internal or external. This helps a great deal in hiding that they exist, since physical inspection of a device often starts with looking through the default apps, like the messaging, gallery and contacts apps. ChatSecure also does this, by keeping your contacts, messages and media out of the shared, unencrypted default locations.
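
To show why this is easy for an app to adopt, here is a rough sketch of the IOCipher pattern. Its file classes are drop-in replacements for java.io, so writes land in an encrypted container instead of the normal filesystem; the container setup calls below are paraphrased from the 2015-era API and should be checked against the current IOCipher README:

import info.guardianproject.iocipher.File;
import info.guardianproject.iocipher.FileOutputStream;
import info.guardianproject.iocipher.VirtualFileSystem;

public class EncryptedMediaStore {
    public static void savePhoto(byte[] jpegBytes, String containerPath, String password)
            throws Exception {
        // Open and mount the encrypted container; in our apps, CacheWord manages
        // the secret that unlocks it. (Mount call paraphrased; check the README.)
        VirtualFileSystem vfs = new VirtualFileSystem(containerPath);
        vfs.mount(password);
        try {
            // Same API shape as java.io, but backed by the encrypted container,
            // so nothing appears in the shared gallery or on unencrypted storage.
            File photo = new File("/captured/photo-" + System.currentTimeMillis() + ".jpg");
            photo.getParentFile().mkdirs();
            FileOutputStream out = new FileOutputStream(photo);
            try {
                out.write(jpegBytes);
            } finally {
                out.close();
            }
        } finally {
            vfs.unmount();
        }
    }
}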

As of this week, we have added three new features to CameraV that all fall under what could be called “Stealth Mode” (though this has also been called “Boss Mode” since the days of MS-DOS, when games included a quick button to change to something that looked like a spreadsheet for when your boss walked by). We took our inspiration from a few other apps: Amnesty International’s Panic Button, which hides itself as a calculator; ChainFire’s SuperSU, which allows users to switch the app icon between a few options; Courier, which blocks users and other apps from taking screenshots of the news it displays; and Orbot, which actively removes itself from the “Recent Apps” listing provided by Android. All of these features combined dramatically reduce the visual footprint that an app leaves on the device, reducing the chance that someone will discover it, even if they are looking for it.

CameraV settings for stealth mode

CameraV masquerading as “CV Settings”

CameraV blocking screenshots in recent apps

CameraV (you can get beta access here and find the source here) incorporates all of these as options for the user to activate. You can switch the default icon and app name to a more generic settings icon and a “CV Settings” app name. We plan to enhance that feature to allow the user to define the icon and name, making the app able to act like a chameleon and blend in more completely. The app can be set to not allow screenshots to be taken of it, which also causes a black screen to show up in the recent apps list, stopping a casual inspection from identifying it as a photos-type app. It can also be set to not show up in the recent apps list at all, which is a more complete solution to that problem. The last piece, again taken from the aforementioned Panic Button app, is to change the default home screen of the app to something innocuous like a calculator when stealth mode is activated, so that even when the app is opened, it does not reveal its true nature. It is even possible to completely hide the app in the launcher until a system event, like a phone call to a specific number or connecting to a certain wifi network, makes the app reveal itself again.
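
A rough sketch of the Android mechanics behind a couple of these options (the component names are hypothetical placeholders, not CameraV’s actual classes): FLAG_SECURE blocks screenshots and blanks the recents thumbnail, and toggling an activity-alias declared in the manifest swaps the launcher icon and name:

import android.app.Activity;
import android.content.ComponentName;
import android.content.Context;
import android.content.pm.PackageManager;
import android.os.Bundle;
import android.view.WindowManager;

public class StealthModeActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Block screenshots of this activity; this also makes the
        // "Recent Apps" thumbnail render as a black screen.
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_SECURE);
    }

    /**
     * Swap the launcher entry to a decoy activity-alias declared in the
     * manifest (e.g. one with a generic "CV Settings" icon and label).
     * Component names here are hypothetical.
     */
    public static void enableDecoyIcon(Context context) {
        PackageManager pm = context.getPackageManager();
        ComponentName real = new ComponentName(context, "org.example.camerav.MainActivity");
        ComponentName decoy = new ComponentName(context, "org.example.camerav.DecoySettingsAlias");
        pm.setComponentEnabledSetting(decoy,
                PackageManager.COMPONENT_ENABLED_STATE_ENABLED,
                PackageManager.DONT_KILL_APP);
        pm.setComponentEnabledSetting(real,
                PackageManager.COMPONENT_ENABLED_STATE_DISABLED,
                PackageManager.DONT_KILL_APP);
    }
}

Staying out of the recents list entirely is a separate manifest attribute, android:excludeFromRecents, on the activity.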

These are just some of the initial ideas and techniques we have gathered and implemented. We plan to provide this set of capabilities in all of the apps we offer, and hope to spread them as standard features that any app containing sensitive data, or meant for use by people in high-risk situations, should offer. I would love to hear your thoughts on other techniques that could be used, see code snippets you might have to achieve those, or discuss how and when this whole concept may or may not be effective.

For now, stay safe out there, and that goes for your data and apps, too!

Getting Android tools into Debian
https://guardianproject.info/2015/04/30/getting-android-tools-into-debian/
30 Apr 2015

As part of Debian’s participation in Google Summer of Code, I’ll be working with two students, Kai-Chung Yan and Komal Sukhani, and another mentor from the Debian Java Team, Markus Koschany. We are going to be working on getting the Android SDK and tools into Debian, as part of the Debian Android Tools team, building upon the existing work already included from the Java and Android Tools teams. This project is in conjunction with the Java team, since there is overlap between Android and Java tools, like gradle, maven, etc. Since this work is in Debian, all of the Debian derivatives will automatically inherit it, including Ubuntu, Mint, Elementary, and many more.

The first question a lot of Android developers are probably asking is: why would we want to put the Android tools into Debian when there is already an official distribution from Google with its own update tools? It turns out there are many reasons, mostly centered around making things much easier to use, as well as addressing some key security concerns. For example:

automatic trustworthy downloads, no need to verify hash sums or think about HTTPS

eliminate need for insecure wrapper scripts, like ./gradlew

easy install and update channel that all Debian users already know

trivial install for specific tools, like adb, fastboot, etc.

setting up a Debian/Ubuntu/etc box for Android development is easier when everything is included

The most glaring issue from my point of view is the security of gradle. It will happily download and execute code without any kind of verification whatsoever. It inherits this terrible practice from maven, which has been shown to be an easy path to exploiting anyone using it. This is especially concerning considering that developers are more and more being directly targeted. At least it is now more common for gradle configs to use HTTPS, but it is still quite easy to mess up a config and force users to use HTTP instead. Fragile configs are really bad for security. Even if gradle-witness is used to pin the hashes of the jars used in the project, gradle-wrapper might still download insecure code and execute it immediately, giving attackers potential full user access to that machine. That is because gradle-wrapper will download whichever versions of gradle it needs, and gradle-witness cannot be used to pin the hash of the gradle files themselves. And the repositories that gradle uses only provide methods to protect against network-based attacks: if the server that holds the jars is exploited, the attacker can replace the jars and the sum files at the same time. There is a pull request open for gradle to allow pinning of the gradle executables themselves, which will help this situation.
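
To make the pinning idea concrete, here is a small sketch of the kind of check gradle-witness performs, written as plain Java: hash the downloaded artifact and refuse to continue unless it matches a value pinned in the build (the pinned hash below is a placeholder):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class DependencyPin {
    // Hypothetical pinned hash; a real build pins one hash per dependency.
    static final String PINNED_SHA256 =
            "0000000000000000000000000000000000000000000000000000000000000000";

    public static void verifyOrFail(String jarPath)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (InputStream in = new FileInputStream(jarPath)) {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        // A mismatch means the artifact is not the one the developer pinned,
        // whether due to a network attack or a compromised repository.
        if (!PINNED_SHA256.equals(hex.toString())) {
            throw new SecurityException("Dependency hash mismatch for " + jarPath);
        }
    }
}

Note that this only helps for the jars a build declares; it cannot protect the gradle distribution that gradle-wrapper itself downloads, which is exactly the gap described above.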

On a different note, many people who are not developers at all want to use tools like adb and fastboot to access their Android device, or even root it. Having them in Debian means they are trivial for people to install, vastly easier than trying to figure out how to download and install the Android SDK. What lots of people end up doing instead is downloading random binaries from insecure internet forums and using those. For many devices, it is already possible to use only tools in Debian to root the device. As we get more of the Android tools packaged and updated in Debian, that will become the norm.

Updates when you need them, built upon a stable base

One common complaint about packages in Debian is that they are old and outdated. It is part of the core mission of Debian/stable to provide an operating system that changes as little as possible. That mission is contrary to what most developers need from their SDKs, and sometimes even from their development tools. But stability is important for developers as well. For example, make, which is used to build native code with the Android NDK (ndk-build is a make script) and even Android itself, has been around a long time and is used in countless projects. That is a tool that almost every developer wants to be very stable.

For the packages that developers need to have completely up-to-date, like the Android SDK itself, there are many options for distribution. Ubuntu Personal Package Archives (PPA) have proven easy and useful for exactly this kind of thing, and Debian is working on adding support for PPAs. Official repositories for backports are another avenue for timely updates.

Help us figure this out

We want lots of feedback on how to do this right! A great example is how to best support the various versions of gradle. It seems to me that gradle is starting to stabilize, and it is no longer necessary to track very specific releases. For example, gradle v2.2.1 works well with projects that were set up with just about any 2.x version, and projects still using 1.x mostly seem to work with v1.12. If this is the case, then it fits into a common pattern with build tools in Debian:

GNU Compiler Collection is packaged as gcc-4.8, gcc-4.7, etc.

Apache Maven is packaged as maven and maven2

GNU automake is packaged as automake1.14, automake1.13, etc.

I’m currently thinking that the best solution for gradle is the maven approach, with the package called gradle (v2.3) being the most up-to-date, in conjunction with specific packages to support older versions, like gradle1 (v1.12). But maybe it makes sense to do something like gcc: a gradle meta-package that installs the currently best supported version, with all versions packaged under names that include the version, i.e. gradle1, gradle2, gradle3, etc.

Other issues that we will have to grapple with include:

How do we package the various NDK versions?

How do we best work with the upstream Android team?

Is packaging Android Studio feasible?

We also hope to provide an example that other packaging systems can learn from and build upon. GNU/Linux distros like Arch and Fedora are the obvious ones, but projects like Homebrew, MacPorts, and Cygwin could also use this work to include Android tools as packages in their systems. Indeed, some of the work already included in Debian was derived from Arch packages.

Phishing for developers
https://guardianproject.info/2015/02/24/phishing-for-developers/
24 Feb 2015

I recently received a very interesting phishing email directed at developers with apps in Google Play. One open question is how targeted it was: did anyone else get this?

It turns out that Google has recently been stepping up enforcement of certain terms, so it looks like some people are taking advantage of that. It is a pretty sophisticated or manually targeted phishing email, since they got the name of the app, email address, and project name all correct. The one detail that gives it away is that the From: address uses the fake domain, even though it would have been possible to send the email using the actual Google account in the From: field; that, however, would likely have triggered spam and malware detection algorithms. So they took a subtly different approach, using a real Google address in the Reply-To:. And they were clever enough to use the same sub-domain, gooogle.com.de, in the From: address as in the phishing link accounts.gooogle.com.de, mimicking Google’s pattern of subdomains. They also included other real Google links for support and as a “follow up” URL.

When I received this, I didn’t notice the clickable link in the email since I never view HTML email. I forwarded it on to our internal email list where others figured out it was fake. In the HTML version of the email, it has this link from the fake domain accounts.gooogle.com.de:

<p><b>Your application will be removed</b> if you do not sign in to the <a
href="http://accounts.gooogle.com.de/ServiceLogin?service=androiddeveloper&passive=1209600&continue=https://play.google.com/apps/publish/&followup=https://play.google.com/apps/publish/&type=3days&pkg=org.torproject.android">Developer
Console</a>

This attacker might have been targeting anyone who would fall for the trick, without really caring what kind of app it was. For any accounts that the attacker got access to, they would be able to change the description text, home page, email address, etc. transparently without raising any particular warning signs. The attacker could place a recommendation in the app descriptions to also install another app, and that app would be the attacker’s malware.

The attacker could not upload their own updates to an existing app, because Google Play checks uploaded APKs to make sure that the signing keys match the APKs that are already there. The attacker could create a whole new app in that developer’s account, and hope to gain installs since it would be related. Google Play has a standard view to show users apps by the same developer, for example.

Two-factor authentication and beyond

If a developer fell for this phishing attack, but had the forethought to set up Google 2-Step Verification, then even if the phisher got the username and password, they would be unable to log into that account, since they would not have access to the two-factor SMS or Google Authenticator code. All developer accounts on Google Play should be required to use Google 2-Step Verification. Set it up now, if you have not already!
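
As background on why this works: Google Authenticator codes are standard TOTP (RFC 6238), derived from a shared secret that never leaves the enrolled device and the current 30-second time window, so phished credentials alone are not enough to log in. A compact sketch of the algorithm:

import java.nio.ByteBuffer;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Totp {
    /** RFC 6238 TOTP: derive a 6-digit code from a shared secret and the time. */
    public static int code(byte[] sharedSecret, long unixSeconds) throws Exception {
        long counter = unixSeconds / 30; // 30-second time step
        byte[] msg = ByteBuffer.allocate(8).putLong(counter).array();
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(sharedSecret, "HmacSHA1"));
        byte[] h = mac.doFinal(msg);
        // Dynamic truncation from RFC 4226: pick 4 bytes based on the last nibble.
        int offset = h[h.length - 1] & 0x0f;
        int binary = ((h[offset] & 0x7f) << 24)
                | ((h[offset + 1] & 0xff) << 16)
                | ((h[offset + 2] & 0xff) << 8)
                | (h[offset + 3] & 0xff);
        return binary % 1000000; // 6 digits
    }
}

The server computes the same code from its copy of the secret and compares; the phisher in this story has neither the secret nor the SMS channel, so the stolen password would stall at the login step.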

We also need to consider the kinds of sophisticated attacks from large state actors that are leaking out to the public. Indeed, many of these attacks are also available for any government to purchase from companies like FinFisher. And it is only a matter of time before these techniques become widespread and easier, following the rule that “attacks never get worse; they only get better”. This phishing website could also contain malicious JavaScript that installs malware to log all keystrokes in search of passwords, and to search for known secret caches like Java keystores holding Android signing keys, as well as browser cookies that allow the user to skip two-factor authentication, like the cookie from Google’s 2-Step Verification.

One takeaway here: developers should never keep or use their APK signing keys on a machine that they also use to read email and browse the web.

Full source of the email

Here is the full source of the original email that I received, for those who might be interested in digging deeper. Another detail you can see there is that the email was not sent using Google infrastructure at all.

Return-Path: <noreply-developer-googleplay@gooogle.com.de>
X-Spam-Checker-Version: SpamAssassin 3.3.2 (2011-06-06) on
rodolpho.mayfirst.org
X-Spam-Level: *
X-Spam-Status: No, score=1.3 required=5.0 tests=HTML_MESSAGE,RDNS_NONE
autolearn=disabled version=3.3.2
X-Original-To: support@guardianproject.info
Delivered-To: gphans@rodolpho.mayfirst.org
Received: from rodolpho.mayfirst.org (localhost [127.0.0.1])
by rodolpho.mayfirst.org (Postfix) with ESMTP id 4CFCD5E3D
for <support@guardianproject.info>; Fri, 20 Feb 2015 04:30:50 -0500 (EST)
X-Greylist: delayed 543 seconds by postgrey-1.34 at rodolpho; Fri, 20 Feb 2015
04:30:49 EST
Received: from astra1695.startdedicated.com (unknown [85.25.194.40])
by rodolpho.mayfirst.org (Postfix) with ESMTP id D74C83CD84
for <support@guardianproject.info>; Fri, 20 Feb 2015 04:30:48 -0500 (EST)
Received: from gooogle.com.de (astra1695 [85.25.194.40])
by astra1695.startdedicated.com (Postfix) with ESMTPA id 209D57C0918
for <support@guardianproject.info>; Fri, 20 Feb 2015 10:21:32 +0100 (CET)
Date: Fri, 20 Feb 2015 09:21:32 +0000
To: The Tor Project <support@guardianproject.info>
From: Google Play Developer Support <noreply-developer-googleplay@gooogle.com.de>
Reply-To: Google Play Developer Support <noreply-developer-googleplay@google.com>
Subject: 7-Day Notification of Google Play Developer Term Violation
Message-ID: <7f72540087c81ffe2ead560425d0d477@gooogle.com.de>
X-Priority: 3
X-Mailer: PHPMailer 5.2.9 (https://github.com/PHPMailer/PHPMailer/)
MIME-Version: 1.0
Content-Type: multipart/alternative;
boundary="b1_7f72540087c81ffe2ead560425d0d477"
Content-Transfer-Encoding: 8bit
X-Virus-Scanned: ClamAV using ClamSMTP
--b1_7f72540087c81ffe2ead560425d0d477
Content-Type: text/plain; charset=us-ascii
Hello Google Play Developer,
This is a notification that your application, Orbot: Proxy with Tor, with
package ID org.torproject.android, is currently in violation of our developer
terms.
REASON FOR WARNING: Violation of the spam provisions of the Content Policy.
Please refer to the spam policy help article for more information.
Do not use irrelevant, misleading, or excessive keywords in apps descriptions,
titles, or metadata.
Please refer to the keyword spam policy help article for more information.
Your application will be removed if you do not sign in to the Developer
Console and make modifications to your application's description to bring it
into compliance within 7 days of the issuance of this notification.If you have
additional applications in your catalog, please also review them for
compliance. Note that any remaining applications found to be in violation will
be removed from the Google Play Store.
Please also consult the Policy and Best Practices and the Developer
Distribution Agreement as you bring your applications into compliance. You can
also review this Google Play Help Center article for more information on this
warning.
All violations are tracked. Serious or repeated violations of any nature will
result in the termination of your developer account, and investigation and
possible termination of related Google accounts.
Regards,
Google Play Team
1600 Amphitheatre Parkway
Mountain View, CA 94043
--b1_7f72540087c81ffe2ead560425d0d477
Content-Type: text/html; charset=us-ascii
<p>Hello Google Play Developer,</p>
<p>This is a notification that your application, <b>Orbot: Proxy with Tor</b>,
with package ID <b>org.torproject.android</b>, is currently in violation of
our developer terms.<br />
<b>REASON FOR WARNING</b>: Violation of the spam provisions of the Content
Policy. Please refer to the spam policy help article for more information.</p>
<p>Do not use irrelevant, misleading, or excessive keywords in apps
descriptions, titles, or metadata.<br />
Please refer to the keyword spam policy help article for more information.</p>
<p><b>Your application will be removed</b> if you do not sign in to the <a
href="http://accounts.gooogle.com.de/ServiceLogin?service=androiddeveloper&passive=1209600&continue=https://play.google.com/apps/publish/&followup=https://play.google.com/apps/publish/&type=3days&pkg=org.torproject.android">Developer
Console</a> and make modifications to your application's description to
bring it into compliance within <b>7 days</b> of the issuance of this
notification.<br>If you have additional applications in your catalog, please
also review them for compliance. Note that any remaining applications found to
be in violation will be removed from the Google Play Store.</p>
<p>Please also consult the <a
href="https://support.google.com/googleplay/android-developer/#topic=2364761">Policy
and Best Practices</a> and the <a
href="https://play.google.com/about/developer-distribution-agreement.html">Developer
Distribution Agreement</a> as you bring your applications into compliance. You
can also review this Google Play Help Center article for more information on
this warning.<br />
All violations are tracked. Serious or repeated violations of any nature will
result in the termination of your developer account, and investigation and
possible termination of related Google accounts.</p>
<p>Regards,<br>
Google Play Team<br>
1600 Amphitheatre Parkway<br>
Mountain View, CA 94043</p>
--b1_7f72540087c81ffe2ead560425d0d477--

Complete, reproducible app distribution achieved!
https://guardianproject.info/2015/02/11/complete-reproducible-app-distribution-achieved/
11 Feb 2015

With F-Droid, we have been working towards a complete app distribution channel that is able to reproducibly build each Android app from source. While this may sound like a mundane detail, it provides lots of tangible benefits. First, it means that anyone can verify that the app that they are using is 100% built from the source code, with nothing else added. That verifies that the app is indeed 100% free, open source software.

It also verifies that there have not been any malicious bits of code added into the app during the build process. As has been demonstrated in the 31c3 Reproducible Builds talk, just flipping a single bit is enough to create a usable exploit in an app.

The F-Droid project is leading the way with its system for publishing verified builds. We now have our first full example, building upon our previous work in making Lil’ Debi build reproducibly. We started with our simple little utility app Checkey, since it has few moving parts (first get one working, then the rest).

When you download Checkey from f-droid.org, you will get an APK that was built by f-droid.org, yet signed with the official Guardian Project offline signing key. No, we did not give them a copy of our key. Instead, the fdroid publish process now looks for a Binaries: tag in the build recipe. If it sees that, it downloads the official APK, builds the app from source, then checks that the two match, using a simple diff of the APK contents and checking that the signature from the official APK also validates against the APK that f-droid.org built.
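
Here is a rough sketch of that comparison step (not the actual fdroid publish code, which is written in Python): walk both APKs as zip archives and require every entry outside of the signature files to match byte for byte:

import java.io.InputStream;
import java.security.MessageDigest;
import java.util.Enumeration;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ApkDiff {
    static byte[] digestOf(ZipFile zip, ZipEntry entry) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        try (InputStream in = zip.getInputStream(entry)) {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                md.update(buf, 0, n);
            }
        }
        return md.digest();
    }

    /** True if both APKs have identical contents, ignoring META-INF/ signature files. */
    public static boolean contentsMatch(String officialApk, String rebuiltApk) throws Exception {
        try (ZipFile a = new ZipFile(officialApk); ZipFile b = new ZipFile(rebuiltApk)) {
            for (Enumeration<? extends ZipEntry> e = a.entries(); e.hasMoreElements(); ) {
                ZipEntry entry = e.nextElement();
                if (entry.getName().startsWith("META-INF/")) {
                    continue; // the signature block itself differs by design
                }
                ZipEntry other = b.getEntry(entry.getName());
                if (other == null || !MessageDigest.isEqual(digestOf(a, entry), digestOf(b, other))) {
                    return false;
                }
            }
            return true; // a full check would also reject extra entries in the rebuild
        }
    }
}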

Now that we have our little Checkey verifying, we can work towards getting all of our apps verified in the same way, eliminating a whole class of exploits that we have to worry about. You can follow the progress of this work on the F-Droid wiki Reproducible Builds page, and learn about a future application of it on the Verification Server page.

Experimental app to improve privacy in location sharing
https://guardianproject.info/2015/01/29/experimental-app-to-improve-privacy-in-location-sharing/
29 Jan 2015

As part of the T2 Panic effort, I’ve recently been diving deep into the issues of sharing location. It is unfortunately looking really bad, with many services, including Google, frequently sharing location as plain text over the network. I’ve started to write up some of the issues on this blog.

As part of this, I’ve put together an experimental Android app that aims to act as a privacy filter for all the ways of sharing location. Mostly, that means it accepts all sorts of URLs from location services, tries to parse the location from the URL, then rewrites it into a geo: URI, which is the standard way to share location in Android (and hopefully soon everywhere else). As of ChatSecure v14.1.0, these geo: URIs are also clickable.
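
The core of that filter can be sketched in a few lines of Java (a simplification of the idea, not the app’s actual parser, and the regex covers only a couple of URL styles):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LocationRewriter {
    // Matches "q=LAT,LON" or "ll=LAT,LON" query parameters used by several
    // map services; a real parser needs a pattern per supported service.
    private static final Pattern LAT_LON =
            Pattern.compile("[?&](?:q|ll)=(-?\\d{1,3}\\.\\d+),(-?\\d{1,3}\\.\\d+)");

    /** Returns a geo: URI if a coordinate pair can be parsed, otherwise null. */
    public static String toGeoUri(String mapUrl) {
        Matcher m = LAT_LON.matcher(mapUrl);
        if (m.find()) {
            return "geo:" + m.group(1) + "," + m.group(2);
        }
        return null; // shortlinks like goo.gl must be resolved online first
    }
}

// Example: toGeoUri("http://maps.google.com/?q=28.118860,98.008069&hl=en")
//          returns "geo:28.118860,98.008069"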

Many URLs are not parsable, like http://goo.gl/maps/Cji0V. LocationPrivacy then goes online to try to fetch the location. This should happen over Tor, but it does not yet. You have been warned! Otherwise, it changes the URL to HTTPS on services that support it.

Please do not rely on this app for strong privacy; it is still very much a new, beta app.

First working test of IOCipher for Obj-C
https://guardianproject.info/2015/01/26/first-working-test-of-iocipher-for-obj-c/
26 Jan 2015

Every so often, we revisit our core libraries in the process of improving our existing apps and creating new ones. IOCipher has become a standard part of our apps, since it provides a really easy way to include encrypted file storage in Android apps. And we are now working on spreading it to iOS as well, headed up by Chris Ballinger, with the first preliminary tests of IOCipher for Obj-C. Testing and contributions are most welcome! Find us in our chat room or mailing list for questions, or just post a comment below! Since the iOS version is based on the exact same core library, libsqlfs, the container files they produce will also be fully compatible with each other.

Now that iOS 8 has full disk encryption by default and a host of other security improvements, you might be wondering why you would bother with app-specific encryption. The problem with full disk encryption is that the disk is only locked when your iPhone is fully turned off. Using IOCipher adds protection for sensitive data that helps in a few different scenarios.

First, full disk encryption does not protect the data at all if malware is able to get root on the device: that malware will be free to read all files on the device. Second, for people who have not set up a strong passphrase on their iOS device, app-specific encrypted storage makes it harder to access that app’s data on devices with no passcode set, especially if any additional passphrase is stored in the keychain and disallowed from backup, or is just stored in your own memory.

Third is added protection from forensic acquisition systems, which often work using root exploits in order to read the entire filesystem without unlocking the screen[1][2][3]. By having an app-specific encrypted file container that is not mounted like a filesystem, even root cannot directly access the files in the container. Even root needs to get the key in order to unlock the IOCipher container, whether it is in use or not, and getting that key means either a key logger, which means planning ahead, or reading the key from memory while the container is unlocked, which is a more elaborate and targeted attack than full disk acquisition after rooting.

Now consider that there is a large market for 0days, i.e. unpublished exploits, with companies like VUPEN, FinFisher, and Hacking Team making it easy to purchase them, even providing guarantees that one of their exploits will work within 30 days. It seems quite likely that customers of such companies have access to secret root exploits even for iOS 8. While there are ethical and lawful reasons to use software like this, many governments are also using them for illegal and unethical things. Since we believe that everyone has a right to privacy, to speak freely, and to peaceably protest, it is important to provide protection to people who are unfairly targeted.

There is also another key advantage of the IOCipher approach when it comes to mobile devices. IOCipher is ultimately based on SQLite transactions in SQLCipher, which means that it does not require being mounted in the normal sense: there is no open state once a transaction is complete. Each read or write operation is a self-contained SQLite transaction, so if the file system is forcibly quit, SQLite’s transactions prevent the whole file system from being corrupted. This is important in mobile operating systems like Android and iOS, since any app or process can be killed at any moment without warning. That means the worst that can happen to an IOCipher volume is that a single write does not get written; the whole file system will not be corrupted if the process is killed.

Coming Soon

When IOCipher is used in conjunction with our CacheWord library, it is possible for an app to provide protection even against the $5 wrench attack. CacheWord generates a strong passphrase and manages feeding it to IOCipher and SQLCipher, and the user provides their own password for encrypting that strong passphrase. The CacheWord file is tiny and can be rapidly deleted. Once it is gone, the actual passphrase that unlocks the IOCipher encryption is gone; the user’s passphrase will not unlock IOCipher directly. This is something we are working to add in all of our apps, and to also hook up to panic button triggers. We would be quite happy to see you beat us to it by adding this feature to your app!

IOCipher with a hardware security module (HSM), aka smartcard, would be really nice, since it would provide some measure of added protection without the user setting an app-specific passphrase. HSMs provide write-only private key storage locked by a PIN code, so even if someone was able to get the encrypted file and the PIN code, they would not be able to retrieve the key that unlocks the encrypted file. The only way to unlock the file would be with the physical device itself, or by finding a key backup, if one existed. This is possible now using an external microSD HSM.

Sharing your location privately
https://guardianproject.info/2015/01/23/sharing-your-location-privately/
23 Jan 2015

Facebook location sharing embeds the location in every single message, providing a detailed log to the recipient, Facebook, and anyone Facebook shares that data with

One handy feature that many smartphones give us is the ability to easily share our exact position with other people. You can see this feature in a lot of apps. Google Maps lets you click “Share” and send a URL via any method you have available. In Facebook Messenger, you can click a button and the people on the other side of the chat will receive a little embedded map showing the received location. Of course, the question we always ask is: how can we do this in a privacy-preserving way? And the follow-up question: what kinds of information are apps leaking, storing, using, etc.? Location is particularly valuable and sensitive metadata, especially when there is a lot of it, because it can be used to derive so much information about a person. Most people do not want to publicly post their phone number or home address on the internet, yet are unwittingly giving away far more detailed information by using the various location-based services that are available. There are many specific locations that people do not want to publicize having visited: a cancer specialist, an abortion clinic, a criminal court, a mistress’ house; nor do they want to leak any location information to an abusive spouse. For a great illustration of the power of location metadata, you can watch an animation of German politician Malte Spitz’s life, based on the telephone metadata that his telecom had stored.

Google, Facebook, and so many others make money by collecting as much data on their users as possible, then selling access to that data to their customers, so both companies have incentives to make sure that you will always share your location information with them as well. The question is: are they treating this information as carefully as you would? In China, the indigenous services are much more popular than most foreign alternatives. The Chinese companies are good at making products that are popular with Chinese users, and since they collaborate with government censorship and tracking, it is easier for them to do business in China. This combination often means that Chinese companies put security and privacy at a very low priority, even though they could comply with Chinese law while improving their security. A good example of this is the fact that none of the major map providers in China (Baidu, Amap, or QQ) provide even an optional HTTPS interface. They only have unencrypted communications, which gives lots of people easy access to snoop, including anyone on the same wifi network as you.

The tools for tracking people via location data are getting better, cheaper, and more available. One funny example is I Know Where Your Cat Lives, which shows the locations of cat pictures found on the public internet via the geo location included in the EXIF image data.

I know where your cat lives!

Location and Panic

One use that we are particularly interested in is sending location to trusted contacts when a panic button is pressed. When thinking about panic button features, privacy must be a central concern. When someone triggers their panic button, it is clearly a sensitive situation, and leaking more location information could exacerbate it. Since sending location is a useful and popular feature, it is important to consider the whole picture of where that location information might go. To start with, the panic message needs to be sent using a method that will reliably reach its intended destination. Unfortunately, that often means using insecure communications like SMS, or an app that is fully tapped by the same government that is detaining the user, like WeChat. Part of this T2 Panic research and development effort is focused on how to make a complete, secure panic solution, so we will also focus on making ChatSecure and other secure communications an available channel for sending panic messages.

The next step is to break down the entire path along which that location information might be intercepted. The first place is on the sending device itself. With most communications apps, the panic message will be stored with the sent messages, and that can be recovered by whoever is detaining the user. Even if the device is encrypted, it is very likely the user can be compelled to unlock the device, so the panic message should be designed with that in mind.

So if we consider a fully anonymous method of communication, like ChatSecure’s “Secret Identity”, then protecting the location information matters even if all of the messages and their recipients are recovered from the sending device. The full “Secret Identity” procedure means creating one account per person you want to chat with, and only using that single account to communicate with that person; it has been outlined by many people, including Laura Poitras when describing how she communicates with Edward Snowden. In this case, even if someone recovers the recipient address, all they will have is an anonymously created account with no links to other accounts. The location URL then becomes a way to deanonymize the recipient. First, if the URL takes the recipient to an unencrypted connection, then it is easy to track. Even with an encrypted connection, if the server providing the map service is providing information to the government, then the encrypted connection will not help. Making the connection over Tor helps, since the map service will not be able to see the IP address of the device where the user clicked the URL. Now consider a location URL using Google Maps, or any similar service where users frequently log in: if the recipient viewed the location URL in a browser where they were also logged in with their normal Google account, that login information would deanonymize them.

User Stories

This can perhaps be better illustrated using some quick user stories:

A journalist and a source set up Secret Identities in ChatSecure devoted to each other when they met in person. Each has a panic button set up to contact the other in case of emergency. The journalist uses https://openstreetmap.org to generate a shortlink that points to the chosen meeting location, then sends it to the source using the Secret Identity: http://osm.org/go/0ju_SMlBn. The source clicks the link and chooses to open it in Firefox. The website is then loaded over an unencrypted, direct connection, which is easily observed. Even though the recipient has HTTPS Everywhere set up in his browser to force HTTPS for openstreetmap.org, the osm.org shortlink does not currently have working HTTPS, so it remains an HTTP link. That shortlink is now a unique ID that ties the journalist to the source’s real IP address. If the source was using a cellular internet connection, this will also link the IP address to the device’s unique IMEI, and the IMEI is quite easy to link to a real identity.

A circle of activists all set each other up with a panic button app on burner Android phones. They only use these burner phones to communicate with each other, and they prepare in advance to discard all the phones if someone triggers the panic. One activist gets detained by the secret police and triggers the panic. The secret police get the panic message and all the other phone numbers from the detainee’s phone, but the activists are no longer using those phones, so they cannot be tracked by them. The activists manually copy the Google Maps shortlink https://goo.gl/maps/Cji0V to their computer to find out where the detainee is. They type the map link into Internet Explorer, making sure to type HTTPS, and then confirm that the webpage is still using an HTTPS link. What they did not see is that the shortlink first redirected to an HTTP link, http://maps.google.com/?q=28.118860,98.008069&hl=en&gl=us, which leaked the location in plain text. Since this URL describes a very specific point, the secret police use it as a data point to search for the IP addresses of all devices that have accessed that URL. Those IP addresses divulge the locations of all the activists who viewed the map URL, and provide the secret police a method for tracking them all.

I did not cover other, more common use cases here because there are so many leaks that the protections presented are moot. All is not lost; there is still a lot that you can do to improve things. First off, we recommend using map apps that can work fully offline. For Android, OsmAnd is the best one out there; it uses OpenStreetMap data, which can be freely downloaded. It is also important to encourage developers to improve the privacy of their apps. Since we are software developers, we file bug reports and make pull requests to nag location-related projects to improve their security.

We will be following up with further posts on this topic with more detail, including research into what is possible to derive from location data, technical details of issues, and possible solutions and work that can be done to improve things.

2015 is the Year of Bore-Sec
https://guardianproject.info/2015/01/02/2015-is-the-year-of-bore-sec/
02 Jan 2015

Over the last few months, the Guardian Project team has been thinking about how to approach the next five years of our work. An idea surfaced through some discussions: security so easy and seamless that it is boring. This led us to look for inspiration in important inventions and innovations of the past that provide safety and security to all on a day-to-day basis, without the users of these technologies having to think much about them. This is no longer about James Bond super-spy technologies; it is about having as little impact as possible on your day-to-day use of mobile technology while still providing the maximum protection for your data and communications.

Here then, as inspiration and our guiding lights, is our list of safety inventions of the past that we aim to be “Boring Like….”

Have some ideas of other great “Bore-Sec” tech? Add them in the comments!

With our work on critical security and privacy enhancements for mobile devices and apps, we aim to bore. The best security is the kind you don’t have to worry about, until you need it (and then you won’t know how you ever lived without it…)