Android Explorations, Nikolay Elenkov<br /><br /><b>Dissecting Lollipop's Smart Lock</b> (2014-12-25)<br /><br />Android 5.0 (Lollipop) has been out for a while now, and most of its new features have been introduced, benchmarked, or complained about extensively. The new release also includes a number of <a href="http://source.android.com/devices/tech/security/enhancements/enhancements50.html">security enhancements</a>, of which disk encryption has probably gotten the most media attention. Smart Lock (originally announced at Google I/O 2014), which allows bypassing the device lockscreen when certain environmental conditions are met, is probably the most user-visible new security feature. As such, it has also been discussed and <a href="http://www.androidpolice.com/tags/smart-lock/">blogged about</a> extensively. However, because Smart Lock is a proprietary feature incorporated in Google Play Services, not many details about its implementation or security level are available. This post will look into the Android framework extensions that Smart Lock is built upon, show how to use them to create your own unlock method, and finally briefly discuss its Play Services implementation.<br /><h2>Trust agents</h2><div>Smart Lock is built on a new Lollipop feature called <i>trust agents</i>. To quote from the framework documentation, a trust agent is a 'service that notifies the system about whether it believes the environment of the device to be trusted.' The exact meaning of 'trusted' is up to the trust agent to define. When a trust agent believes it can trust the current environment, it notifies the system via a callback, and the system decides how to relax the security configuration of the device. 
In the current Android incarnation, being in a trusted environment grants the user the ability to bypass the lockscreen.<br /><br />Trust is granted per user, so each user's trust agents can be configured differently. Additionally, trust can be granted for a certain period of time, and the system automatically reverts to an untrusted state when that period expires. Device administrators can set the maximum trust period trust agents are allowed to set, or disable trust agents altogether.</div><h2>Trust agent API</h2><div>Trust agents are Android services which extend the <code>TrustAgentService</code> base class (not available in the public SDK). The base class provides methods for enabling the trust agent (<code>setManagingTrust()</code>), granting and revoking trust (<code>grantTrust()</code>/<code>revokeTrust()</code>), as well as a number of callback methods, as shown below.</div><br /><div><pre>public class TrustAgentService extends Service {<br /><br />    public void onUnlockAttempt(boolean successful) {<br />    }<br /><br />    public void onTrustTimeout() {<br />    }<br /><br />    private void onError(String msg) {<br />        Slog.v(TAG, "Remote exception while " + msg);<br />    }<br /><br />    public boolean onSetTrustAgentFeaturesEnabled(Bundle options) {<br />        return false;<br />    }<br /><br />    public final void grantTrust(final CharSequence message,<br />            final long durationMs, final boolean initiatedByUser) {<br />        //...<br />    }<br /><br />    public final void revokeTrust() {<br />        //...<br />    }<br /><br />    public final void setManagingTrust(boolean managingTrust) {<br />        //...<br />    }<br /><br />    @Override<br />    public final IBinder onBind(Intent intent) {<br />        return new TrustAgentServiceWrapper();<br />    }<br /><br />    //...<br />}<br /></pre><br /></div><div>To be picked up by the system, a trust agent needs to be declared in <code>AndroidManifest.xml</code> with an intent filter for the <code>android.service.trust.TrustAgentService</code> action and 
require the <code>BIND_TRUST_AGENT</code> permission, as shown below. This ensures that only the system can bind to the trust agent, as the <code>BIND_TRUST_AGENT</code> permission requires the platform signature. A Binder API, which allows calling the agent from other processes, is provided by the <code>TrustAgentService</code> base class.</div><br /><div><pre>&lt;manifest ... &gt;<br /><br />    &lt;uses-permission android:name="android.permission.CONTROL_KEYGUARD" /&gt;<br />    &lt;uses-permission android:name="android.permission.PROVIDE_TRUST_AGENT" /&gt;<br /><br />    &lt;application ...&gt;<br />        &lt;service android:exported="true"<br />                android:label="@string/app_name"<br />                android:name=".GhettoTrustAgent"<br />                android:permission="android.permission.BIND_TRUST_AGENT"&gt;<br />            &lt;intent-filter&gt;<br />                &lt;action android:name="android.service.trust.TrustAgentService"/&gt;<br />                &lt;category android:name="android.intent.category.DEFAULT"/&gt;<br />            &lt;/intent-filter&gt;<br /><br />            &lt;meta-data android:name="android.service.trust.trustagent"<br />                android:resource="@xml/ghetto_trust_agent"/&gt;<br />        &lt;/service&gt;<br />        ...<br />    &lt;/application&gt;<br />&lt;/manifest&gt;<br /></pre><br /></div><div>The system Settings app scans app packages that match the intent filter shown above, checks if they hold the <code>PROVIDE_TRUST_AGENT</code> signature permission (defined in the <code>android</code> package) and shows them in the Trust agents screen (Settings-&gt;Security-&gt;Trust agents) if all required conditions are met. Currently only a single trust agent is supported, so only the first matched package is shown. Additionally, if the manifest declaration contains a &lt;meta-data&gt; tag that points to an XML resource that defines a settings activity (see below for an example), a menu entry that opens the settings activity is injected into the Security settings screen. 
</div><br /><div><pre>&lt;trust-agent xmlns:android="http://schemas.android.com/apk/res/android"<br />    android:title="Ghetto Unlock"<br />    android:summary="A bunch of unlock triggers"<br />    android:settingsActivity=".GhettoTrustAgentSettings" /&gt;<br /></pre><br /></div><br /><div>Here's what the Trusted agents screen might look like when a system app that declares a trusted agent is installed.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-CC8tI01CaY4/VJo6xwAE3eI/AAAAAAAAYgQ/Qaxz1XX_LM4/s1600/trust-agents.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-CC8tI01CaY4/VJo6xwAE3eI/AAAAAAAAYgQ/Qaxz1XX_LM4/s1600/trust-agents.png" height="640" width="384" /></a></div></div><br /><div>Trust agents are inactive by default (unless part of the system image), and are activated when the user toggles the switch in the screen above. Active agents are ultimately managed by the system <code>TrustManagerService</code>, which also keeps a log of trust-related events. You can get the current trust state and dump the event log using the <code>dumpsys</code> command as shown below.</div><br /><div><pre>$ adb shell dumpsys trust<br />Trust manager state:<br />  User "Owner" (id=0, flags=0x13) (current): trusted=0, trustManaged=1<br />    Enabled agents:<br />      org.nick.ghettounlock/.GhettoTrustAgent<br />        bound=1, connected=1, managingTrust=1, trusted=0<br />  Events:<br />    #0 12-24 10:42:01.915 TrustTimeout: agent=GhettoTrustAgent<br />    #1 12-24 10:42:01.915 TrustTimeout: agent=GhettoTrustAgent<br />    #2 12-24 10:42:01.915 TrustTimeout: agent=GhettoTrustAgent<br />    ...<br /></pre><br /></div><h2>Granting trust</h2><div>Once a trust agent is installed, a trust grant can be triggered by any observable environment event, or directly by the user (for example, via an authentication challenge). 
An often <a href="https://www.reddit.com/r/Android/comments/2pyyk5/smartlock_request_wifi_please_googs/">requested</a>, but not particularly secure (unless using a WPA2 profile that authenticates WiFi access points), unlock trigger is connecting to a 'home' WiFi AP. This feature can be easily implemented using a broadcast receiver that reacts to <code>android.net.wifi.STATE_CHANGE</code> (see <a href="https://github.com/nelenkov/ghetto-unlock">sample app</a>; based on the sample in AOSP). Once a 'trusted' SSID is detected, the receiver only needs to call the <code>grantTrust()</code> method of the trust agent service. This can be achieved in a number of ways, but if both the service and the receiver are in the same package, a straightforward way is to use a <code>LocalBroadcastManager</code> (part of the support library) to send a local broadcast, as shown below. </div><br /><div><pre>static void sendGrantTrust(Context context,<br />        String message,<br />        long durationMs,<br />        boolean initiatedByUser) {<br />    Intent intent = new Intent(ACTION_GRANT_TRUST);<br />    intent.putExtra(EXTRA_MESSAGE, message);<br />    intent.putExtra(EXTRA_DURATION, durationMs);<br />    intent.putExtra(EXTRA_INITIATED_BY_USER, initiatedByUser);<br />    LocalBroadcastManager.getInstance(context).sendBroadcast(intent);<br />}<br /><br />// in the receiver<br />@Override<br />public void onReceive(Context context, Intent intent) {<br />    if (WifiManager.NETWORK_STATE_CHANGED_ACTION.equals(intent.getAction())) {<br />        WifiInfo wifiInfo = (WifiInfo) intent<br />                .getParcelableExtra(WifiManager.EXTRA_WIFI_INFO);<br />        // ...<br />        if (secureSsid.equals(wifiInfo.getSSID())) {<br />            GhettoTrustAgent.sendGrantTrust(context, "GhettoTrustAgent::WiFi",<br />                    TRUST_DURATION_5MINS, false);<br />        }<br />    }<br />}<br /></pre><br /></div><br /><div>This will call the <code>TrustAgentServiceCallback</code> installed by the system lockscreen and effectively set a per-user trusted flag. 
If the flag is true, the lockscreen implementation allows the keyguard to be dismissed without authentication. Once the trust timeout expires, the user must enter their pattern, PIN or password in order to dismiss the keyguard. The current trust state is displayed at the bottom of the keyguard as a padlock icon: when unlocked, the current environment is trusted; when locked, explicit authentication is required. The user can also manually lock the device by pressing the padlock, even if an active trust agent currently has trust.</div><h2>NFC unlock</h2><div>As discussed in a <a href="http://nelenkov.blogspot.jp/2014/03/unlocking-android-using-otp.html">previous post</a>, implementing NFC unlock in previous Android versions was possible, but required some modifications to the system <code>NFCService</code>, because the NFC controller was not polled while the lockscreen was displayed. In order to make NFC unlock possible, Lollipop introduces several hooks into the <code>NFCService</code>, which allow NFC polling on the lockscreen. If a matching tag is discovered, a reference to a live <code>Tag</code> object is passed to interested parties. Let's look at how this is implemented in a bit more detail.<br /><br />The <code>NFCAdapter</code> class has a couple of new (hidden) methods that allow adding and removing an NFC <i>unlock handler</i> (<code>addNfcUnlockHandler()</code> and <code>removeNfcUnlockHandler()</code>, respectively). An NFC unlock handler is an implementation of the <code>NfcUnlockHandler</code> interface shown below.</div><br /><div><pre>interface NfcUnlockHandler {<br />    public boolean onUnlockAttempted(Tag tag);<br />}<br /></pre><br /></div><div>When registering an unlock handler you must specify not only the <code>NfcUnlockHandler</code> object, but also a list of NFC technologies that should be polled for at the lockscreen. 
Calling the <code>addNfcUnlockHandler()</code> method requires the <code>WRITE_SECURE_SETTINGS</code> signature permission.</div><br /><div>Multiple unlock handlers can be registered and are tried in turn until one of them returns <code>true</code> from <code>onUnlockAttempted()</code>. This terminates the NFC unlock sequence, but doesn't actually dismiss the keyguard. In order to unlock the device, an NFC unlock handler should work with a trust agent to grant trust. Judging from <code>NFCService</code>'s commit log, this appears to be a fairly recent development: initially, the Settings app included functionality to register trusted tags, which would automatically unlock the device (based on the tag's UID), but this functionality was removed in favour of trust agents. </div><br /><div>Unlock handlers can authenticate the scanned NFC tag in a variety of ways, depending on the tag's technology. For passive tags that contain fixed data, authentication typically relies either on the tag's unique ID, or on some shared secret written to the tag. For active tags that can execute code, it can be anything from an <a href="http://nelenkov.blogspot.com/2014/03/unlocking-android-using-otp.html">OTP</a> to full-blown multi-step mutual authentication. However, because NFC communication is not very fast, and most tags have limited processing power, a simple protocol with few roundtrips is preferable. A simple implementation that requires the tag to sign a random value with its RSA private key, and then verifies the signature using the corresponding public key, is included in the <a href="https://github.com/nelenkov/ghetto-unlock">sample application</a>. 
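The core of such a sign-a-random-challenge scheme can be sketched in plain Java as follows. Note that the class and method names here are illustrative only, not the sample application's actual API, and that on a real tag the signing step would run inside the tag's applet rather than on the device.

```java
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.SecureRandom;
import java.security.Signature;

public class TagChallengeResponse {

    // Device side: generate a fresh random challenge for each unlock attempt,
    // so a captured response cannot be replayed later.
    public static byte[] newChallenge() {
        byte[] challenge = new byte[32];
        new SecureRandom().nextBytes(challenge);
        return challenge;
    }

    // Tag side: sign the challenge with the tag's RSA private key.
    public static byte[] sign(PrivateKey priv, byte[] challenge)
            throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(priv);
        s.update(challenge);
        return s.sign();
    }

    // Device side: verify the response against the registered public key.
    public static boolean verify(PublicKey pub, byte[] challenge, byte[] response)
            throws GeneralSecurityException {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initVerify(pub);
        s.update(challenge);
        return s.verify(response);
    }

    public static void main(String[] args) throws Exception {
        KeyPair kp = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        byte[] challenge = newChallenge();
        byte[] response = sign(kp.getPrivate(), challenge);
        System.out.println(verify(kp.getPublic(), challenge, response)); // prints: true
    }
}
```

A single signature over a fresh random value keeps the exchange to one roundtrip, which matters given the limited speed of NFC links.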
For signature verification to work, the trust agent needs to be initialized with the tag's public key, which in this case is imported via the trust agent's settings activity shown below.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-P_pwIAh1CeQ/VJpbdfAYMFI/AAAAAAAAYgw/9aeexWILhvM/s1600/trusted-pub-key.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-P_pwIAh1CeQ/VJpbdfAYMFI/AAAAAAAAYgw/9aeexWILhvM/s1600/trusted-pub-key.png" height="640" width="384" /></a></div><h2>Smart Lock</h2></div><div>'Smart Lock' is just the marketing name for the <code>GoogleTrustAgent</code>, which is included in Google Play Services (the <code>com.google.android.gms</code> package), as can be seen from the <code>dumpsys</code> output below.</div><br /><div><pre>$ adb shell dumpsys trust<br />Trust manager state:<br />  User "Owner" (id=0, flags=0x13) (current): trusted=1, trustManaged=1<br />    Enabled agents:<br />      com.google.android.gms/.auth.trustagent.GoogleTrustAgent<br />        bound=1, connected=1, managingTrust=1, trusted=1<br />        message=""<br /></pre><br /></div><br /><div>This trust agent offers several trust triggers: trusted devices, trusted places and a trusted face. Trusted face is just a rebranding of the face unlock method found in previous versions. It uses the same proprietary image recognition technology, but is significantly more usable because, when enabled, the keyguard continuously scans for a matching face instead of requiring you to stay still while it takes and processes your picture. The security level provided also remains the same -- fairly low, as the trusted face setup screen warns. Trusted places is based on the <a href="https://developer.android.com/training/location/geofencing.html">geofencing technology</a>, which has been available in Google Play services for a while. 
Trusted places uses the 'Home' and 'Work' locations associated with your Google account to make setup easier, and also allows registering a custom place based on the current location or any coordinates selectable via Google Maps. As a helpful popup warns, accuracy cannot be guaranteed, and the trusted place range can be up to 100 meters. In practice, the device can remain unlocked for a while even when this distance is exceeded. <br /><br />Trusted devices supports two different types of devices at the time of this writing: Bluetooth and NFC. The Bluetooth option allows the Android device to remain unlocked while a paired Bluetooth device is in range. This feature relies on Bluetooth's built-in security mechanism, and as such its security depends on the paired device. Newer devices, such as Android Wear watches or the Pebble watch, support Secure Simple Pairing (Security Mode 4), which uses Elliptic Curve Diffie-Hellman (ECDH) in order to generate a shared link key. During the pairing process, these devices display a 6-digit number based on a hash of both devices' public keys in order to provide device authentication and protect against MiTM attacks (a feature called <i>numeric comparison</i>). However, older wearables (such as the Meta Watch), Bluetooth earphones, and others are also supported. These previous-generation devices only support Standard Pairing, which generates authentication keys based on the device's physical address and a 4-digit PIN, which is usually fixed and set to a well-known value such as '0000' or '1234'. Such devices can be easily impersonated.<br /><br />Google's Smart Lock implementation requires a persistent connection to a trusted device, and trust is revoked once this connection is broken. However, as the introductory screen (see below) warns, Bluetooth range is highly variable and may extend up to 100 meters. 
Thus, while the 'keep device unlocked while connected to trusted watch on wrist' use case makes a lot of sense, in practice the Android device may remain unlocked even when the trusted Bluetooth device (wearable, etc.) is in another room.</div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-N8PGx7yQS5Q/VJpoe1twGaI/AAAAAAAAYhA/tsW6_68y7KQ/s1600/trusted-device-intro.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-N8PGx7yQS5Q/VJpoe1twGaI/AAAAAAAAYhA/tsW6_68y7KQ/s640/trusted-device-intro.png" /></a></div><br /><div>As discussed earlier, an NFC trusted device can be quite flexible, and has the advantage that, unlike Bluetooth, proximity is well defined (typically not more than 10 centimeters). While Google's Smart Lock seems to support an active NFC device (internally referred to as the 'Precious tag'), no such device has been publicly announced yet. If the Precious is not found, Google's NFC-based trust agent falls back to UID-based authentication by saving the hash of the scanned tag's UID (tag registration screen shown below). For the popular NFC-A tags (most MIFARE variants) this <a href="http://www.mifare.net/files/6213/2453/8738/AN10927.pdf">UID</a> is 4 or 7 bytes long (10-byte UIDs are also theoretically supported). While using the UID for authentication is a fairly widespread practice, it was originally intended for anti-collision alone, not for authentication. 4-byte UIDs are not necessarily unique and may collide even on 'official' NXP tags. While the <a href="http://www.mifare.net/files/6213/2453/8738/AN10927.pdf">specification</a> requires 7-byte UIDs to be both unique (even across different manufacturers) and read-only, cards with a rewritable UID do exist, so cloning a MIFARE trusted tag is quite possible. Tags can also be emulated with a programmable device such as the <a href="http://www.proxmark.org/">Proxmark III</a>. 
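Conceptually, UID-based fallback authentication amounts to no more than hashing the scanned UID and comparing it with the hash saved at registration, as in the following sketch. The names here are hypothetical; Play Services' actual implementation is proprietary and may differ in details.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class UidAuth {

    // At registration time: store only a hash of the tag's UID.
    public static byte[] hashUid(byte[] uid) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance("SHA-256").digest(uid);
    }

    // On each scan: hash the presented UID and compare with the stored hash.
    public static boolean matches(byte[] registeredHash, byte[] scannedUid)
            throws NoSuchAlgorithmException {
        // MessageDigest.isEqual() performs a time-constant comparison.
        return MessageDigest.isEqual(registeredHash, hashUid(scannedUid));
    }

    public static void main(String[] args) throws Exception {
        byte[] uid = {0x04, 0x2A, 0x1B, 0x3C, 0x4D, 0x5E, 0x6F}; // 7-byte NFC-A UID
        byte[] registered = hashUid(uid);
        System.out.println(matches(registered, uid)); // prints: true
    }
}
```

Whatever the exact hash used, the check is only as strong as the UID itself: anything that can present the same UID passes it.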
Therefore, the security level provided by UID-based authentication is not that high.</div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-6Dgw9XustE4/VJpssw2t50I/AAAAAAAAYhM/5SrTRnVdZKg/s1600/trusted-nfc.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-6Dgw9XustE4/VJpssw2t50I/AAAAAAAAYhM/5SrTRnVdZKg/s640/trusted-nfc.png" /></a></div><h2>Summary</h2><div>Android 5.0 (Lollipop) introduces a new trust framework based on trust agents, which can notify the system when the device is in a trusted environment. As the system lockscreen now listens for trust events, it can change its behaviour based on the trust state of the current user. This makes it easy to augment or replace the traditional pattern/PIN/password user authentication methods by installing trust agents. Trust agent functionality is currently only available to system applications, and Lollipop can only support a single active trust agent. Google Play Services provides several trust triggers (trustlets) under the name 'Smart Lock' via its trust agent. While they can greatly improve device usability, none of the currently available Smart Lock methods are particularly precise or secure, so they should be used with care.</div><br /><br /><b>Android Security Internals is out</b> (2014-10-24)<br /><br />Some six months after the first early access chapters were <a href="http://nelenkov.blogspot.com/2014/04/android-security-internals.html">announced</a>, my book has now officially been <a href="http://www.nostarch.com/androidsecurity">released</a>. 
While the final ebook PDF has been available for a few weeks, you can now get all ebook formats (PDF, Mobi and ePub) directly from the publisher, <a href="http://www.nostarch.com/androidsecurity">No Starch Press</a>. Print books are also ready and should start shipping tomorrow (Oct 24th). You can use the code <i>UNDERTHEHOOD</i> when checking out for a <a href="http://www.nostarch.com/androidsecurity">30% discount</a> in the next few days. The book will also be available from <a href="http://shop.oreilly.com/product/9781593275815.do">O'Reilly</a>, <a href="http://www.amazon.com/gp/product/1593275811/">Amazon</a> and other retailers in the coming weeks.<br /><br />This book would not have been possible without the efforts of Bill Pollock and Alison Law from No Starch, who edited, refined and produced my raw writings. <a class="g-profile" href="https://plus.google.com/100226390734369553200" target="_blank">+Kenny Root</a> reviewed all chapters and caught some embarrassing mistakes; any that are left are mine alone. Jorrit “<a href="https://twitter.com/ChainfireXDA">Chainfire</a>” Jongma reviewed my coverage of SuperSU and Jon “<a href="https://twitter.com/TeamAndIRC">jcase</a>” Sawyer contributed the foreword. Once again, a big thanks to everyone involved!<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-8jI8NZT3Mk4/VEjEdcdu-hI/AAAAAAAAXvk/yf7NfsFJ9ik/s1600/ASI_cover-web.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-8jI8NZT3Mk4/VEjEdcdu-hI/AAAAAAAAXvk/yf7NfsFJ9ik/s1600/ASI_cover-web.png" height="400" width="302" /></a></div><h2>About the book</h2><div>The book's purpose and structure have not changed considerably since it was first <a href="http://nelenkov.blogspot.com/2014/04/android-security-internals.html">announced</a>. 
It walks you through Android's security architecture, starting from the bottom up. It begins with fundamental concepts such as Binder, permissions and code signing, and goes on to describe more specific topics such as cryptographic providers, account management and device administration. The book includes excerpts from core native daemons and platform services, as well as some application-level code samples, so some familiarity with Linux and Android programming is assumed (but not absolutely required).</div><div><h2>Android versions covered</h2></div><div>The book covers Android 4.4, based on the source code publicly released through <a href="https://source.android.com/">AOSP</a>. Android's master branch is also referenced a few times, because master changes are usually a good indicator of the direction future releases will take. Vendor modifications or extensions to Android, as well as device-specific features, are not discussed.</div><div><br /></div><div>The first developer preview of Android 5.0 (Lollipop, then known only as 'Android L') was announced shortly after the first draft of this book was finished. This first preview L release included some new security features, such as improvements to <a href="http://nelenkov.blogspot.com/2014/10/revisiting-android-disk-encryption.html">full-disk encryption</a> and device administration, but not all planned features were available (for example, Smart Lock was missing). The final Lollipop developer preview (<a href="https://developer.android.com/preview/index.html">released</a> last week) added those missing features and finalized the public <a href="https://developer.android.com/reference/packages.html">API</a>. The source code for Lollipop is, however, not yet available, and trying to write an 'internals' book without it would either result in incomplete or speculative coverage, or would turn into a (rather tough) exercise in reverse engineering. 
That is why I've chosen not to cover Android 5.0 in the book at all: it focuses exclusively on Android 4.4 (KitKat).<br /><br />Lollipop is a major release, and as such would require reworking most of the chapters and, of course, adding a lot of new content. This could happen in an updated version of the book at some point. Not to worry though: some of the more interesting new security features will probably get covered right here, on the blog, first.</div><div><br /></div><div>With that out of the way, here is the extended table of contents. You can find the full table of contents on the book's <a href="http://www.nostarch.com/androidsecurity">official page</a>.<br /><br />Update: Chapter 1 is now also freely available on No Starch's site.</div><h2>Table of contents</h2><div><b>Chapter 1: Android’s Security Model</b></div><div><div><ul><li>Android’s Architecture</li><li>Android’s Security Model</li></ul></div><div><b>Chapter 2: Permissions</b></div><div><ul><li>The Nature of Permissions</li><li>Requesting Permissions</li><li>Permission Management</li><li>Permission Protection Levels</li><li>Permission Assignment</li><li>Permission Enforcement</li><li>System Permissions</li><li>Shared User ID</li><li>Custom Permissions</li><li>Public and Private Components</li><li>Activity and Service Permissions</li><li>Broadcast Permissions</li><li>Content Provider Permissions</li><li>Pending Intents</li></ul></div><div><b>Chapter 3: Package Management</b></div><div><ul><li>Android Application Package Format</li><li>Code Signing</li><li>APK Install Process</li><li>Package Verification</li></ul></div><div><b>Chapter 4: User Management</b></div><div><ul><li>Multi-User Support Overview</li><li>Types of Users</li><li>User Management</li><li>User Metadata</li><li>Per-User Application Management</li><li>External Storage</li><li>Other Multi-User Features</li></ul></div><div><b>Chapter 5: Cryptographic Providers</b></div><div><ul><li>JCA Provider 
Architecture</li><li>JCA Engine Classes</li><li>Android JCA Providers</li><li>Using a Custom Provider</li></ul></div><div><b>Chapter 6: Network Security and PKI</b></div><div><ul><li>PKI and SSL Overview</li><li>JSSE Introduction</li><li>Android JSSE Implementation</li></ul></div><div><b>Chapter 7: Credential Storage</b></div><div><ul><li>VPN and Wi-Fi EAP Credentials</li><li>Credential Storage Implementation</li><li>Public APIs</li></ul></div><div><b>Chapter 8: Online Account Management</b></div><div><ul><li>Android Account Management Overview</li><li>Account Management Implementation</li><li>Google Accounts Support</li></ul></div><div><b>Chapter 9: Enterprise Security</b></div><div><ul><li>Device Administration</li><li>VPN Support</li><li>Wi-Fi EAP</li></ul></div><div><b>Chapter 10: Device Security</b></div><div><ul><li>Controlling OS Boot-Up and Installation</li><li>Verified Boot</li><li>Disk Encryption</li><li>Screen Security</li><li>Secure USB Debugging</li><li>Android Backup</li></ul></div><div><b>Chapter 11: NFC and Secure Elements</b></div><div><ul><li>NFC Overview</li><li>Android NFC Support</li><li>Secure Elements</li><li>Software Card Emulation</li></ul></div><div><b>Chapter 12: SELinux</b></div><div><ul><li>SELinux Introduction</li><li>Android Implementation</li><li>Android 4.4 SELinux Policy</li></ul></div><div><b>Chapter 13: System Updates and Root Access</b></div><div><ul><li>Bootloader</li><li>Recovery</li><li>Root Access</li><li>Root Access on Production Builds</li></ul></div></div><br /><br /><b>Revisiting Android disk encryption</b> (2014-10-06)<br /><br />In iOS 8, Apple has expanded the scope of data encryption and now mixes in the user's passcode with an unextractable hardware UID when deriving an encryption key, <a 
href="http://blog.cryptographyengineering.com/2014/10/why-cant-apple-decrypt-your-iphone.html">making it harder</a> to extract data from iOS 8 devices. This has been somewhat of a hot topic lately, with opinions ranging from praise for Apple's new focus on serious security, to demands for "golden keys" to mobile devices to be magically conjured up. Naturally, the debate has spread to other OSes, and Google has announced that the upcoming Android L release will also have disk encryption <a href="http://arstechnica.com/gadgets/2014/09/android-l-will-have-device-encryption-on-by-default/">enabled by default</a>. Consequently, <a href="https://security.stackexchange.com/questions/67763/does-android-encryption-really-prevent-law-enforcement-access/68062#68062">questions</a> and <a href="https://security.stackexchange.com/questions/68454/android-l-encryption-vs-ios-8-encryption">speculation</a> about the usefulness and strength of Android's disk encryption have sprung up on multiple forums, so this seems like a good time to take another look at its implementation. While Android L hasn't been released yet, some of the improvements to disk encryption it introduces are apparent in the preview release, so this post will briefly introduce them as well.<br /><br />This post will focus on the security level of disk encryption; for more details on its integration with the platform, see Chapter 10 of my book, '<a href="http://www.nostarch.com/androidsecurity">Android Security Internals</a>' (early access full PDF is available now, print books should ship by end of October).<br /><h2>Android 3.0-4.3</h2><div>Full disk encryption (FDE) for Android was introduced in version 3.0 (Honeycomb) and didn't change much until version 4.4 (discussed in the next section). 
<a href="http://source.android.com/devices/tech/encryption/">Android's FDE</a> uses the <a href="https://code.google.com/p/cryptsetup/wiki/DMCrypt">dm-crypt</a> target of Linux's device mapper framework to implement transparent disk encryption for the <i>userdata</i> (mounted as <code>/data</code>) partition. Once encryption is enabled, all writes to disk automatically encrypt data before committing it to disk and all reads automatically decrypt data before returning it to the calling process. The disk encryption key (128-bit, called the 'master key') is randomly generated and protected by the lockscreen password. Individual disk sectors are encrypted by the master key using AES in CBC mode, with <a href="https://en.wikipedia.org/wiki/Disk_encryption_theory#Encrypted_salt-sector_initialization_vector_.28ESSIV.29">ESSIV:SHA256</a> to derive sector IVs.<br /><br />Android uses a so-called 'crypto footer' structure to store encryption parameters. It is very similar to the encrypted partition header used by <a href="http://wiki.cryptsetup.googlecode.com/git/LUKS-standard/on-disk-format.pdf">LUKS</a> (Linux Unified Key Setup), but is simpler and omits several LUKS features. While LUKS supports multiple key slots, allowing for decryption using multiple passphrases, Android's crypto footer only stores a single copy of the encrypted master key and thus supports a single decryption passphrase. Additionally, while LUKS splits the encrypted key into multiple 'stripes' in order to reduce the probability of recovering the full key after it has been deleted from disk, Android has no such feature. Finally, LUKS includes a master key checksum (derived by running the master key through <a href="https://en.wikipedia.org/wiki/PBKDF2">PBKDF2</a>), which makes it possible to check whether the entered passphrase is correct without decrypting any of the disk data. 
Android's crypto footer doesn't include a master key checksum, so the only way to check whether the entered passphrase is correct is to try and mount the encrypted partition. If the mount succeeds, the passphrase is considered correct.<br /><br />Here's how the crypto footer looks in Android 4.3 (version 1.0).<br /><br /><pre>struct crypt_mnt_ftr {<br /> __le32 magic; <br /> __le16 major_version;<br /> __le16 minor_version;<br /> __le32 ftr_size;<br /> __le32 flags; <br /> __le32 keysize;<br /> __le32 spare1;<br /> __le64 fs_size;<br /> __le32 failed_decrypt_count; <br /> unsigned char crypto_type_name[MAX_CRYPTO_TYPE_NAME_LEN]; <br />};<br /></pre><br />The structure includes the version of the FDE scheme, the key size, some flags and the name of the actual disk encryption cipher mode (<i>aes-cbc-essiv:sha256</i>). The crypto footer is immediately followed by the encrypted key and a 16-byte random salt value. In this initial version, a lot of the parameters are implicit and are therefore not included in the crypto footer. The master key is encrypted using a 128-bit AES key (key encryption key, or KEK) derived from a user-supplied passphrase using 2000 iterations of PBKDF2. The derivation process also generates an IV, which is used to encrypt the master key in CBC mode. When an encrypted device is booted, Android takes the passphrase the user has entered, runs it through PBKDF2, decrypts the encrypted master key and passes it to dm-crypt in order to mount the encrypted <i>userdata</i> partition.<br /><h2>Bruteforcing FDE 1.0</h2>The encryption scheme described in the previous section is considered relatively secure, but because it is implemented entirely in software, its security depends on the complexity of the disk encryption passphrase. If it is sufficiently long and complex, bruteforcing the encrypted master key could take years. 
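To make the layout and the KDF concrete, here is a small Python sketch (the function names are mine, not AOSP's) that unpacks the fixed-size part of a version 1.0 footer and derives the KEK and IV as described above, assuming <code>MAX_CRYPTO_TYPE_NAME_LEN</code> is 64 as in AOSP's <code>cryptfs.h</code>:

```python
import hashlib
import struct

# Layout of the version 1.0 footer shown above, assuming
# MAX_CRYPTO_TYPE_NAME_LEN = 64 as in AOSP's cryptfs.h.
FTR_FMT = '<IHHIIIIQI64s'

def parse_footer_v10(blob: bytes) -> dict:
    """Unpack the fixed-size part of a v1.0 crypto footer."""
    fields = struct.unpack_from(FTR_FMT, blob)
    keys = ('magic', 'major_version', 'minor_version', 'ftr_size',
            'flags', 'keysize', 'spare1', 'fs_size',
            'failed_decrypt_count', 'crypto_type_name')
    return dict(zip(keys, fields))

def derive_kek_iv(passphrase: bytes, salt: bytes):
    """FDE 1.0 KDF sketch: PBKDF2-HMAC-SHA1 with 2000 iterations,
    yielding a 16-byte KEK followed by a 16-byte IV."""
    d = hashlib.pbkdf2_hmac('sha1', passphrase, salt, 2000, dklen=32)
    return d[:16], d[16:]

# The master key is then recovered with AES-128-CBC:
#   master_key = AES_CBC_decrypt(kek, iv, encrypted_master_key)
```

This is only an illustration of the scheme; the real implementation lives in <code>system/vold/cryptfs.c</code> in AOSP.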
However, because Android has chosen to reuse the lockscreen PIN or password (maximum length 16 characters), in practice most people are likely to end up with a relatively short or low-entropy disk encryption password. While the PBKDF2 key derivation algorithm has been designed to work with low-entropy input, and requires considerable computational effort to bruteforce, 2000 iterations are not a significant hurdle even for current off-the-shelf hardware. Let's see how hard it is to bruteforce Android FDE 1.0 in practice.<br /><br />Bruteforcing on the device is obviously impractical due to the limited processing resources of Android devices and the built-in rate limiting after several unsuccessful attempts. A much more practical approach is to obtain a copy of the crypto footer and the encrypted <i>userdata</i> partition and try to guess the passphrase offline, using much more powerful hardware. Obtaining a raw copy of a disk partition is usually not possible on most commercial devices, but can be achieved by booting a specialized data acquisition boot image signed by the device manufacturer, exploiting a flaw in the bootloader that allows unsigned images to be booted (such as <a href="https://www.codeaurora.org/projects/security-advisories/fastboot-boot-command-bypasses-signature-verification-cve-2014-4325">this one</a>), or simply by booting a custom recovery image on devices with an unlocked bootloader (a typical first step to 'rooting').<br /><br />Once the device has been booted, obtaining a copy of the <i>userdata</i> partition is straightforward. The crypto footer, however, despite its name, typically resides on a dedicated partition on recent devices. The name of the partition is specified using the <code>encryptable</code> flag in the device's <code>fstab</code> file. 
For example, on the Galaxy Nexus, the footer is on the <i>metadata</i> partition as shown below.<br /><br /><pre>/dev/block/platform/omap/omap_hsmmc.0/by-name/userdata /data ext4 \<br />noatime,nosuid,nodev,nomblk_io_submit,errors=panic \<br />wait,check,encryptable=/dev/block/platform/omap/omap_hsmmc.0/by-name/metadata<br /></pre><br />Once we know the name of the partition that stores the crypto footer, it can be copied simply by using the <code>dd</code> command.<br /><br />Very short passcodes (for example a 4-digit PIN) can be successfully bruteforced using a <a href="https://github.com/santoku/Santoku-Linux/blob/master/tools/android/android_bruteforce_stdcrypto/bruteforce_stdcrypto.py">script</a> (this particular one is included in <a href="https://santoku-linux.com/">Santoku Linux</a>) that runs on a desktop CPU. However, much better performance can be achieved on a GPU, which has been specifically designed to execute multiple tasks in parallel. PBKDF2 is an iterative algorithm based on SHA-1 (SHA-2 can also be used) that requires very little memory for execution and lends itself to parallelization. One GPU-based, high-performance PBKDF2 implementation is found in the popular password recovery tool <a href="https://hashcat.net/oclhashcat/">hashcat</a>. Version 1.30 comes with a built-in Android FDE module, so recovering an Android disk encryption key is as simple as parsing the crypto footer and feeding the encrypted key, salt, and the first several sectors of the encrypted partition to hashcat. As we noted in the previous section, the crypto footer does not include any checksum of the master key, so the only way to check whether the decrypted master key is the correct one is to try to decrypt the disk partition and look for some known data. 
Because most current Android devices use the ext4 filesystem, hashcat (and other similar tools) <a href="https://hashcat.net/forum/thread-2270.html">look for patterns</a> in the ext4 superblock in order to confirm whether the tried passphrase is correct.<br /><br />The Android FDE input for hashcat includes the salt, encrypted master key and the first 3 sectors of the encrypted partition (which contain a copy of the 1024-byte ext4 superblock). The hashcat input file might look like this (taken from the hashcat <a href="http://hashcat.net/wiki/doku.php?id=example_hashes">example hash</a>):<br /><br /><pre>$fde$16$ca56e82e7b5a9c2fc1e3b5a7d671c2f9$16$7c124af19ac913be0fc137b75a34b20d$eac806ae7277c8d4...<br /></pre><br />On a device that uses a six-digit lockscreen PIN, the PIN, and consequently the FDE master key, can be recovered with the following command:<br /><br /><pre>$ cudaHashcat64 -m 8800 -a 3 android43fde.txt ?d?d?d?d?d?d<br />...<br />Session.Name...: cudaHashcat<br />Status.........: Cracked<br />Input.Mode.....: Mask (?d?d?d?d?d?d) [6]<br />Hash.Target....: $fde$16$aca5f840...<br />Hash.Type......: Android FDE<br />Time.Started...: Sun Oct 05 19:06:23 2014 (6 secs)<br />Speed.GPU.#1...: 20629 H/s<br />Recovered......: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts<br />Progress.......: 122880/1000000 (12.29%)<br />Skipped........: 0/122880 (0.00%)<br />Rejected.......: 0/122880 (0.00%)<br />HWMon.GPU.#1...: 0% Util, 48c Temp, N/A Fan<br /><br />Started: Sun Oct 05 19:06:23 2014<br />Stopped: Sun Oct 05 19:06:33 2014<br /></pre><br />Even when run on the GPU of a mobile computer (NVIDIA GeForce 730M), hashcat can achieve more than 20,000 PBKDF2 hashes per second, and recovering a 6-digit PIN takes less than 10 seconds. On the same hardware, a 6-letter (lowercase only) password takes about 4 hours. <br /><br />As you can see, bruteforcing a simple PIN or password is very much feasible, so choosing a strong lockscreen password is vital. 
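The known-plaintext check these tools perform is simple: the ext4 superblock starts 1024 bytes into the partition and carries the magic value 0xEF53 in its <code>s_magic</code> field at offset 0x38, so a candidate key is accepted if the decrypted sectors contain it. A minimal version of the check might look like this:

```python
import struct

EXT4_SUPERBLOCK_OFFSET = 1024  # superblock starts at byte 1024
EXT4_MAGIC_OFFSET = 0x38       # s_magic field within the superblock
EXT4_MAGIC = 0xEF53

def looks_like_ext4(decrypted: bytes) -> bool:
    """Return True if the decrypted partition data begins with a
    plausible ext4 superblock (the check bruteforcers rely on)."""
    off = EXT4_SUPERBLOCK_OFFSET + EXT4_MAGIC_OFFSET
    (magic,) = struct.unpack_from('<H', decrypted, off)
    return magic == EXT4_MAGIC
```

In practice the tools match several superblock fields, not just the magic, to reduce false positives.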
Lockscreen password strength can be enforced by installing a <a href="https://developer.android.com/guide/topics/admin/device-admin.html">device administrator</a>&nbsp;that sets password complexity requirements. Alternatively, a dedicated disk encryption password can be set on rooted devices using the <a href="http://nelenkov.blogspot.com/2012/08/changing-androids-disk-encryption.html">shell</a> or a dedicated <a href="https://play.google.com/store/apps/details?id=org.nick.cryptfs.passwdmanager">application</a>. CyanogenMod 11 supports setting a dedicated disk encryption password out of the box, and one can be set via system Settings, as shown below.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-fdc5CX9mG3I/VDILegwTSfI/AAAAAAAAXbo/5KUwlItrY_c/s1600/cm11-fde-password.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-fdc5CX9mG3I/VDILegwTSfI/AAAAAAAAXbo/5KUwlItrY_c/s1600/cm11-fde-password.png" height="640" width="384" /></a></div><h2>Android 4.4</h2></div><div>Android 4.4 adds several improvements to disk encryption, but the most important one is replacing the PBKDF2 key derivation function (KDF) with <a href="https://www.tarsnap.com/scrypt.html">scrypt</a>. scrypt has been specifically designed to be hard to crack on GPUs by requiring a large (and configurable) amount of memory. Because GPUs have a limited amount of memory, executing multiple scrypt tasks in parallel is no longer feasible, and thus cracking scrypt is much slower than PBKDF2 (or similar hash-based KDFs). As part of the upgrade process to 4.4, Android automatically updates the crypto footer to use scrypt and re-encrypts the master key. 
Thus every device running Android 4.4 (devices using a vendor-proprietary FDE scheme excluded) should have its FDE master key protected using an scrypt-derived key.<br /><br />The Android 4.4 crypto footer looks like this (version 1.2):</div><br /><pre>struct crypt_mnt_ftr {<br /> __le32 magic; <br /> __le16 major_version;<br /> __le16 minor_version;<br /> __le32 ftr_size; <br /> __le32 flags; <br /> __le32 keysize;<br /> __le32 spare1;<br /> __le64 fs_size;<br /> __le32 failed_decrypt_count;<br /> unsigned char crypto_type_name[MAX_CRYPTO_TYPE_NAME_LEN];<br /> __le32 spare2; <br /> unsigned char master_key[MAX_KEY_LEN];<br /> unsigned char salt[SALT_LEN];<br /> __le64 persist_data_offset[2]; <br /> __le32 persist_data_size; <br /> __le8 kdf_type; <br /> /* scrypt parameters. See www.tarsnap.com/scrypt/scrypt.pdf */<br /> __le8 N_factor; /* (1 &lt;&lt; N) */<br /> __le8 r_factor; /* (1 &lt;&lt; r) */<br /> __le8 p_factor; /* (1 &lt;&lt; p) */<br />};<br /></pre><br />As you can see, the footer now includes an explicit <code>kdf_type</code> which specifies the KDF used to derive the master key KEK. The values of the scrypt initialization parameters (N, r and p) are also included. The master key size (128-bit) and disk sector encryption mode (<i>aes-cbc-essiv:sha256</i>) are the same as in 4.3.<br /><br />Bruteforcing the master key now requires parsing the crypto footer, initializing scrypt and generating all target PIN or password combinations. As the 1.2 crypto footer still does not include a master key checksum, checking whether the tried PIN or password is correct again requires looking for known plaintext in the ext4 superblock.<br /><br />While hashcat has supported scrypt since version 1.30, it is not much more efficient (and in fact can be slower) than running scrypt on a CPU. 
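Using the parameters stored in the footer (here N_factor=15, r_factor=3, p_factor=1, i.e. N=32768, r=8, p=2), the 4.4 KEK derivation can be sketched with Python's <code>hashlib.scrypt</code>; the function name is illustrative, not AOSP's:

```python
import hashlib

def derive_kek_iv_44(passphrase: bytes, salt: bytes,
                     n_factor=15, r_factor=3, p_factor=1):
    """Android 4.4 FDE KDF sketch: scrypt with parameters taken from
    the crypto footer (N = 1 << N_factor, etc.), again yielding a
    16-byte KEK followed by a 16-byte IV."""
    d = hashlib.scrypt(passphrase, salt=salt,
                       n=1 << n_factor, r=1 << r_factor, p=1 << p_factor,
                       maxmem=64 * 1024 * 1024,  # N=32768, r=8 needs 32 MiB
                       dklen=32)
    return d[:16], d[16:]
```

The <code>maxmem</code> argument matters: with these parameters a single scrypt evaluation needs 32 MiB of scratch memory, which is exactly what makes running thousands of instances in parallel on a GPU impractical.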
Additionally, the Android 4.4 crypto footer format is not supported, so hashcat cannot be used to recover Android 4.4 disk encryption passphrases as is.<br /><br />Instead, the Santoku Linux FDE bruteforcer Python script can be extended to support the 1.2 crypto footer format and the scrypt KDF. A sample (and not particularly efficient) implementation can be found <a href="https://github.com/nelenkov/Santoku-Linux/blob/master/tools/android/android_bruteforce_stdcrypto/bruteforce_stdcrypto.py">here</a>. It might produce the following output when run on a 3.50GHz Intel Core i7 CPU:<br /><br /><pre>$ time python bruteforce_stdcrypto.py header footer 4<br /><br />Android FDE crypto footer<br />-------------------------<br />Magic : 0xD0B5B1C4<br />Major Version : 1<br />Minor Version : 2<br />Footer Size : 192 bytes<br />Flags : 0x00000000<br />Key Size : 128 bits<br />Failed Decrypts: 0<br />Crypto Type : aes-cbc-essiv:sha256<br />Encrypted Key : 0x66C446E04854202F9F43D69878929C4A<br />Salt : 0x3AB4FA74A1D6E87FAFFB74D4BC2D4013<br />KDF : scrypt<br />N_factor : 15 (N=32768)<br />r_factor : 3 (r=8)<br />p_factor : 1 (p=2)<br />-------------------------<br />Trying to Bruteforce Password... please wait<br />Trying: 0000<br />Trying: 0001<br />Trying: 0002<br />Trying: 0003<br />...<br />Trying: 1230<br />Trying: 1231<br />Trying: 1232<br />Trying: 1233<br />Trying: 1234<br />Found PIN!: 1234<br /><br />real 4m43.985s<br />user 4m34.156s<br />sys 0m9.759s<br /></pre><br />As you can see, trying 1200 PIN combinations requires almost 5 minutes, so recovering a simple PIN is no longer instantaneous. 
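Extrapolating from the run above (roughly 1,235 guesses in about 284 seconds on a single core), the cost of exhausting common keyspaces is easy to estimate:

```python
def worst_case_seconds(keyspace: int, guesses_per_second: float) -> float:
    """Time to exhaust a keyspace at a fixed guessing rate."""
    return keyspace / guesses_per_second

rate = 1235 / 284.0                        # about 4.3 scrypt guesses/second
pin4 = worst_case_seconds(10 ** 4, rate)   # 4-digit PIN: ~38 minutes
word6 = worst_case_seconds(26 ** 6, rate)  # 6 lowercase letters: ~2.3 years
```

These are single-core numbers for a naive script; a parallel CPU implementation would scale them down, but nowhere near the orders-of-magnitude speedup GPUs give for PBKDF2.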
That said, cracking a short PIN or password is still very much feasible, so choosing a strong lockscreen password (or a dedicated disk encryption password, when possible) is still very important.<br /><h2>Android L</h2><div>A preview release of the upcoming Android version (referred to as 'L') has been available for several months now, so we can observe some of the expected changes to disk encryption. If we run the crypto footer obtained from an encrypted Android L device through the script introduced in the previous section, we may get the following output:<br /><br /><pre>$ ./bruteforce_stdcrypto.py header L_footer 4<br /><br />Android FDE crypto footer<br />-------------------------<br />Magic : 0xD0B5B1C4<br />Major Version : 1<br />Minor Version : 3<br />Footer Size : 2288 bytes<br />Flags : 0x00000000<br />Key Size : 128 bits<br />Failed Decrypts: 0<br />Crypto Type : aes-cbc-essiv:sha256<br />Encrypted Key : 0x825F3F10675C6F8B7A6F425599D9ECD7<br />Salt : 0x0B9C7E8EA34417ED7425C3A3CFD2E928<br />KDF : unknown (3)<br />N_factor : 15 (N=32768)<br />r_factor : 3 (r=8)<br />p_factor : 1 (p=2)<br />-------------------------<br />...<br /></pre><br />As you can see above, the crypto footer version has been upped to 1.3, but the disk encryption cipher mode and key size have not changed. However, version 1.3 uses a new, unknown KDF specified with the constant 3 (1 is PBKDF2, 2 is scrypt). Additionally, encrypting a device no longer requires setting a lockscreen PIN or password, which suggests that the master key KEK is no longer directly derived from the lockscreen password. 
Starting the encryption process produces the following logcat output:<br /><br /><pre>D/QSEECOMAPI: ( 178): QSEECom_start_app sb_length = 0x2000<br />D/QSEECOMAPI: ( 178): App is already loaded QSEE and app id = 1<br />D/QSEECOMAPI: ( 178): QSEECom_shutdown_app <br />D/QSEECOMAPI: ( 178): QSEECom_shutdown_app, app_id = 1<br />...<br />I/Cryptfs ( 178): Using scrypt with keymaster for cryptfs KDF<br />D/QSEECOMAPI: ( 178): QSEECom_start_app sb_length = 0x2000<br />D/QSEECOMAPI: ( 178): App is already loaded QSEE and app id = 1<br />D/QSEECOMAPI: ( 178): QSEECom_shutdown_app <br />D/QSEECOMAPI: ( 178): QSEECom_shutdown_app, app_id = 1<br /></pre><br />As discussed in a previous <a href="http://nelenkov.blogspot.jp/2013/08/credential-storage-enhancements-android-43.html">post</a>, 'QSEE' stands for <a href="https://www.qualcomm.com/products/snapdragon/security">Qualcomm</a> Secure Execution Environment, which is an ARM TrustZone-based implementation of a <a href="http://www.globalplatform.org/mediaguidetee.asp">TEE</a>. QSEE provides the hardware-backed credential store on most devices that use recent Qualcomm SoCs. From the log above, it appears that Android's <a href="http://nelenkov.blogspot.jp/2012/07/jelly-bean-hardware-backed-credential.html">keymaster</a>&nbsp;<a href="https://source.android.com/devices/reference/keymaster_8h.html">HAL</a> module has been extended to store the disk encryption key KEK in hardware-backed storage (Cf. 'Using scrypt with keymaster for cryptfs KDF' in the log above). The log also mentions scrypt, so it is possible that the lockscreen password (if present) along with some key (or seed) stored in the TEE are fed to the KDF to produce the final master key KEK. However, since no source code is currently available, we cannot confirm this. 
That said, setting an unlock pattern on an encrypted Android L device produces the following output, which suggests that the pattern is indeed used when generating the encryption key:<br /><br /><pre>D/VoldCmdListener( 173): cryptfs changepw pattern {}<br />D/QSEECOMAPI: ( 173): QSEECom_start_app sb_length = 0x2000<br />D/QSEECOMAPI: ( 173): App is already loaded QSEE and app id = 1<br />...<br />D/QSEECOMAPI: ( 173): QSEECom_shutdown_app <br />D/QSEECOMAPI: ( 173): QSEECom_shutdown_app, app_id = 1<br />I/Cryptfs ( 173): Using scrypt with keymaster for cryptfs KDF<br />D/QSEECOMAPI: ( 173): QSEECom_start_app sb_length = 0x2000<br />D/QSEECOMAPI: ( 173): App is already loaded QSEE and app id = 1<br />D/QSEECOMAPI: ( 173): QSEECom_shutdown_app <br />D/QSEECOMAPI: ( 173): QSEECom_shutdown_app, app_id = 1<br />E/VoldConnector( 756): NDC Command {5 cryptfs changepw pattern [scrubbed]} took too long (6210ms)<br /></pre><br />As you can see in the listing above, the <code>cryptfs changepw</code> command, which is used to send instructions to Android's <code>vold</code> daemon, has been extended to support a pattern, in addition to the previously supported PIN/password. Additionally, the amount of time the password change takes (6 seconds) suggests that the KDF (scrypt) is indeed being executed to generate a new encryption key. Once we've set a lockscreen unlock pattern, booting the device now requires entering the pattern, as can be seen in the screenshot below. 
Another subtle change introduced in Android L is that when booting an encrypted device the lockscreen pattern, PIN or password needs to be entered only once (at boot time), and not twice (once more on the lockscreen, after Android boots) as in previous versions.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/--uL8H4uJoRI/VDIajvtU6dI/AAAAAAAAXcE/7qcnDLROVmQ/s1600/l-fde-pattern-cropped.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/--uL8H4uJoRI/VDIajvtU6dI/AAAAAAAAXcE/7qcnDLROVmQ/s1600/l-fde-pattern-cropped.jpg" height="400" width="363" /></a></div><br /></div><div>While no definitive details are available, it is fairly certain that (at least on high-end devices) Android's disk encryption key(s) will have some hardware protection in Android L. Assuming that the implementation is similar to that of the hardware-backed credential store, disk encryption keys should be encrypted by an unextractable key encryption key stored in the SoC, so obtaining a copy of the crypto footer and the encrypted <i>userdata</i> partition and bruteforcing the lockscreen passphrase should no longer be sufficient to decrypt disk contents. Disk encryption in the Android L preview (at least on a Nexus 7 2013) feels significantly faster (encrypting the 16GB data partition takes about 10 minutes), so it is most probably hardware-accelerated as well (or the initial encryption only encrypts disk blocks that are actually in use, and not every single block as in previous versions). However, it remains to be seen whether high-end Android L devices will include a dedicated crypto co-processor akin to Apple's 'Secure Enclave'. 
While the current TrustZone-based key protection is much better than the software-only implementation found in previous versions, a flaw in the secure TEE OS or any of the trusted TEE applications could lead to extracting hardware-protected keys or otherwise compromising the integrity of the system.<br /><br />Update 2014/11/4: The <a href="http://source.android.com/devices/tech/encryption/index.html">official documentation</a> about disk encryption has been updated, including details about KEK protection. Quote:<br /><blockquote class="tr_bq">The encrypted key is stored in the crypto metadata. Hardware backing is implemented by using Trusted Execution Environment’s (TEE) signing capability. Previously, we encrypted the master key with a key generated by applying scrypt to the user's password and the stored salt. In order to make the key resilient against off-box attacks, we extend this algorithm by signing the resultant key with a stored TEE key. The resultant signature is then turned into an appropriate length key by one more application of scrypt. This key is then used to encrypt and decrypt the master key. To store this key:</blockquote><blockquote class="tr_bq"><ol><li>Generate random 16-byte disk encryption key (DEK) and 16-byte salt.</li><li>Apply scrypt to the user password and the salt to produce 32-byte intermediate key 1 (IK1).</li><li>Pad IK1 with zero bytes to the size of the hardware-bound private key (HBK). 
Specifically, we pad as: 00 || IK1 || 00..00; one zero byte, 32 IK1 bytes, 223 zero bytes.</li><li>Sign padded IK1 with HBK to produce 256-byte IK2.</li><li>Apply scrypt to IK2 and salt (same salt as step 2) to produce 32-byte IK3.</li><li>Use the first 16 bytes of IK3 as KEK and the last 16 bytes as IV.</li><li>Encrypt DEK with AES_CBC, with key KEK, and initialization vector IV.</li></ol></blockquote>Here's a diagram that visualizes this process:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-rBC5kOREgdI/VNPINZUg2xI/AAAAAAAAZTw/a5YvKM3i4BY/s1600/lollipop-dek-encryption.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-rBC5kOREgdI/VNPINZUg2xI/AAAAAAAAZTw/a5YvKM3i4BY/s1600/lollipop-dek-encryption.png" height="452" width="640" /></a></div><h2>Summary</h2></div><div>Android has included full disk encryption (FDE) support since version 3.0, but versions prior to 4.4 used a fairly easy-to-bruteforce key derivation function (PBKDF2 with 2000 iterations). Additionally, because the disk encryption password is the same as the lockscreen one, most users tend to use simple PINs or passwords (unless a device administrator enforces password complexity rules), which further facilitates bruteforcing. Android 4.4 replaced the disk encryption KDF with scrypt, which is much harder to crack and cannot be implemented efficiently on off-the-shelf GPU hardware. In addition to enabling FDE out of the box, Android L is expected to include hardware protection for disk encryption keys, as well as hardware acceleration for encrypted disk access. These two features should make FDE on Android both more secure and much faster.<br /><br />[Note that the discussion in this post is based on "stock Android" as released by Google (referenced source code is from <a href="https://source.android.com/">AOSP</a>). 
Some device vendors implement slightly different encryption schemes, and hardware-backed key storage and/or hardware acceleration are already available via vendor extensions on some high-end devices.]</div><br />Secure voice communication on Android (Nikolay Elenkov, 2014-07-23)<br /><br />While the topic of secure voice communication on mobile is hardly new, it has been getting a lot of media attention following the official release of the <a href="https://www.blackphone.ch/">Blackphone</a>. Consequently, this is a good time to go back to basics and look into how secure voice communication is typically implemented. While this post focuses on Android, most of the discussion applies to other platforms too, with only the mobile clients presented being Android-specific.<br /><h2>Voice over IP</h2><div>Modern mobile networks already encrypt phone calls, so voice communication is secure by default, right? As it turns out, the original GSM encryption protocol (A5/1) is quite weak and can be attacked with readily available <a href="https://srlabs.de/decrypting_gsm/">hardware and software</a>. The somewhat more modern alternative (A5/3) is also not without flaws, and in addition its adoption has been <a href="https://srlabs.de/gsmmap/">fairly slow</a>, especially in some parts of the <a href="http://gsmmap.org/">world</a>. Finally, mobile networks depend on a shared key, which, while protected by hardware (UICC/SIM card) on mobile phones, can be obtained from MNOs (via legal or other means) and used to enable call interception and decryption.<br /><br /></div><div>So what's the alternative? Short of building your own cellular network, the alternative is to use the data connectivity of the device to transmit and receive voice. 
This strategy is known as Voice over IP (<a href="https://en.wikipedia.org/wiki/Voice_over_ip">VoIP</a>) and has been around for a while, but the data speeds offered by mobile networks have only recently reached levels that make it practical on mobiles.<br /><h3>Session Initiation Protocol</h3></div><div>Different technologies and standards that enable VoIP are available, but by far the most widely adopted one relies on the Session Initiation Protocol (<a href="https://en.wikipedia.org/wiki/Session_Initiation_Protocol">SIP</a>). As the name implies, SIP is a signalling protocol, whose purpose is to establish a media session between endpoints. A session is established by discovering the remote endpoint(s), negotiating a media path and codec, and establishing one or more media streams between the endpoints. Media negotiation is achieved with the help of the Session Description Protocol (<a href="http://tools.ietf.org/html/rfc4566.html">SDP</a>), and media is typically transmitted using the Real-time Transport Protocol (<a href="http://www.ietf.org/rfc/rfc1889.txt">RTP</a>). While a SIP client, or more correctly a user agent (UA), can connect directly to a peer, peer discovery usually makes use of one or more well-known registrars. A registrar is a SIP endpoint (server) which accepts <code>REGISTER</code> requests from a set of clients in the domain(s) it is responsible for, and offers a location service to interested parties, much like DNS. Registration is dynamic and temporary: each client registers its SIP URI and IP address with the registrar, thus making it possible for other peers to discover it for the duration of the registration period. 
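To make the registration step concrete, the sketch below assembles a minimal REGISTER request. All addresses, tags and the Call-ID are made-up example values; a real UA would also handle digest authentication and generate unique branch and tag parameters:

```python
def build_register(user: str, domain: str, client_ip: str,
                   expires: int = 3600) -> str:
    """Assemble a minimal SIP REGISTER request (illustrative values)."""
    uri = f'sip:{user}@{domain}'
    lines = [
        f'REGISTER sip:{domain} SIP/2.0',
        f'Via: SIP/2.0/UDP {client_ip}:5060;branch=z9hG4bKexample',
        'Max-Forwards: 70',
        f'To: <{uri}>',
        f'From: <{uri}>;tag=12345',
        f'Call-ID: example-call-id@{client_ip}',
        'CSeq: 1 REGISTER',
        f'Contact: <sip:{user}@{client_ip}:5060>',
        f'Expires: {expires}',
        'Content-Length: 0',
    ]
    return '\r\n'.join(lines) + '\r\n\r\n'
```

The <code>Contact</code> header is what the registrar records as the client's current location, and <code>Expires</code> bounds how long that binding is valid.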
The SIP URI can contain arbitrary alphanumeric characters (much like an email address), but the username part is typically limited to numbers for backward compatibility with existing networks and devices (e.g., <code>sip:0123456789@mydomain.org</code>).</div><br />A SIP call is initiated by a UA sending an <code>INVITE</code> message specifying the target peer, which might be mediated by multiple SIP 'servers' (registrars and/or proxies). Once a media path has been negotiated, the two endpoints (Phone A and Phone B in the figure below) might communicate directly (as shown in the figure) or via one or more media proxies which help bridge SIP clients that don't have a publicly routable IP address (such as those behind NAT), implement conferencing, etc.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-Eh56jh2pavg/U84e6PKRN9I/AAAAAAAAVxA/wGGCB5CxnQw/s1600/sip-session.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-Eh56jh2pavg/U84e6PKRN9I/AAAAAAAAVxA/wGGCB5CxnQw/s1600/sip-session.png" height="392" width="640" /></a></div><br /><h3>SIP on mobiles</h3>Because SIP calls are ultimately routed using the registered IP address of the target peer, SIP is arguably not very well suited for mobile clients. In order to receive calls, clients need to remain online even when not actively used and keep a constant IP address for fairly long periods of time. Additionally, because public IP addresses are rarely assigned to mobile clients, establishing a direct media channel between two mobile peers can be challenging. The online presence problem is typically solved by using a complementary, low-overhead signalling mechanism such as Google Cloud Messaging (GCM) for Android in order to "wake up" the phone before it can receive a call. 
The requirement for a stable IP address is typically handled by shorter registration times and triggering registration each time the connectivity of the device changes (e.g., when going from LTE to WiFi). The lack of a public IP address is usually overcome by using various supporting methods, ranging from querying STUN servers to discover the external public IP address of a peer, to media proxy servers which bridge connections between heavily NAT-ed clients. By combining these and other techniques, a well-implemented SIP client can offer an alternative voice communication channel on a mobile phone, while integrating with the OS and keeping resource usage fairly low.<br /><br />Most Android devices have included a built-in SIP client as part of the framework since version 2.3, in the <a href="https://developer.android.com/reference/android/net/sip/package-summary.html"><code>android.net.sip</code> package</a>. However, the interface offered by this package is very high level, offers few options and does not really support extension or customization. Additionally, it hasn't received any new features since the initial release, and, most importantly, is optional and therefore unavailable on some devices. For this reason, most popular SIP clients for Android are implemented using third-party libraries such as <a href="http://www.pjsip.org/">PJSIP</a>, which support advanced SIP features and offer a more flexible interface.<br /><h2>Securing SIP</h2><div>As mentioned above, SIP is a signalling protocol. As such, it does not carry any voice data, only information related to setting up media channels. A SIP session includes information about each of the peers and any intermediate servers, including IP addresses, supported codecs, user agent strings, etc. 
Therefore, even if the media channel is encrypted and the contents of a voice call cannot be easily recovered, the information contained in the accompanying SIP messages (who called whom, where the call originated from, and when) can be equally important or damaging. Additionally, as we'll show in the next section, SIP can be used to negotiate keys for media channel encryption, in which case intercepting SIP messages can lead to recovering plaintext voice data.<br /><br />SIP is a transport-independent, text-based protocol, similar to HTTP, which is typically transmitted over UDP. When transmitted over an unencrypted channel, it can easily be intercepted using standard packet capture software or dumped to a log file at any of the intermediate nodes a SIP message traverses before reaching its destination. Multiple tools that can automatically correlate SIP messages with the associated media streams are readily available. This lack of inherent security features requires that SIP be secured by protecting the underlying transport channel.<br /><h3>VPN</h3>A straightforward method to secure SIP is to use a VPN to connect peers. Because most VPNs support encryption, both signalling and media streams tunneled through the VPN are automatically protected. As an added benefit, using a VPN can solve the NAT problem by offering directly routable private addresses to peers. Using a VPN works well for securing VoIP trunks between SIP servers which are linked using a persistent, low-latency and high-bandwidth connection. However, the overhead of a VPN connection on mobile devices can be too great to sustain a voice channel of even average quality. Additionally, using a VPN can result in highly variable latency (jitter), which can deteriorate voice quality even if jitter buffers are used. That said, many Android SIP clients can be set up to automatically use a VPN if available. 
The underlying VPN used can be anything supported on Android, for example the built-in IPSec VPN or a third-party VPN such as <a href="https://play.google.com/store/apps/details?id=de.blinkt.openvpn">OpenVPN</a>. However, even if a VPN provides tolerable voice quality, it typically only ensures an encrypted tunnel to a SIP proxy, and there are no guarantees that any SIP messages or voice streams that leave the proxy are encrypted. That said, a VPN can be a usable solution if all calls are terminated within a trusted private network (such as a corporate network).<br /><h3>Secure SIP</h3>Because SIP is transport-independent, it can be transmitted over any supported protocol, including a connection-oriented one such as TCP. When using TCP, a secure channel between SIP peers can be established with the help of the standard TLS protocol. Peer authentication is handled in the usual manner -- using PKI certificates, which allow for mutual authentication. However, because a SIP message typically traverses multiple servers until it reaches its final destination, there is no guarantee that the message will always be encrypted. In other words, SIP-over-TLS, or secure SIP, does not provide end-to-end security, but only hop-to-hop security.<br /><br />SIP-over-TLS is relatively well supported by all major SIP servers, including open source ones like <a href="http://www.asterisk.org/">Asterisk</a> and <a href="http://freeswitch.org/">FreeSWITCH</a>. For example, enabling SIP-over-TLS in Asterisk requires generating a key and certificate, configuring a few global TLS options, and finally requiring peers to use TLS when connecting to the server, as described <a href="https://wiki.asterisk.org/wiki/display/AST/Secure+Calling+Tutorial">here</a>. 
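For reference, the server side of the tutorial linked above boils down to a handful of <code>sip.conf</code> settings along these lines (the certificate path and the peer name are illustrative examples, not canonical values):

```ini
; sip.conf -- global chan_sip TLS settings (illustrative values)
[general]
tlsenable=yes                               ; listen for TLS connections
tlsbindaddr=0.0.0.0                         ; all interfaces, default port 5061
tlscertfile=/etc/asterisk/keys/asterisk.pem ; server key + certificate

[1001]                ; example peer, forced to connect over TLS
type=friend
host=dynamic
transport=tls
```

With this in place, clients register against port 5061 using the <code>sips</code>/TLS transport instead of plain UDP.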
However, Asterisk does not currently support client authentication for SIP clients (although there is some limited support for client authentication on trunk lines).<br /><br />Most popular Android clients support using the TLS transport for SIP, with some limitations. For example, the popular open source <a href="https://code.google.com/p/csipsimple/">CSipSimple</a> client supports TLS, but only version 1.0 (as well as SSL v2/v3). Additionally, it does not use Android's built-in certificate and key stores, but requires certificates to be saved on external storage in PEM format. Both limitations are due to the underlying PJSIP library, which is built using OpenSSL and requires keys and certificates to be stored as files in OpenSSL's native format. Finally, server identity is not verified by default, and the check needs to be explicitly enabled, as shown in the screenshot below.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-sQ9W-P7KYSQ/U86IUjbs5DI/AAAAAAAAVxQ/pYveG1eAobk/s1600/csipsimple-tls.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-sQ9W-P7KYSQ/U86IUjbs5DI/AAAAAAAAVxQ/pYveG1eAobk/s1600/csipsimple-tls.png" height="640" width="360" /></a></div><br /></div><div>Another popular VoIP client, <a href="https://play.google.com/store/apps/details?id=com.zoiper.android.app">Zoiper</a>, doesn't use a pre-initialized trust store at all, but requires peer certificates to be manually confirmed and cached for each SIP server. 
The commercial <a href="https://play.google.com/store/apps/details?id=com.bria.voip">Bria Android</a> client (by CounterPath) does use the system trust store and automatically verifies peer identity.<br /><br />When a secure SIP connection to a peer is established, VoIP clients indicate this on the call setup and call screens as shown in the CSipSimple screenshot below.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-kSdjVeOF2Us/U86KGoYLaLI/AAAAAAAAVxc/bPOibexwaI4/s1600/srtp.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-kSdjVeOF2Us/U86KGoYLaLI/AAAAAAAAVxc/bPOibexwaI4/s1600/srtp.png" height="640" width="360" /></a></div><br /><h3>SIP Alternatives</h3><div>While SIP is a widely adopted standard, it is also quite complex and supports many extensions that are not particularly useful in a mobile environment. Instead of SIP, the <a href="https://play.google.com/store/apps/details?id=org.thoughtcrime.redphone">RedPhone</a> secure VoIP client uses a simple custom <a href="https://github.com/WhisperSystems/RedPhone/wiki/Signaling-Protocol">signalling protocol</a> based on a RESTful HTTP API (with some additional verbs). The protocol is secured using TLS with server certificates issued by a private CA, which RedPhone clients implicitly trust.</div><h2>Securing the media channel</h2></div><div>As mentioned in our brief SIP introduction, the media channel between peers is usually implemented using the RTP protocol. Because the media channel is completely separate from SIP, even if all signalling is carried out over TLS, media streams are unprotected by default. RTP streams can be secured using the Secure RTP (<a href="http://tools.ietf.org/html/rfc3711">SRTP)</a>&nbsp;profile of the RTP protocol. 
SRTP is designed to provide confidentiality, message authentication, and replay protection to the underlying RTP streams, as well as to the supporting RTCP protocol. SRTP uses a symmetric cipher, typically AES in counter mode, to provide confidentiality and a message authentication code (MAC), typically HMAC-SHA1, to provide packet integrity. Replay protection is implemented by maintaining a replay list, against which received packets are checked in order to detect replay attempts.<br /><br />When a voice channel is encrypted using SRTP, the transmitted data looks like random noise (as any encrypted data should), as shown below.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-hx9VkyDX6-M/U88Xf3ESzgI/AAAAAAAAVxs/TaitKEuExQw/s1600/srtp-audio.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-hx9VkyDX6-M/U88Xf3ESzgI/AAAAAAAAVxs/TaitKEuExQw/s1600/srtp-audio.png" height="83" width="640" /></a></div><br /><br />SRTP defines a pseudo-random function (PRF) which is used to derive the session keys (used for encryption and authentication) from a master key and master salt. What SRTP does not specify is how the master key and salt should be obtained or exchanged between peers.<br /><h3>SDES</h3><div>SDP Security Descriptions for Media Streams (<a href="http://tools.ietf.org/html/rfc4568">SDES</a>) is an extension to the SDP protocol which adds a media attribute that can be used to negotiate a key and other cryptographic parameters for SRTP. The attribute is simply called <code>crypto</code> and can contain a crypto suite, key parameters, and, optionally, session parameters. 
A <code>crypto</code> attribute which includes a crypto suite and key parameters might look like this:<br /><br /><pre>a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:VozD8O2kcDFeclWMjBOwvVxN0Bbobh3I6/oxWYye</pre><br />Here <code>AES_CM_128_HMAC_SHA1_80</code> is a crypto suite which uses AES in counter mode with a 128-bit key for encryption and produces an 80-bit SRTP authentication tag using HMAC-SHA1. The Base64-encoded value that follows the crypto suite string contains the master key (128 bits)&nbsp;concatenated&nbsp;with the master salt (112 bits), which are used to derive SRTP session keys.<br /><br />SDES does not provide any protection or authentication of the cryptographic parameters it includes, and is therefore only secure when used in combination with SIP-over-TLS (or another secure signalling transport). SDES is widely supported by SIP servers, hardware SIP phones, and software clients. For example, in Asterisk enabling SDES and SRTP is as simple as adding <code>encryption=yes</code> to the peer definition. Most Android SIP clients support SDES and can automatically enable SRTP for the media channel when the <code>INVITE</code> SIP message includes the <code>crypto</code> attribute. For example, in the CSipSimple screenshot above the master key for SRTP was received via SDES.<br /><br />The main advantage of SDES is its simplicity. However, it requires that all intermediate servers are trusted, because they have access to the SDP data that includes the master key. Even though the SRTP media stream might be transmitted directly between two peers, SRTP effectively provides only hop-to-hop security, because compromising any of the intermediate SIP servers can result in recovering the master key and eventually the session keys. 
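To make this concrete: the Base64 blob in the <code>crypto</code> attribute is nothing more than the master key concatenated with the master salt, so any node that can read the SDP body can recover both with a few lines of code. A Python sketch using the example attribute above:

```python
import base64

# the example crypto attribute value shown above
attr = "AES_CM_128_HMAC_SHA1_80 inline:VozD8O2kcDFeclWMjBOwvVxN0Bbobh3I6/oxWYye"

crypto_suite, key_params = attr.split(" ", 1)
blob = base64.b64decode(key_params[len("inline:"):])

# for AES_CM_128_HMAC_SHA1_80: 128-bit master key || 112-bit master salt
master_key, master_salt = blob[:16], blob[16:30]

assert len(blob) == 30          # 16 + 14 bytes
print("master key: ", master_key.hex())
print("master salt:", master_salt.hex())
```

Note that no decryption is involved at all; the only protection the key material has is whatever protects the signalling channel.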
For example, if the private key of a SIP server involved in SDES key exchange is compromised, and the TLS session that carried the SIP messages did not use forward secrecy, the master key can easily be extracted from a packet capture using Wireshark, as shown below.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-kQqWsTW65Ng/U88ZPvfSkxI/AAAAAAAAVx4/Fq5gLo0XKMU/s1600/sdes-keys.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-kQqWsTW65Ng/U88ZPvfSkxI/AAAAAAAAVx4/Fq5gLo0XKMU/s1600/sdes-keys.png" height="122" width="640" /></a></div><br /><h3>ZRTP</h3></div><a href="http://tools.ietf.org/html/rfc6189">ZRTP</a> aims to provide end-to-end security for SRTP media streams by using the media channel to negotiate encryption keys directly between peers. It is essentially a key agreement protocol based on a Diffie-Hellman exchange with added Man-in-the-Middle (MiTM) protections. MiTM protection relies on so-called "short authentication strings" (SAS), which are derived from the session key and are displayed to each calling party. The parties need to confirm that they see the same SAS by reading it to each other over the phone. As an additional MiTM protection, ZRTP uses a form of key continuity, which mixes previously negotiated key material into the shared secret obtained using Diffie-Hellman when deriving session keys. Thus ZRTP does not require a secure signalling channel or a PKI in order to establish an SRTP session key or protect against MiTM attacks.<br /><br />On Android, ZRTP is supported both by VoIP clients for dedicated services such as <a href="https://play.google.com/store/apps/details?id=org.thoughtcrime.redphone">RedPhone</a> and <a href="https://play.google.com/store/apps/details?id=com.silentcircle.silentphone">Silent Phone</a>, and by general-purpose SIP clients like CSipSimple. 
On the server side, ZRTP is supported by both <a href="http://freeswitch.org/">FreeSWITCH</a>&nbsp;and&nbsp;<a href="http://www.kamailio.org/w/">Kamailio</a>&nbsp;(but not by Asterisk), so it is fairly easy to set up a test server and test ZRTP support on Android.<br /><br />ZRTP support in CSipSimple can be configured on a per-account basis by setting the ZRTP mode option to "Create ZRTP". Note, however, that ZRTP encryption is opportunistic and will fall back to cleartext communication if the remote peer does not support ZRTP. When the remote party does support ZRTP, CSipSimple shows an SAS confirmation dialog only the first time you connect to a particular peer and then displays the SAS and encryption scheme in the call dialog as shown below.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-HkWAAkfkgkw/U88n_JZKbwI/AAAAAAAAVyI/cXki9vXOezE/s1600/zrtp-e2e.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-HkWAAkfkgkw/U88n_JZKbwI/AAAAAAAAVyI/cXki9vXOezE/s1600/zrtp-e2e.png" height="640" width="360" /></a></div><br />In this case, the voice channel is direct and ZRTP/SRTP provide end-to-end security. However, the SIP proxy server can also establish a separate ZRTP/SRTP channel with each party and proxy the media streams. In this case, the intermediate server has access to unencrypted media streams and the provided security is only hop-to-hop, as when using SDES. For example, when FreeSWITCH establishes a separate media channel with two parties that use ZRTP, CSipSimple will display the following dialog, and the SAS values at both clients won't match because each client uses a separate session key. 
Unfortunately, this is not immediately apparent to end users, who may not be familiar with the meaning of the "EndAtMitM" string that signifies this.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-v73YN1gsru8/U88qakHydUI/AAAAAAAAVyU/dv1Ep3WrSHs/s1600/zrtp.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-v73YN1gsru8/U88qakHydUI/AAAAAAAAVyU/dv1Ep3WrSHs/s1600/zrtp.png" height="640" width="360" /></a></div><br />The ZRTP protocol supports a "trusted MiTM" mode which allows clients to verify the intermediate server after completing a key enrollment procedure which establishes a shared key between the client and a particular server. This feature is supported by FreeSWITCH, but not by common Android clients, including CSipSimple.<br /><h2>Summary</h2></div><div>Android supports the SIP protocol natively, but the provided APIs are restrictive and do not support advanced VoIP features such as media channel encryption. Most major SIP client apps support voice encryption using SRTP and either SDES or ZRTP for key negotiation. Popular open source SIP servers such as Asterisk and FreeSWITCH also support SRTP, SDES, and ZRTP and make it fairly easy to build a small-scale secure VoIP network that can be used by Android clients. 
Hopefully, the Android framework will be extended to include the features required to implement secure voice communication without using third party libraries, and integrate any such features with other security services provided by the platform.</div>Nikolay Elenkovhttps://plus.google.com/117221066931981967754noreply@blogger.com0tag:blogger.com,1999:blog-2873091912851440312.post-56780410595831428752014-05-08T23:00:00.001+09:002014-06-11T14:03:07.764+09:00Using KitKat verified bootAndroid 4.4 introduced a number of <a href="http://source.android.com/devices/tech/security/enhancements44.html" target="_blank">security enhancements</a>, most notably SELinux in enforcing mode. One security feature that initially got some press attention, because it was presumably aiming to 'end all custom firmware', but hasn't been described in much detail, is <a href="https://source.android.com/devices/tech/security/dm-verity.html" target="_blank">verified boot</a>. This post will briefly explain how verified boot works and then show how to configure and enable it on a Nexus device.<br /><div><h2>Verified boot with dm-verity</h2></div><div>Android's verified boot implementation is based on the <a href="https://code.google.com/p/cryptsetup/wiki/DMVerity" target="_blank">dm-verity</a> device-mapper block integrity checking target. <a href="https://www.sourceware.org/dm/" target="_blank">Device-mapper</a> is a Linux kernel framework that provides a generic way to implement virtual block devices. It is used to implement volume management (<a href="https://sourceware.org/lvm2/" target="_blank">LVM</a>), full-disk encryption (<a href="https://code.google.com/p/cryptsetup/wiki/DMCrypt" target="_blank">dm-crypt</a>), RAIDs and even distributed replicated storage (<a href="http://www.drbd.org/" target="_blank">DRBD</a>). Device-mapper works by essentially mapping a virtual block device to one or more physical block devices, optionally modifying transferred data in transit. 
For example, dm-crypt decrypts read physical blocks and encrypts written blocks before committing them to disk. Thus disk encryption is transparent to users of the virtual dm-crypt block device. Device-mapper targets can be stacked on top of each other, making it possible to implement complex data transformations.&nbsp;</div><div><br /></div><div>As we mentioned, dm-verity is a block integrity checking target. What this means is that it transparently verifies the integrity of each device block as it is being read from disk. If the block checks out, the read succeeds, and if not -- the read generates an I/O error as if the block was physically corrupt. Under the hood dm-verity is implemented using a pre-calculated hash tree which includes the hashes of all device blocks. The leaf nodes of the tree include hashes of physical device blocks, while intermediate nodes are hashes of their child nodes (hashes of hashes). The root node is called the <i>root hash</i> and is based on all hashes in lower levels (see figure below). Thus a change even in a single device block will result in a change of the root hash. Therefore in order to verify a hash tree we only need to verify its root hash. At runtime dm-verity calculates the hash of each block when it is read and verifies it using the pre-calculated hash tree. 
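The scheme is easy to illustrate in code. The sketch below builds a simplified hash tree over an in-memory buffer and shows that flipping a single bit anywhere changes the root hash; real dm-verity additionally salts each hash and pads hash blocks to the hash block size, so this demonstrates the principle rather than the exact on-disk format:

```python
import hashlib

BLOCK = 4096

def hash_tree_root(data: bytes) -> bytes:
    """Hash every 4K block, then repeatedly hash pairs of child hashes
    until a single root hash remains."""
    level = [hashlib.sha256(data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)]
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + 2])).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

image = bytes(BLOCK * 8)                 # a toy 8-block "partition"
root = hash_tree_root(image)

# flipping a single bit in any block changes the root hash
tampered = bytearray(image)
tampered[5000] ^= 1
assert hash_tree_root(bytes(tampered)) != root
```

Verifying a read at runtime is the same computation in reverse: the driver hashes the block just read and walks up the tree, comparing against the stored intermediate hashes, with the trusted root hash as the final check.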
Since reading data from a physical device is already a time-consuming operation, the latency added by hashing and verification is relatively low.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-WKt3LLuVHc4/U2uP2nCO8QI/AAAAAAAAUdM/EwLsbj8JVXA/s1600/dm-verity-hash-table.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-WKt3LLuVHc4/U2uP2nCO8QI/AAAAAAAAUdM/EwLsbj8JVXA/s1600/dm-verity-hash-table.png" height="321" width="640" /></a></div><span style="font-size: x-small;">[Image from Android dm-verity <a href="https://source.android.com/devices/tech/security/dm-verity.html">documentation</a>,&nbsp; licensed under Creative Commons Attribution 2.5]</span><br /><br /></div><div></div><div>Because dm-verity depends on a pre-calculated hash tree over all blocks of a device, the underlying device needs to be mounted read-only for verification to be possible. Most filesystems record mount times in their superblock or similar metadata, so even if no files are changed at runtime, block integrity checks will fail if the underlying block device is mounted read-write. This can be seen as a limitation, but it works well for devices or partitions that hold system files, which are only changed by OS updates. Any other change indicates either OS or disk corruption, or a malicious program that is trying to modify the OS or masquerade as a system file. dm-verity's read-only requirement also fits well with Android's security model, which only hosts application data on a read-write partition, and keeps OS files on the read-only <i>system</i> partition.</div><h2>Android implementation</h2><div>dm-verity was originally developed in order to implement verified boot in&nbsp;<a href="http://www.chromium.org/chromium-os/chromiumos-design-docs/verified-boot" target="_blank">Chrome OS</a>, and was integrated into the Linux kernel in version 3.4. 
It is enabled with the <code>CONFIG_DM_VERITY</code> kernel configuration item. Like Chrome OS, Android 4.4 also uses the kernel's dm-verity target, but the cryptographic verification of the root hash and mounting of verified partitions are implemented differently from Chrome OS.<br /><br />The RSA public key used for verification is embedded in the boot partition under the&nbsp;<i>verity_key</i> filename and is used to verify the dm-verity mapping table. This mapping table holds the location of the target device and the offset of the hash table, as well as the root hash and salt. The mapping table and its signature are part of the verity metablock which is written to disk directly after the last filesystem block of the target device. A partition is marked as verifiable by adding the <i>verify</i> flag to the Android-specific <i>fs_mgr flags</i> field of the device's <i>fstab</i> file. When Android's filesystem manager encounters the <i>verify</i> flag in <i>fstab</i>, it loads the verity metadata from the block device specified in <i>fstab</i> and verifies its signature using the <i>verity_key</i>. If the signature check succeeds, the filesystem manager parses the dm-verity mapping table and passes it to the Linux device-mapper, which uses the information contained in the mapping table in order to create a virtual dm-verity block device. This virtual block device is then mounted at the mount point specified in <i>fstab</i> in place of the corresponding physical device. As a result, all reads from the underlying physical device are transparently verified against the pre-generated hash tree. Modifying or adding files, or even remounting the partition in read-write mode, results in an integrity verification failure and an I/O error.<br /><br />We must note that as dm-verity is a kernel feature, in order for the integrity protection it provides to be effective, the kernel the device boots needs to be trusted. 
On Android, this means verifying the <i>boot</i> partition, which also includes the root filesystem RAM disk (initrd) and the verity public key. This process is device-specific and is typically implemented in the device bootloader, usually by using an unmodifiable verification key stored in hardware to verify the boot partition's signature.<br /><h2>Enabling verified boot</h2></div><div>The <a href="https://source.android.com/devices/tech/security/dm-verity.html" target="_blank">official documentation</a> describes the steps required to enable verified boot on Android, but lacks concrete information about the actual tools and commands that are needed. In this section we show the commands required to create and sign a dm-verity hash table and demonstrate how to configure an Android device to use it. Here is a summary of the required steps:&nbsp;</div><div><ol><li>Generate a hash tree for the <i>system</i> partition.</li><li>Build a dm-verity table for that hash tree.</li><li>Sign that dm-verity table to produce a table signature.</li><li>Bundle the table signature and dm-verity table into verity metadata.</li><li>Write the verity metadata and the hash tree to the <i>system</i> partition.</li><li>Enable verified boot in the device's <i>fstab</i> file.</li></ol><div>As we mentioned earlier, dm-verity can only be used with a device or partition that is mounted read-only at runtime, such as Android's <i>system</i> partition. While verified boot can be applied to other read-only partitions, such as those hosting proprietary firmware blobs, this example uses the <i>system</i> partition, as protecting OS files results in considerable device security benefits.&nbsp;</div></div><div><br /></div><div>A dm-verity hash tree is generated with the&nbsp;dedicated <i>veritysetup</i> program. <i>veritysetup</i>&nbsp;can operate directly on block devices or use filesystem images and write the hash table to a file. 
It is supposed to produce platform-independent output, but hash tables produced on desktop Linux didn't quite agree with Android, so for this example we'll generate the hash tree directly on the device. To do this we first need to compile <i>veritysetup</i> for Android. A project that generates a statically linked <i>veritysetup</i> binary is provided on <a href="https://github.com/nelenkov/cryptsetup" target="_blank">Github</a>. It uses the OpenSSL backend for hash calculations and has only been slightly modified (in a not too portable way...), to allow for the different size of the <code>off_t</code> data type, which is 32-bit in current versions of Android's bionic library.&nbsp;</div><div><br /></div><div>In order to add the hash tree directly to the system partition, we first need to make sure that there is enough space to hold the hash tree and the verity metadata block (32k) after the last filesystem block. As most devices typically use the whole <i>system</i> partition, you may need to modify the <code>BOARD_SYSTEMIMAGE_PARTITION_SIZE</code> value in your device's <code><i>BoardConfig.mk</i></code> to allow for storing verity data. After you have adjusted the size of the <i>system</i> partition, transfer the <i>veritysetup</i> binary to the <i>cache</i> or <i>data</i> partitions of the device, and boot a recovery that allows root shell access over ADB. 
To generate and write the hash tree to the device we use the <i>veritysetup format</i> command as shown below.<br /><br /><pre># veritysetup --debug --hash-offset 838893568 --data-blocks 204800 format \<br />/dev/block/mmcblk0p21 /dev/block/mmcblk0p21<br />...<br /># Updating VERITY header of size 512 on device /dev/block/mmcblk0p21, offset 838893568.<br />VERITY header information for /dev/block/mmcblk0p21<br />UUID: 0dd970aa-3150-4c68-abcd-0b8286e6000<br />Hash type: 1<br />Data blocks: 204800<br />Data block size: 4096<br />Hash block size: 4096<br />Hash algorithm: sha256<br />Salt: 1f951588516c7e3eec3ba10796aa17935c0c917475f8992353ef2ba5c3f47bcb<br />Root hash: 5f061f591b51bf541ab9d89652ec543ba253f2ed9c8521ac61f1208267c3bfb1<br /></pre><br />This example was executed on a Nexus 4; make sure you use the correct block device for your phone instead of <i>/dev/block/mmcblk0p21</i>. The <i>--hash-offset</i> parameter is needed because we are writing the hash tree to the same device that holds filesystem data. It is specified in bytes (not blocks) and needs to point to a location after the verity metadata block. Adjust according to your filesystem size so that hash_offset &gt; filesystem_size + 32k. The next parameter, <i>--data-blocks</i>, specifies the number of blocks used by the filesystem. The default block size is 4096, but you can specify a different size using the <i>--data-block-size</i> parameter. This value needs to match the size allocated to the filesystem with&nbsp;<code>BOARD_SYSTEMIMAGE_PARTITION_SIZE</code>. If the command succeeds it will output the calculated root hash and the salt value used, as shown above. Everything but the root hash is saved in the superblock (first block) of the hash table. Make sure you save the root hash, as it is required to complete the verity setup.<br /><br />Once you have the root hash and salt, you can generate and sign the dm-verity table. 
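Incidentally, the <i>--hash-offset</i> value used above is not arbitrary: it is exactly the filesystem size (204800 blocks of 4096 bytes each) plus the 32K reserved for the verity metadata block. The arithmetic is easy to check:

```python
DATA_BLOCKS = 204800          # --data-blocks passed to veritysetup
BLOCK_SIZE = 4096             # default data block size
METADATA_SIZE = 32 * 1024     # verity metadata block, zero-padded to 32K

hash_offset = DATA_BLOCKS * BLOCK_SIZE + METADATA_SIZE
print(hash_offset)            # 838893568, the value passed via --hash-offset
```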
The table is a single line that contains the name of the block device, block sizes, offsets, salt and root hash values. You can use the <i><a href="https://github.com/nelenkov/verity/blob/master/gentable.py">gentable.py</a></i>&nbsp;script (edit constant values accordingly first) to generate it or write it manually based on the output of <i>veritysetup</i>. See dm-verity's <a href="https://code.google.com/p/cryptsetup/wiki/DMVerity" target="_blank">documentation</a> for details about the format. For our example it looks like this (single line, split for readability):<br /><br /><pre>1 /dev/block/mmcblk0p21 /dev/block/mmcblk0p21 4096 4096 204800 204809 sha256 \<br />1f951588516c7e3eec3ba10796aa17935c0c917475f8992353ef2ba5c3f47bcb \<br />5f061f591b51bf541ab9d89652ec543ba253f2ed9c8521ac61f1208267c3bfb1<br /></pre><br />Next, generate a 2048-bit RSA key and sign the table using OpenSSL. You can use the command below or the <i><a href="https://github.com/nelenkov/verity/blob/master/sign.sh">sign.sh</a></i> script on Github.<br /><br /><pre>$ openssl dgst -sha1 -sign verity-key.pem -out table.sig table.bin<br /></pre><br />Once you have a signature you can generate the verity metadata block, which includes a magic number (<code>0xb001b001</code>) and the metadata format version, followed by the RSA PKCS#1.5 signature blob and table string, padded with zeros to 32k. You can generate the metadata block with the <i><a href="https://github.com/nelenkov/verity/blob/master/mkverity.py">mkverity.py</a></i> script by passing the signature and table files like this:<br /><br /><pre>$ ./mkverity.py table.sig table.bin verity.bin<br /></pre><br />Next, write the generated <i>verity.bin</i> file to the <i>system</i> partition using <i>dd</i>&nbsp; or a similar tool, right after the last filesystem block and before the start of the verity hash table. 
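For illustration, the metadata layout described above can be approximated in a few lines of Python. This is a hedged sketch, not a substitute for <i>mkverity.py</i>: the exact field order and the 256-byte slot for the 2048-bit RSA signature are assumptions based on the description above, so use the actual script for real devices:

```python
import struct

VERITY_MAGIC = 0xb001b001     # magic number checked by fs_mgr
VERITY_VERSION = 0            # metadata format version
METADATA_SIZE = 32 * 1024     # the block is zero-padded to 32K

def build_verity_metadata(signature: bytes, table: bytes) -> bytes:
    """Pack magic, version, signature and table into a 32K metadata block
    (assumed layout -- see lead-in)."""
    block = struct.pack("<II", VERITY_MAGIC, VERITY_VERSION)
    block += signature.ljust(256, b"\x00")       # RSA PKCS#1.5 signature blob
    block += struct.pack("<I", len(table))       # table length
    block += table                               # the dm-verity table string
    return block.ljust(METADATA_SIZE, b"\x00")   # zero-pad to 32K

# toy inputs: a dummy 256-byte signature and a placeholder table string
meta = build_verity_metadata(b"\x01" * 256, b"1 /dev/block/mmcblk0p21 ... sha256 ...")
assert len(meta) == METADATA_SIZE
```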
Using the same number of data blocks passed to <i>veritysetup</i>, the needed command (which also needs to be executed in recovery) becomes:<br /><br /><pre># dd if=verity.bin of=/dev/block/mmcblk0p21 bs=4096 seek=204800<br /></pre><br /></div><div>Finally, you can check that the partition is properly formatted using the <i>veritysetup verify</i> command as shown below, where the last parameter is the root hash:<br /><br /><pre># veritysetup --debug --hash-offset 838893568 --data-blocks 204800 verify \<br />/dev/block/mmcblk0p21 /dev/block/mmcblk0p21 \<br />5f061f591b51bf541ab9d89652ec543ba253f2ed9c8521ac61f1208267c3bfb1<br /></pre><br />If verification succeeds, reboot the device and verify that the device boots without errors. If it does, you can proceed to the next step: add the verification key to the boot image and enable automatic integrity verification.<br /><br />The RSA public key used for&nbsp;verification&nbsp;needs to be in mincrypt format (also used by the stock recovery when verifying OTA file signatures), which is a serialization of mincrypt's <code>RSAPublicKey</code> structure. The interesting thing about this structure is that it doesn't simply include the modulus and public exponent values, but contains pre-computed values used by mincrypt's RSA implementation (based on <a href="https://en.wikipedia.org/wiki/Montgomery_reduction" target="_blank">Montgomery reduction</a>). Therefore converting an OpenSSL RSA public key to mincrypt format requires some modular operations and is not simply a binary format conversion. You can convert the PEM key using the <i><a href="https://github.com/nelenkov/verity/blob/master/pem2mincrypt.c">pem2mincrypt</a></i> tool (conversion code shamelessly stolen from secure <i>adb</i>'s&nbsp;<a href="https://android.googlesource.com/platform/system/core.git/+/android-4.4.2_r1/adb/adb_auth_host.c" target="_blank">implementation</a>). 
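To give an idea of what those modular operations look like, the sketch below derives the two Montgomery parameters that a mincrypt-style implementation precomputes. The function name is illustrative (this is what <i>pem2mincrypt</i> does in C, not a drop-in replacement), and 64 32-bit words correspond to a 2048-bit modulus:

```python
def mincrypt_precompute(n: int, words: int = 64):
    """Derive the Montgomery values precomputed for an odd RSA modulus n.

    words is the modulus size in 32-bit words (64 for a 2048-bit key).
    Illustrative only -- the real conversion also serializes n word by word.
    """
    r = 1 << (32 * words)                        # Montgomery radix R = 2^(32*words)
    n0inv = (-pow(n, -1, 1 << 32)) % (1 << 32)   # -1/n mod 2^32, used during reduction
    rr = pow(r, 2, n)                            # R^2 mod n, converts into Montgomery form
    return n0inv, rr

# sanity check with a small odd modulus: n * n0inv == -1 (mod 2^32)
n = 2**64 + 13
n0inv, rr = mincrypt_precompute(n)
assert (n * n0inv) % (1 << 32) == (1 << 32) - 1
```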
Once you have converted the key, include it in the root of your boot image under the <i>verity_key</i>&nbsp;filename. The last step is to modify the device's <i>fstab</i> file in order to enable block integrity verification for the <i>system</i> partition. This is simply a matter of adding the <i>verify</i> flag, as shown below:<br /><br /><pre>/dev/block/platform/msm_sdcc.1/by-name/system /system ext4 ro,barrier=1 wait,verify<br /></pre><br />Next, verify that your kernel configuration enables <code>CONFIG_DM_VERITY</code>, enable it if needed, and build your boot image. Once you have <i>boot.img</i>, you can try booting the device with it using <i>fastboot boot boot.img</i> (without flashing it). If the hash table and verity metadata block have been generated and written correctly, the device should boot, and <i>/system</i> should be a mount of the automatically created device-mapper virtual device, as shown below. If the boot is successful, you can permanently flash the boot image to the device.<br /><br /><pre># mount|grep system<br />/dev/block/dm-0 /system ext4 ro,seclabel,relatime,data=ordered 0 0<br /></pre><br />Now any modifications to the <i>system</i> partition will result in read errors when reading the corresponding file(s). Unfortunately, system modifications by file-based OTA updates, which modify file blocks without updating verity metadata, will also invalidate the hash tree. As mentioned in the official documentation, in order to be compatible with dm-verity verified boot, OTA updates should also operate at the block level, ensuring that both file blocks and the hash tree and metadata are updated. This requires changing the current OTA update infrastructure, which is probably one of the reasons verified boot hasn't been deployed to production devices yet.<br /><h2>Summary</h2>Android includes a verified boot implementation based on the dm-verity device-mapper target since version 4.4. 
dm-verity is enabled by adding a hash table and a signed metadata block to the <i>system</i> partition and specifying the <i>verify</i> flag in the device's <i>fstab</i> file. At boot time Android verifies the metadata signature and uses the included device-mapper table to create and mount a virtual block device at <i>/system</i>. As a result, all reads from <i>/system</i> are verified against the dm-verity hash tree, and any modification to the system partition results in I/O errors.&nbsp;</div><div><br /></div>Nikolay Elenkovhttps://plus.google.com/117221066931981967754noreply@blogger.com118tag:blogger.com,1999:blog-2873091912851440312.post-84253611258424283222014-04-14T00:59:00.002+09:002014-10-24T11:59:46.462+09:00Android Security InternalsIf you have been following this blog for a while, you might have noticed that there haven't been many new posts in the past few months. There are two reasons for this: me being lazy and me working on a book. The book is progressing nicely, but is still a long way from being finished, so updates will probably continue to be spotty for a while.<br /><h2>What is this all about?</h2>The book is a continuation of my quest to understand how Android works and, as you may have guessed already, is called "Android Security Internals". That's a somewhat ambitious title, but it reflects my goal -- to present an overview of Android's security architecture and to show how its key components are implemented and interoperate. Meeting this goal requires starting with the most fundamental concepts such as Binder IPC, sandboxing, file ownership and permissions, and looking into key system services that bind the OS together, such as the <code>PackageManagerService</code> and <code>ActivityManagerService</code>. After (hopefully) explaining the fundamentals in sufficient detail, the book goes on to discuss higher-level features such as credential storage, account management and device policy support. 
Security features added in recent versions, for example SELinux and verified boot, are also introduced. While the book does cover topics traditionally associated with 'rooting', such as unlocking the bootloader, recovery images and superuser apps, these are not its main focus. Finding and developing exploits in order to gain root access is not discussed at all, so if you are interested in these topics you might want to pick up the recently released <a href="http://as.wiley.com/WileyCDA/WileyTitle/productCd-111860864X.html" target="_blank">Android Hacker's Handbook</a>, which covers them very well and in ample detail. Finally, almost all of the material is based on analysis of and experimentation with AOSP source code, and thus almost no vendor extensions or non-open source features are covered.<br /><h2>The book</h2><div>The book is being produced by <a href="http://www.nostarch.com/">No Starch Press</a>, who have a long history of publishing great technical books, and have lately been introducing some truly beautiful Lego books as well. On top of that, they are a real pleasure to work with, so do call them first if you ever consider writing a book.&nbsp;</div><div><br /></div><div>The book is scheduled for September 2014, hopefully I'll be able to finish it on time to meet that date. If that sounds like a long wait, there is good news: the book is <a href="http://www.nostarch.com/androidsecurity">available</a> via No Starch's <a href="http://www.nostarch.com/aboutearlyaccess">Early Access </a>program and you can read the first couple of chapters right now. New chapters will be made available once they are ready. 
While there is still a lot of work to be done, the book does already have a cover, and a great one at that:&nbsp;</div><div class="separator" style="clear: both; text-align: center;"></div><div><div class="separator" style="clear: both; text-align: center;"><a href="http://www.nostarch.com/androidsecurity" imageanchor="1" style="margin-left: 1em; margin-right: 1em;" target="_new"><img border="0" src="http://1.bp.blogspot.com/-QKXxv4Y032k/U0q066wgn-I/AAAAAAAAUQ8/G2pg8ILZzQQ/s1600/asi-cover.png" height="400" width="302" /></a></div><br />While I can't discuss progress in detail, the better part of the book is done and is in various stages of editing and review. Here is the current table of contents, subject to change, of course, but probably nothing too drastic.<br /><br />Update 2014/10/24: The book has now been <a href="http://www.nostarch.com/androidsecurity">released</a>.<br /><h2>Table of contents</h2>Chapter 1: Android's Security Model<br />Chapter 2: Permissions<br />Chapter 3: Package Management<br />Chapter 4: User Management<br />Chapter 5: Cryptographic Providers<br />Chapter 6: Network Security and PKI<br />Chapter 7: Credential Storage<br />Chapter 8: Online Account Management<br />Chapter 9: Enterprise Security<br />Chapter 10: Device Security<br />Chapter 11: NFC and Secure Elements<br />Chapter 12: SELinux<br />Chapter 13: Device Updates and Root Access<br /><br />If you have found this blog interesting or helpful at one time or another, hopefully this book is for you. While some of the material is based on previous blog posts, it has been largely re-written and extended, and most importantly professionally edited (thanks Bill!) and reviewed (thanks Kenny!), so it should be both much easier to read and more accurate. 
Most of the material is completely new and written exclusively for the book.<br /><br />That's it for now; major updates will be posted here, more minor ones via my <a href="https://plus.google.com/+NikolayElenkov/posts">Google+ account</a>. Finally, do follow No Starch Press on <a href="http://www.twitter.com/nostarch">Twitter</a>&nbsp;or subscribe to their <a href="http://www.nostarch.com/mailchimp/subscribe">newsletter</a>&nbsp;to get updates about upcoming books and Early Access releases.<br /><br /></div>Nikolay Elenkovhttps://plus.google.com/117221066931981967754noreply@blogger.com6tag:blogger.com,1999:blog-2873091912851440312.post-24417184318028881272014-03-03T01:40:00.000+09:002014-05-22T23:17:54.174+09:00Unlocking Android devices using an OTP via NFCOur <a href="http://nelenkov.blogspot.com/2013/10/signing-email-with-nfc-smart-card.html">last post</a> showed how to use a contactless smart card to sign email on Android. While storing cryptographic keys used with PKI or PGP is one of the main use cases for smart cards, other usages are gaining popularity as well. Additionally, the traditional 'card' format has evolved and there are different devices that embed a secure element (basically, the smart card chip) and make its functionality available without requiring a bulky card reader. One popular and affordable device that embeds a secure element is the <a href="http://www.yubico.com/products/yubikey-hardware/yubikey-neo/">YubiKey Neo</a> from <a href="http://www.yubico.com/">Yubico</a>. In this post we'll show how you can use the YubiKey Neo to unlock your Android device over NFC.<br /><h2>One-time passwords</h2><div>Before we discuss how the YubiKey Neo can be used to unlock an Android device, let's say a few words about OTPs. As the name implies, one-time passwords are passwords that are valid for a single login or transaction. 
OTPs can be generated based on an algorithm that derives each next password from the previous one, or by using some sort of challenge-response mechanism. Another approach is to use a shared secret, called a <i>seed</i>, along with some dynamic value such as a counter or a value derived from the current time. While OTP generation based on a shared seed is usually fairly easy to implement, the dynamic values at the OTP token (called a <i>prover</i>) and the <i>verifier</i> (authentication server) can get out of sync and validation algorithms need to account for that.&nbsp;</div><div><br /></div><div>Many OTP schemes are proprietary and incompatible with each other. Fortunately, widely adopted open standards exist as well, most notably the&nbsp;HMAC-based One Time Password (<a href="https://www.ietf.org/rfc/rfc4226.txt">HOTP</a>) algorithm developed by the&nbsp;Initiative for Open Authentication (OATH). HOTP uses a secret key and a counter as input to the HMAC-SHA1 message authentication code (MAC) algorithm, truncates the calculated MAC value and converts it to a human-readable code, usually a 6-digit number. A later variation is the&nbsp;<a href="http://tools.ietf.org/html/rfc6238">TOTP</a> (Time-Based One-Time Password) algorithm, which replaces the counter with a value derived from the current Unix time (i.e., the number of seconds since midnight of January 1, 1970 UTC). The derived value T is calculated using an initial time T0 and a step X as follows: <code>T = (Current Unix time - T0) / X</code>. Each generated OTP is valid for X seconds, by default 30. TOTP is used by Google Authenticator and the Yubico OATH applet which we will use in our demo.<br /><h2>YubiKey Neo</h2><div>The original YubiKey (now called&nbsp;<a href="http://www.yubico.com/products/yubikey-hardware/yubikey/">YubiKey Standard</a>) was an innovative token for two-factor authentication (2FA). 
It has a USB interface and presents itself as a USB keyboard when plugged in, and thus does not require any special drivers to use. It has a single capacitive button that outputs an OTP when pressed. Because the device functions as a keyboard, the OTP can be automatically entered in any text field of a desktop or Web application, or even a terminal window, requiring very little modification to existing applications. The OTP is generated using a 128-bit key stored inside the device, either using Yubico's OTP algorithm, or the&nbsp;<span style="white-space: pre-wrap;">HOTP algorithm.</span></div><div><br /></div><div>The YubiKey Neo retains the form factor of the original YubiKey, but adds an important new component: a secure element (SE), accessible both via USB and over NFC. The SE offers a JavaCard 3.0/JCOP 2.4.2-compatible execution environment, an ISO 14443A NFC interface, Mifare Classic emulation and an NDEF applet for interaction with YubiKey functionality. When plugged into a USB port, depending on its configuration, the Neo presents itself either as a keyboard (HID device), a standard&nbsp;CCID smart card reader, or both when in composite mode. As the SE is fully compatible with JavaCard and GlobalPlatform standards, additional applets can be loaded with standard tools. Recent batches ship with pre-installed OATH, PGP and&nbsp;<a href="http://en.wikipedia.org/wiki/Personal_Identity_Verification">PIV</a>&nbsp;applets, and the code for both the OATH and PGP applets is&nbsp;<a href="http://opensource.yubico.com/">available</a>. Yubico provides a&nbsp;<a href="https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2">Google Authenticator</a>&nbsp;compatible Android application,&nbsp;<a href="https://play.google.com/store/apps/details?id=com.yubico.yubioath">Yubico Authenticator</a>, that allows you to store the keys used to generate OTPs on the Neo. 
This ensures that neither attackers who have physical access to your Android device nor applications with root access can extract your OTP keys.&nbsp;</div></div><h2><span style="font-size: 1em;">The Android lockscreen</span></h2><div>Before we can figure out how to unlock an Android device using an OTP, we need to understand how the lockscreen works. The lockscreen is formally known as the <i>keyguard</i>&nbsp;and is implemented much like regular Android applications: with widgets laid out on a window. What makes it special is that its window lives on a very high window layer that other applications cannot draw on top of or get control over. Additionally, the keyguard intercepts the normal navigation buttons, making it impossible to bypass and thus 'locking' the device. The keyguard window layer is not the highest layer however: dialogs originating from the keyguard itself, and the status bar, can be drawn over the keyguard. You can see a list of the currently shown windows using the Hierarchy Viewer tool available with the ADT. When the screen is locked the active window is the Keyguard window, as shown in the screenshot below.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-pqbykYR2swc/UxH-4IyqAsI/AAAAAAAATkE/MgbJ3HKOMi0/s1600/window-layers.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-pqbykYR2swc/UxH-4IyqAsI/AAAAAAAATkE/MgbJ3HKOMi0/s1600/window-layers.png" height="377" width="640" /></a></div><div>Before Android 4.0, it was possible for third-party applications to show windows in the keyguard layer, and this approach was often used in order to intercept the Home button and implement 'kiosk' style applications. 
Since Android 4.0 however, adding windows to the keyguard layer requires the <code>INTERNAL_SYSTEM_WINDOW</code> signature permission, which is available only to system applications.<br /><br />For a long time the keyguard was an implementation detail of Android's window system and was not separated into a dedicated component. With the introduction of lockscreen widgets, dreams (i.e., screensavers) and support for multiple users, the keyguard gained quite a lot of functionality and was eventually extracted into a dedicated system application, Keyguard, in Android 4.4. The Keyguard app lives in the <i>com.android.systemui</i> process, along with the core Android UI implementation. Most importantly for our purposes, the Keyguard app includes a service with a remote interface, <code>IKeyguardService</code>. This service allows its clients to check the current state of the keyguard, set the current user, launch the camera and hide or disable the keyguard. As can be expected, operations that change the state of the keyguard are protected by a system signature permission, <code>CONTROL_KEYGUARD</code>.<br /><h2>Unlocking the keyguard</h2></div><div>Stock Android provides three main methods to unlock the keyguard: by drawing a pattern, by entering a PIN or password, or by using image recognition, aka Face Unlock, also referred to as 'weak biometric'. The pattern, PIN and passphrase methods are essentially equivalent: they compare the hash of the user input to a hash stored on the device and unlock it if the values match. The hash for the pattern lock is stored in <code>/data/system/gesture.key</code> as an unsalted SHA-1 value. The hash of the PIN/password is a combination of the SHA-1 and MD5 hash values of the user input, salted with a random value. It is stored in the <code>/data/system/password.key</code> file. The Face Unlock implementation is proprietary and no details are available about the format of the stored data. 
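The pattern hash format is easy to reproduce. In the following sketch, the cell numbering and the unsalted SHA-1 digest follow the AOSP <code>LockPatternUtils</code> convention (one byte per touched cell, numbered 0-8 left to right, top to bottom); this is an illustration in plain Java, not the platform code itself:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class GestureHash {
    // Serialize the pattern as one byte per touched cell
    // (index = row * 3 + column) and hash it with plain, unsalted SHA-1 --
    // the gesture.key format described above.
    public static byte[] patternToHash(int[] cells) {
        byte[] raw = new byte[cells.length];
        for (int i = 0; i < cells.length; i++) {
            raw[i] = (byte) cells[i];
        }
        try {
            return MessageDigest.getInstance("SHA-1").digest(raw);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because there are only nine cells and no salt, the entire pattern space can be hashed and compared offline in well under a second, which is why protecting <code>gesture.key</code> relies entirely on it being readable only by the system.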
Normally not visible to the user are the Google account password unlock method (used when the device is locked after too many incorrect unlock attempts) and the unlock method that uses the PIN or PUK of the SIM card. The Google unlock method uses the proprietary Google Login Service to verify the entered password, and the PIN/PUK method simply sends commands to the SIM card via the RIL interface.<br /><br />As you can see, all unlock methods are based on a fixed PIN, password or pattern. Except in the case of a long and complex password, which is rather hard to input on a touchscreen keyboard, all unlock secrets usually have low entropy and can easily be guessed or brute-forced. Android partially protects against such attacks by permanently locking the device after too many unsuccessful attempts. Additionally, security policies introduced by a <a href="http://developer.android.com/guide/topics/admin/device-admin.html">device administrator</a> application can enforce PIN/password complexity rules and even wipe the device after too many unsuccessful attempts.<br /><br />One approach to improve the security of the keyguard is to use an OTP in order to unlock the device. While this is not directly supported by Android, it can be implemented on production devices by using a device administrator application that periodically changes the unlock PIN or password using the <a href="http://developer.android.com/reference/android/app/admin/DevicePolicyManager.html"><code>DevicePolicyManager</code></a> API. One such application is <a href="https://play.google.com/store/apps/details?id=com.cunninglogic.dynamicpin">TimePIN</a>&nbsp;(which partly inspired this post), which sets the unlock password based on the current time. TimePIN allows you to set different modifiers that are applied when calculating the current PIN. Modifiers can be stacked, so the transformation can become complex, but still easy to remember. 
A secret component, called an offset, can be mixed in for added security.<br /><h2>Unlocking via NFC</h2></div><div>Authentication methods are usually based on something you know, something only you have, or a combination of the two (two-factor authentication, 2FA). The pattern and PIN/password unlock methods are based on something you know, and Face Unlock can be thought of as based on something you have (your face or a really good picture). However, Face Unlock allows for a fallback to PIN or password when it cannot detect a face, so it can still be unlocked by something you know.<br /><br />An alternative way to use something you have to unlock the device is to use an NFC tag. This is not supported by stock Android, but is implemented in some devices, for example the Moto X (via the <a href="http://www.motorola.com/us/motorola-skip-moto-x/Motorola-Skip-for-Moto-X/motorola-skip-moto-x.html">Motorola Skip</a> accessory). While the Motorola Skip is a proprietary solution and no implementation details are available, apps that offer similar functionality such as&nbsp;<a href="http://forum.xda-developers.com/showthread.php?t=2478163">NFC LockScreenOff Enabler</a>&nbsp;compare the UID of the read tag to a list of stored values and unlock the device if the UID is in the list. While this is fairly secure, as the UID of most NFC tags is read-only, cards that allow for UID modification are available, and a programmable NFC card emulator can emit any UID.<br /><br />One problem with implementing NFC unlock is that by default Android does not scan for NFC devices when the screen is turned off or locked. This is intended as a security measure, because if the device reads NFC tags while the screen is off, vulnerabilities can be triggered without physical access to the device or the owner noticing, as has been demonstrated. 
NFC LockScreenOff Enabler and similar applications can get around this limitation when running on rooted devices by installing hooks into system methods, thus allowing the NFC system service configuration to be modified at runtime.<br /><h2>Unlocking using the YubiKey Neo</h2></div><div>As we mentioned in the 'YubiKey Neo' section, Yubico provides both a JavaCard <a href="http://opensource.yubico.com/ykneo-oath/">applet</a>&nbsp;and a companion <a href="http://opensource.yubico.com/yubioath-android/">Android app</a> that together implement TOTP compatible with Google Authenticator. The&nbsp;Yubico Authenticator app is initialized just like its Google counterpart -- either manually or by scanning a QR code. The difference is that the Yubico Authenticator saves the OTP seed on the device only temporarily, and once it's written to the Neo, deletes it. To display the current OTP, you need to touch the Neo while the app is active, and touch it again after the OTPs expire. If you don't want to enter keys and accounts manually you can use a QR code generator such as <a href="http://zxing.appspot.com/generator/">the one</a> provided by the ZXing project to generate a URI that includes an account name and seed. The URI format is available on the Google Authenticator <a href="https://code.google.com/p/google-authenticator/wiki/KeyUriFormat">Wiki</a>.<br /><br />While unlocking the keyguard certainly doesn't need the full functionality of the Google Authenticator app, displaying the current OTP is useful for debugging and initializing with a QR code is quite convenient. That's why for our demo we will simply modify the Authenticator app slightly, instead of writing another OTP source. 
As we need to provide the OTP to the system NFC service, which runs in a different process, we add a remote AIDL service with a single method that returns the current OTP:<br /><br /><pre>interface IRemoteOtpSource {<br /><br />    String getNextCode(String accountName);<br /><br />}<br /></pre><br />The NFC service can then bind to the OTP service that implements this interface and retrieve the current OTP. Of course, providing the OTP to everyone is not a great idea, so we protect the service with a signature permission. Because we sign our RemoteAuthenticator app with the platform certificate, the permission can only be granted to apps signed with the same key, i.e., system apps:<br /><br /><pre>&lt;manifest ...&gt;<br />...<br />    &lt;permission <br />        android:name="com.google.android.apps.remoteauthenticator.GET_OTP_CODE" <br />        android:protectionLevel="signature"/&gt;<br />...<br />    &lt;application ...&gt;<br />...<br />        &lt;service android:enabled="true" android:exported="true" <br />            android:name="com.google.android.apps.authenticator.OtpService" <br />            android:permission="com.google.android.apps.remoteauthenticator.GET_OTP_CODE"&gt;<br />        &lt;/service&gt;<br />    &lt;/application&gt;<br /><br />&lt;/manifest&gt;<br /></pre><br />The full source code of the RemoteAuthenticator app is available on <a href="https://github.com/nelenkov/RemoteAuthenticator">Github</a>. Once installed, the app needs to be initialized with the same key and account name as the OATH applet on the YubiKey Neo. Our sample NFC unlock implementation looks for an account named 'lockscreen' when it detects the OATH applet. 
The interface of the modified app is identical to that of Google Authenticator:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-FYPE6qAYLj4/UxNYQspy0rI/AAAAAAAATkY/LDwsXxMaPxU/s1600/remote-authenticator.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-FYPE6qAYLj4/UxNYQspy0rI/AAAAAAAATkY/LDwsXxMaPxU/s1600/remote-authenticator.png" height="640" width="360" /></a></div><br /><br />Before we can use an NFC tag to unlock the keyguard, we need to make sure the system NFC service can detect NFC tags even when the keyguard is locked. As we mentioned earlier, that is not the case in stock Android, so we change the default polling mode from <code>SCREEN_STATE_ON_UNLOCKED</code> to <code>SCREEN_STATE_ON_LOCKED</code> in <code>NfcService.java</code>:<br /><br /><pre>package com.android.nfc;<br />...<br /><br />public class NfcService implements DeviceHostListener {<br />...<br /> /** minimum screen state that enables NFC polling (discovery) */<br /> static final int POLLING_MODE = SCREEN_STATE_ON_LOCKED;<br />...<br /><br />}<br /></pre><br />With this done, we can hook into the NFC service tag dispatch sequence, and, borrowing&nbsp;<a href="https://github.com/nelenkov/android_packages_apps_Nfc/blob/otp-unlock/src/com/yubico/yubioath/model/YubiKeyNeo.java">some code</a> from the Yubico Authenticator app, check whether the scanned tag includes an OATH applet. If so, we read out the current OTP and compare it with the OTP returned by the RemoteAuthenticator app installed on the device. If the OTPs match, we dismiss the keyguard and let the dispatch continue. If the tag doesn't contain an OTP applet, or the OTPs don't match, we do not dispatch the tag. To unlock the keyguard we simply call the <code>keyguardDone()</code> method of the system <code>KeyguardService</code>. 
The unlock process might look something like this:<br /><div class="separator" style="clear: both; text-align: center;"><br /><object width="320" height="266" class="BLOGGER-youtube-video" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0" data-thumbnail-src="https://i1.ytimg.com/vi/GZhhNpgrvwg/0.jpg"><param name="movie" value="https://www.youtube.com/v/GZhhNpgrvwg?version=3&f=user_uploads&c=google-webdrive-0&app=youtube_gdata" /><param name="bgcolor" value="#FFFFFF" /><param name="allowFullScreen" value="true" /><embed width="320" height="266" src="https://www.youtube.com/v/GZhhNpgrvwg?version=3&f=user_uploads&c=google-webdrive-0&app=youtube_gdata" type="application/x-shockwave-flash" allowfullscreen="true"></embed></object></div><br />Full source code for the modified NFC service is available on <a href="https://github.com/nelenkov/android_packages_apps_Nfc/tree/otp-unlock">Github</a>&nbsp;(in the 'otp-unlock' branch). Note that while this demo implementation handles basic error cases like the OATH applet not being found or the connection with the tag being lost, it is not particularly robust. It only tries to connect to the remote services once, and if either of them is unavailable, NFC unlock is disabled altogether. It doesn't provide any visual indication that NFC unlock is happening either; the keyguard simply disappears as seen in the video above. Another missing piece is multi-user support: in order to support multiple users, the code needs to look for the current user's account on the NFC device, and not for a hardcoded name. 
Finally, the NFC unlock as currently implemented is not a full unlock method: it cannot be selected in the Screen security settings, but simply supplements the currently selected unlock method.<br /><h2>Summary</h2></div><div>As of Android 4.4, the Android keyguard can be queried by third party applications and dismissed by apps that hold the <code>CONTROL_KEYGUARD</code> permission. This makes it easy to implement alternative unlock mechanisms, such as NFC unlock. However, NFC tag polling is disabled by default when the screen is locked, so adding an NFC unlock mechanism requires modifying the system NFC service. For added security, NFC unlock methods should rely not only on the UID of the scanned tag, but on some secret information that is securely stored inside the tag. This could be a private key for use in some sort of signature-based authentication scheme, or an OTP seed. An easy way to implement OTP-based NFC unlock is to use the Yubico OATH applet, pre-installed on the YubiKey Neo, along with a modified Google Authenticator app that offers a remote interface to read the current OTP.&nbsp;</div>Nikolay Elenkovhttps://plus.google.com/117221066931981967754noreply@blogger.com0tag:blogger.com,1999:blog-2873091912851440312.post-75340990050508711652013-10-29T00:31:00.000+09:002014-03-14T23:33:55.878+09:00Signing email with an NFC smart card on AndroidLast time we <a href="http://nelenkov.blogspot.com/2013/09/using-sim-card-as-secure-element.html">discussed</a> how to access the SIM card and use it as a secure element to enhance Android applications. One of the main problems with this approach is that since SIM cards are controlled by the MNO any applets running on a commercial SIM have to be approved by them. Needless to say, that considerably limits flexibility. Fortunately, NFC-enabled Android devices can communicate with practically any external contactless smart card, and you can install anything on those. 
Let's explore how an NFC smart card can be used to sign email on Android.<br /><h2>NFC smart cards</h2><div>As discussed in <a href="http://nelenkov.blogspot.com/2012/08/accessing-embedded-secure-element-in.html">previous</a> <a href="http://nelenkov.blogspot.com/2012/08/android-secure-element-execution.html">posts</a>, a smart card is a secure execution environment on a single chip, typically packaged in a credit-card sized plastic package or the smaller&nbsp;2FF/3FF/4FF form factors when used as a SIM card. Traditionally, smart cards connect with a card reader using a number of gold-plated contact pads. The pads are used to both provide power to the card and establish serial communication with its I/O interface. Size, electrical characteristics and communication protocols are defined in the <a href="http://en.wikipedia.org/wiki/ISO/IEC_7816">7816</a> series of ISO standards. Those traditional cards are referred to as '<i>contact smart cards</i>'. <i>Contactless cards</i>&nbsp;on the other hand do not need to have physical contact with the reader. They draw power and communicate with the reader using RF induction. The communication protocol (T=CL) they use is defined in <a href="http://en.wikipedia.org/wiki/ISO/IEC_14443">ISO 14443</a>&nbsp;and is very similar to the T1 protocol used by contact cards. While smart cards that have only a contactless interface do exist, <i>dual-interface</i>&nbsp;cards that have both contacts and an antenna for RF communication are the majority. The underlying RF standard used varies by manufacturer, and both Type A and Type B are common.&nbsp;</div><div><br /></div><div style="text-align: justify;">As we&nbsp;<a href="http://nelenkov.blogspot.jp/2012/08/accessing-embedded-secure-element-in.html">know</a>, NFC has three standard modes of operation: reader/writer (R/W), peer-to-peer (P2P) and card emulation (CE) mode. 
All NFC-enabled Android devices support R/W and P2P mode, and some can provide CE, either using a physical secure element (SE) or <a href="http://nelenkov.blogspot.com/2012/10/emulating-pki-smart-card-with-cm91.html">software emulation</a>. All that is needed to communicate with a contactless smart card is the basic R/W mode, so they can be used on practically all Android devices with NFC support. This functionality is provided by the <a href="http://developer.android.com/reference/android/nfc/tech/IsoDep.html" style="font-size: 14px; line-height: 19px;"><code>IsoDep</code></a> class. It provides only basic command-response exchange functionality with the <code style="font-size: 14px; line-height: 19px;">transceive()</code> method; any higher level protocol needs to be implemented by the client application.</div><h2>Securing email</h2><div>There have been quite a few new services trying to reinvent secure email in recent years. They try to make it 'easy' for users by taking care of key management and shifting all cryptographic operations to the server. As recent events have reconfirmed, introducing an intermediary is not a very good idea if communication between two parties is to be and remain secure. Secure email itself is hardly a new idea, and the 'old-school' way of implementing it relies on public key cryptography. Each party is responsible both for protecting their private key and for verifying that the public key of their counterpart matches their actual identity. The method used to verify identity is the biggest difference between the two major secure email standards in use today, <a href="http://en.wikipedia.org/wiki/Pretty_Good_Privacy">PGP</a> and <a href="http://en.wikipedia.org/wiki/S/MIME">S/MIME</a>. PGP relies on the so called 'web of trust', where everyone can vouch for the identity of someone by signing their key (usually after meeting them in person), and keys with more signatures can be considered trustworthy. 
S/MIME, on the other hand, relies on PKI and X.509 certificates, where the issuing authority (CA) is relied upon to verify identity when issuing a certificate. PGP has the advantage of being decentralized, which makes it harder to break the system by compromising a single entity, as has happened with a number of public CAs in recent years. However, it requires much more user involvement and is especially challenging for new users. Additionally, while many commercial and open source PGP implementations do exist, most mainstream email clients do not support PGP out of the box and require the installation of plugins and additional software. On the other hand, all major proprietary (Outlook variants, Mail.app, etc.) and open source (Thunderbird) email clients have built-in and mature S/MIME implementations. We will use S/MIME for this example because it is a lot easier to get started with and test, but the techniques described can be used to implement PGP-secured email as well. Let's first discuss how S/MIME is implemented.<br /><h2>Signing with S/MIME</h2>The S/MIME, or <i>Secure/Multipurpose Internet Mail Extensions</i>, <a href="http://tools.ietf.org/html/rfc5751">standard</a>&nbsp;defines how to include signed and/or encrypted content in email messages. It specifies both the procedures for creating signed or encrypted (enveloped) content and the MIME media types&nbsp;to use when adding them to the message. For example, a signed message would have a part with the <code>Content-Type: application/pkcs7-signature; name=smime.p7s; smime-type=signed-data</code> which contains the message signature and any associated attributes. To an email client that does not support S/MIME, like most Web mail apps, this would look like an attachment called <code>smime.p7s</code>. 
S/MIME-compliant clients would instead parse and verify the signature and display some visual indication showing the signature verification status.<br /><br />The more interesting question however is what's in <code>smime.p7s</code>? The 'p7' stands for <a href="http://tools.ietf.org/html/rfc2315">PKCS#7</a>, which is the predecessor of the current <i>Cryptographic Message Syntax</i> (<a href="http://tools.ietf.org/html/rfc5652">CMS</a>). CMS defines structures used to package signed, authenticated or encrypted content and related attributes. As with most PKI X.509-derived standards, those structures are ASN.1 based and encoded into binary using <a href="http://en.wikipedia.org/wiki/Distinguished_encoding_rules#DER_encoding">DER</a>, just like certificates and CRLs. They are sequences of other structures, which are in turn composed of yet other ASN.1 structures, which are..., basically sequences all the way down. Let's try to look at the higher-level ones used for signed email. The CMS structure describing signed content is predictably called <code>SignedData</code> and looks like this:<br /><br /><pre>SignedData ::= SEQUENCE {<br /> version CMSVersion,<br /> digestAlgorithms DigestAlgorithmIdentifiers,<br /> encapContentInfo EncapsulatedContentInfo,<br /> certificates [0] IMPLICIT CertificateSet OPTIONAL,<br /> crls [1] IMPLICIT RevocationInfoChoices OPTIONAL,<br /> signerInfos SignerInfos }<br /></pre><br />Here <code>digestAlgorithms</code> contains the OIDs of the hash algorithms used to produce the signature (one for each signer) and <code>encapContentInfo</code> describes the data that was signed, and can optionally contain the actual data. The optional <code>certificates</code> and <code>crls</code> fields are intended to help verify the signer certificate. If absent, the verifier is responsible for collecting them by other means. The most interesting part, <code>signerInfos</code>, contains the actual signature and information about the signer. 
It looks like this:<br /><br /><pre>SignerInfo ::= SEQUENCE {<br /> version CMSVersion,<br /> sid SignerIdentifier,<br /> digestAlgorithm DigestAlgorithmIdentifier,<br /> signedAttrs [0] IMPLICIT SignedAttributes OPTIONAL,<br /> signatureAlgorithm SignatureAlgorithmIdentifier,<br /> signature SignatureValue,<br /> unsignedAttrs [1] IMPLICIT UnsignedAttributes OPTIONAL }<br /></pre><br />Besides the signature value and the algorithms used, each <code>SignerInfo</code> contains a signer identifier (<code>sid</code>) used to find the exact certificate that was used, and a number of optional signed and unsigned attributes. Signed attributes are included when producing the signature value and can contain additional information about the signature, such as the signing time. Unsigned attributes are not covered by the signature value, but can contain signed data themselves, such as a counter signature (an additional signature over the signature value).<br /><br />To sum this up, in order to produce an S/MIME signed message, we need to sign the email contents and any attributes, generate the <code>SignerInfo</code> structure, wrap it into a <code>SignedData</code>, DER-encode the result and add it to the message using the appropriate MIME type. Sounds easy, right? Let's see how this can be done on Android. <br /><h2>Using S/MIME on Android</h2>On any platform, you need two things in order to generate an S/MIME message: a cryptographic provider that can perform the actual signing using an asymmetric key, and an ASN.1 parser/generator in order to generate the <code>SignedData</code> structure. Android has JCE providers that support RSA, recently even with <a href="http://nelenkov.blogspot.com/2013/08/credential-storage-enhancements-android-43.html">hardware-backed</a> keys. What's left is an ASN.1 generator. While ASN.1 and DER/BER have been around for ages, and there are quite a few parsers/generators, the practically useful choices are not that many. 
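To get a feel for how these nested structures end up as bytes, here is a minimal sketch of DER's tag-length-value encoding in plain Java. The tag values (<code>0x30</code> for SEQUENCE, <code>0x02</code> for INTEGER) come from X.690; this is purely illustrative, and a real application would of course use a library:

```java
import java.util.Arrays;

public class DerDemo {
    // Encode a single DER TLV: tag byte, definite-length length field, value.
    static byte[] tlv(int tag, byte[] value) {
        byte[] len;
        if (value.length < 0x80) {
            // short form: one length byte
            len = new byte[] { (byte) value.length };
        } else if (value.length < 0x100) {
            // long form, one length octet
            len = new byte[] { (byte) 0x81, (byte) value.length };
        } else {
            // long form, two length octets (enough for this demo)
            len = new byte[] { (byte) 0x82,
                    (byte) (value.length >> 8), (byte) value.length };
        }
        byte[] out = new byte[1 + len.length + value.length];
        out[0] = (byte) tag;
        System.arraycopy(len, 0, out, 1, len.length);
        System.arraycopy(value, 0, out, 1 + len.length, value.length);
        return out;
    }

    public static void main(String[] args) {
        // INTEGER 1 (think CMSVersion) wrapped in a SEQUENCE:
        byte[] version = tlv(0x02, new byte[] { 0x01 });   // 02 01 01
        byte[] seq = tlv(0x30, version);                   // 30 03 02 01 01
        System.out.println(Arrays.toString(seq));          // prints [48, 3, 2, 1, 1]
    }
}
```

Nesting `tlv()` calls like this mirrors how `SignedData` wraps `SignerInfos`, which wrap attributes, and so on.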
No one really generates code directly from the ASN.1 modules found in related standards; most libraries implement only the necessary parts, building on available components. Both of Android's major cryptographic libraries, OpenSSL and Bouncy Castle, contain ASN.1 parsers/generators and have support for CMS. The related APIs are not public though, so we need to include our own libraries.<br /><br />As usual, we turn to <a href="http://nelenkov.blogspot.jp/2013/08/credential-storage-enhancements-android-43.html">Spongy Castle</a>, which provides all of Bouncy Castle's functionality under a different namespace. In order to be able to process CMS and generate S/MIME messages, we need the optional <code>scpkix</code> and <code>scmail</code> packages. The first one contains PKIX and CMS related classes, and the second one implements S/MIME. However, there is a <a href="https://github.com/rtyley/spongycastle/issues/7">twist</a>: Android lacks some of the classes required for generating S/MIME messages. As you may know, Android has implementations for most standard Java APIs, with a few exceptions, most notably the GUI widget related AWT and Swing packages. Those are rarely missed, because Android has its own widget and graphics libraries. However, besides widgets, AWT also contains classes related to MIME media types. Unfortunately, some of those are used in libraries that deal with MIME objects, such as <a href="http://www.oracle.com/technetwork/java/javamail/index.html">JavaMail</a> and the Bouncy Castle S/MIME implementation. JavaMail versions that include alternative AWT implementations, repackaged for Android, have been <a href="https://code.google.com/p/javamail-android">available</a> for some time, but since they use some non-standard package names, they are not a drop-in replacement. 
That applies to Spongy Castle as well: some source code modifications are <a href="http://stackoverflow.com/questions/13357855/how-to-fix-error-of-spongy-castle-on-android-could-not-find-class-java-awt-data">required</a> in order to get <code>scmail</code> to work with the <code>javamail-android</code> library.<br /><br />With that sorted out, generating an S/MIME message on Android is just a matter of finding the signer key and certificate and using the proper Bouncy Castle and JavaMail APIs to generate and send the message:<br /><br /><pre>PrivateKey signerKey = KeyChain.getPrivateKey(ctx, "smime");<br />X509Certificate[] chain = KeyChain.getCertificateChain(ctx, "smime");<br />X509Certificate signerCert = chain[0];<br />X509Certificate caCert = chain[1];<br /><br />SMIMESignedGenerator gen = new SMIMESignedGenerator();<br />gen.addSignerInfoGenerator(new JcaSimpleSignerInfoGeneratorBuilder()<br /> .setProvider("AndroidOpenSSL")<br /> .setSignedAttributeGenerator(<br /> new AttributeTable(signedAttrs))<br /> .build("SHA512withRSA", signerKey, signerCert));<br />Store certs = new JcaCertStore(Arrays.asList(signerCert, caCert));<br />gen.addCertificates(certs);<br /><br />MimeMultipart mm = gen.generate(mimeMsg, "SC");<br />MimeMessage signedMessage = new MimeMessage(session);<br />Enumeration headers = mimeMsg.getAllHeaderLines();<br />while (headers.hasMoreElements()) {<br /> signedMessage.addHeaderLine((String) headers.nextElement());<br />}<br />signedMessage.setContent(mm);<br />signedMessage.saveChanges();<br /><br />Transport.send(signedMessage);<br /></pre><br />Here we first get the signer key and certificate using the <code>KeyChain</code> API and then create an S/MIME generator by specifying the key, certificate, signature algorithm and signed attributes. Note that we specify the <code>AndroidOpenSSL</code> provider explicitly, as it is the only one that can use hardware-backed keys. 
This is only required if you changed the default provider order when installing Spongy Castle; by default, <code>AndroidOpenSSL</code> is the preferred JCE provider. We then add the certificates we want to include in the generated <code>SignedData</code> and generate a multi-part MIME message that includes both the original message (<code>mimeMsg</code>) and the signature. Finally, we send the message using the JavaMail <code>Transport</code> class. The JavaMail Session initialization is omitted from the example above; see the <a href="https://github.com/nelenkov/nfc-smime">sample app</a> for how to set it up to use Gmail's SMTP server. This requires the Gmail account password to be specified, but with a little more work it can be replaced with an <a href="https://developers.google.com/gmail/oauth_overview">OAuth</a> token you can obtain from the system <code>AccountManager</code>.<br /><br />So what about smart cards?<br /><h2>Using a MuscleCard to sign email</h2></div><div>In order to sign email using keys stored on a smart card, we need a few things:&nbsp;</div><div><ul><li>a dual-interface smart card that supports RSA keys</li><li>a crypto applet that allows us to sign data with those keys</li><li>some sort of middleware that exposes card functionality through a standard crypto API</li></ul>Most recent dual-interface JavaCards fulfill our requirements, but we will be using an NXP J3A081, which supports JavaCard 2.2.2 and 2048-bit RSA keys. When it comes to open source crypto applets though, unfortunately the choices are quite limited. Just about the only one that is both full-featured and well supported in middleware libraries is the venerable <a href="http://www.linuxnet.com/musclecard/index.html">MuscleCard</a> applet. 
We will be using one of the fairly <a href="http://github.com/martinpaljak/MuscleApplet">recent forks</a>, updated to support JavaCard 2.2 and extended APDUs. To <a href="https://www.opensc-project.org/opensc/wiki/JavaCard">load</a> the applet on the card, you need a GlobalPlatform-compatible loader application, like <a href="http://sourceforge.net/projects/gpj/">GPJ</a>, and of course the CardManager keys. Once you have <a href="https://www.opensc-project.org/opensc/wiki/MuscleApplet#AppletinitializationAPIversion1.3">initialized</a> it, you can <a href="https://github.com/OpenSC/OpenSC/wiki/Card-personalization">personalize</a> it by generating or importing keys and certificates. After that, the card can be used in any application that supports PKCS#11, for example Thunderbird and Firefox. Because the card is dual-interface, practically any smart card reader can be used on desktops. When the OpenSC PKCS#11 module is loaded in Thunderbird, the card will show up in the Security Devices dialog like this:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-2K7Mh-VWT0U/UmqKEOxUO3I/AAAAAAAAPvo/LC4LuWL_BTM/s1600/muscle-tb.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-2K7Mh-VWT0U/UmqKEOxUO3I/AAAAAAAAPvo/LC4LuWL_BTM/s400/muscle-tb.png" height="425" width="640" /></a></div><br />If the certificate installed in the card has your email in the <code>Subject Alternative Name</code> extension, you should be able to send signed and encrypted emails (if you have the recipient's certificate, of course). But how do we achieve the same thing on Android?<br /><h2>Using MuscleCard on Android</h2>Android doesn't support PKCS#11 modules, so in order to expose the card's crypto functionality we could implement a custom JCE provider that provides card-backed implementations of the <code>Signature</code> and <code>KeyStore</code> engine classes. 
That is quite a bit of work though, and since we are only targeting the Bouncy Castle S/MIME API, we can get away with implementing the <code>ContentSigner</code> interface. It provides an <code>OutputStream</code> to which clients write the data to be signed, an <code>AlgorithmIdentifier</code> for the signature method used, and a <code>getSignature()</code> method that returns the actual signature value. Our MuscleCard-backed implementation could look like this:<br /><br /><pre>class MuscleCardContentSigner implements ContentSigner {<br /><br /> private ByteArrayOutputStream baos = new ByteArrayOutputStream();<br /> private MuscleCard msc;<br /> private String pin;<br />...<br /> @Override<br /> public byte[] getSignature() {<br /> msc.select();<br /> msc.verifyPin(pin);<br /><br /> byte[] data = baos.toByteArray();<br /> baos.reset();<br /> return msc.sign(data);<br /> }<br />}<br /></pre><br />Here the <code>MuscleCard</code> class is our 'middleware' and encapsulates the card's RSA signature functionality. It is implemented by sending the required command APDUs for each operation using Android's <code>IsoDep</code> API and aggregating and converting the result as needed. For example, <code>verifyPin()</code> is implemented like this: <br /><br /><pre>class MuscleCard {<br /><br /> private IsoDep tag;<br /><br /> public boolean verifyPin(String pin) throws IOException {<br /> String cmd = String.format("B0 42 01 00 %02x %s", pin.length(),<br /> toHex(pin.getBytes("ASCII")));<br /> ResponseApdu rapdu = new ResponseApdu(tag.transceive(fromHex(cmd)));<br /> return rapdu.getSW() == SW_SUCCESS;<br /> }<br />}<br /></pre><br />Signing is a little more complicated because it involves creating and updating temporary I/O objects, but follows the same principle. Since the applet does not support padding or hashing, we need to generate and pad the PKCS#1 (or PSS) signature block on Android and send the complete data to the card. 
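The PKCS#1 block generation just mentioned can be sketched in plain Java. EMSA-PKCS1-v1_5 encoding (RFC 3447) prepends the standard <code>DigestInfo</code> prefix for the hash algorithm (SHA-256 here, for illustration) and pads the result to the RSA modulus size; the card then only has to apply the raw RSA private-key operation. This is an illustrative sketch, not the sample app's actual code:

```java
import java.security.MessageDigest;

public class Pkcs1Padder {
    // DER-encoded DigestInfo prefix for SHA-256 (RFC 3447, section 9.2)
    static final byte[] SHA256_PREFIX = {
        0x30, 0x31, 0x30, 0x0d, 0x06, 0x09, 0x60, (byte) 0x86,
        0x48, 0x01, 0x65, 0x03, 0x04, 0x02, 0x01, 0x05, 0x00, 0x04, 0x20
    };

    // Build the padded block 00 01 FF..FF 00 || DigestInfo,
    // sized to the RSA modulus (keyBytes = 256 for a 2048-bit key).
    static byte[] emsaPkcs1v15(byte[] message, int keyBytes) throws Exception {
        byte[] hash = MessageDigest.getInstance("SHA-256").digest(message);
        byte[] t = new byte[SHA256_PREFIX.length + hash.length];
        System.arraycopy(SHA256_PREFIX, 0, t, 0, SHA256_PREFIX.length);
        System.arraycopy(hash, 0, t, SHA256_PREFIX.length, hash.length);

        byte[] block = new byte[keyBytes];
        block[0] = 0x00;
        block[1] = 0x01;                        // block type 1 (private-key op)
        for (int i = 2; i < keyBytes - t.length - 1; i++) {
            block[i] = (byte) 0xff;             // padding string PS
        }
        block[keyBytes - t.length - 1] = 0x00;  // separator
        System.arraycopy(t, 0, block, keyBytes - t.length, t.length);
        return block;
    }

    public static void main(String[] args) throws Exception {
        byte[] block = emsaPkcs1v15("hello".getBytes("ASCII"), 256);
        System.out.printf("%d bytes, starts %02x %02x%n",
                block.length, block[0], block[1]);
    }
}
```

The resulting 256-byte block is what would be sent to the card for the raw RSA operation.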
Finally, we need to plug our signer implementation into the Bouncy Castle CMS generator:<br /><br /><pre>ContentSigner mscCs = new MuscleCardContentSigner(muscleCard, pin);<br />gen.addSignerInfoGenerator(new JcaSignerInfoGeneratorBuilder(<br /> new JcaDigestCalculatorProviderBuilder()<br /> .setProvider("SC")<br /> .build()).build(mscCs, cardCert));<br /></pre><br />After that the signed message can be generated exactly like when using local key store keys. Of course, there are a few caveats. Since apps cannot control when an NFC connection is established, we can only sign data after the card has been picked up by the device and we have received an <code>Intent</code> with a live <code>IsoDep</code> instance. Additionally, since signing can take a few seconds, we need to make sure the connection is not broken by placing the device on top of the card (or use some sort of awkward case with a card slot). Our implementation also takes a few shortcuts by hard-coding the certificate object ID and size, as well as the card PIN, but those can be remedied with a little more code. The UI of our homebrew S/MIME client is shown below.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-WnYuR5nshOU/UmqWho-sTiI/AAAAAAAAPv4/F9TopJT9SPg/s1600/nfc-smime-app.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-WnYuR5nshOU/UmqWho-sTiI/AAAAAAAAPv4/F9TopJT9SPg/s400/nfc-smime-app.png" height="640" width="384" /></a></div><br />After you import a PKCS#12 file in the system credential store you can sign emails using the imported keys. The 'Sign with NFC' button is only enabled when a compatible card has been detected. The easiest way to verify the email signature is to send a message to a desktop client that supports S/MIME. 
There are also a few Android email apps that support S/MIME, but setup can be a bit challenging because they often use their own trust and key stores. You can also dump the generated message to external storage using <code>MimeMessage.writeTo()</code> and then parse the CMS structure using the OpenSSL <code>cms</code> command:<br /><br /><pre>$ openssl cms -cmsout -in signed.message -noout -print<br />CMS_ContentInfo: <br /> contentType: pkcs7-signedData (1.2.840.113549.1.7.2)<br /> d.signedData: <br /> version: 1<br /> digestAlgorithms:<br /> algorithm: sha512 (2.16.840.1.101.3.4.2.3)<br /> parameter: NULL<br /> encapContentInfo: <br /> eContentType: pkcs7-data (1.2.840.113549.1.7.1)<br /> eContent: &lt;absent&gt;<br /> certificates:<br /> d.certificate: <br /> cert_info: <br /> version: 2<br /> serialNumber: 4<br /> signature: <br /> algorithm: sha1WithRSAEncryption (1.2.840.113549.1.1.5)<br /> ...<br /> crls:<br /> &lt;empty&gt;<br /> signerInfos:<br /> version: 1<br /> d.issuerAndSerialNumber: <br /> issuer: C=JP, ST=Tokyo, CN=keystore-test-CA<br /> serialNumber: 3<br /> digestAlgorithm: <br /> algorithm: sha512 (2.16.840.1.101.3.4.2.3)<br /> parameter: NULL<br /> signedAttrs:<br /> object: contentType (1.2.840.113549.1.9.3)<br /> value.set:<br /> OBJECT:pkcs7-data (1.2.840.113549.1.7.1)<br /><br /> object: signingTime (1.2.840.113549.1.9.5)<br /> value.set:<br /> UTCTIME:Oct 25 16:25:29 2013 GMT<br /><br /> object: messageDigest (1.2.840.113549.1.9.4)<br /> value.set:<br /> OCTET STRING:<br /> 0000 - 88 bd 87 84 15 53 3d d8-72 64 c7 36 f8 .....S=.rd.6.<br /> 000d - b0 f3 39 90 b2 a4 77 56-5c 9f e4 2e 7c ..9...wV\...|<br /> 001a - 7d 2e 0b 08 b4 b7 e7 6c-e9 b6 61 00 13 }......l..a..<br /> 0027 - 25 62 69 2a bc 08 5b 4c-4f c9 73 cf d3 %bi*..[LO.s..<br /> 0034 - c6 1e 51 c2 5f c1 64 77-3b 45 e2 cb ..Q._.dw;E..<br /> signatureAlgorithm: <br /> algorithm: rsaEncryption (1.2.840.113549.1.1.1)<br /> parameter: NULL<br /> signature: <br /> 0000 - a0 d0 ce 35 46 8c f9 
cd-e5 db ed d8 e3 f0 08 ...5F..........<br /> ...<br /> unsignedAttrs:<br /> &lt;empty&gt;<br /></pre><br />Email encryption using the NFC smart card can be implemented in a similar fashion, but this time the card will be required when decrypting the message. <br /><h2>Summary</h2></div><div>Practically all NFC-enabled Android devices can be used to communicate with a contactless or dual-interface smart card. If the interface of card applications is known, it is fairly easy to implement an Android component that exposes card functionality via a custom interface, or even as a standard JCE provider. The card's cryptographic functionality can then be used to secure email or provide HTTPS and VPN authentication. This could be especially useful when dealing with keys that have been generated on the card and cannot be extracted. If a PKCS#12 backup file is available, importing the file into the system credential store can provide a better user experience and comparable security levels if the device has a hardware-backed credential store.&nbsp;</div><br /><br /><b>Using the SIM card as a secure element in Android</b> (Nikolay Elenkov, 2013-09-28)<br /><br />Our <a href="http://nelenkov.blogspot.jp/2013/08/credential-storage-enhancements-android-43.html">last post</a> introduced one of Android 4.3's more notable security features -- improved credential storage, and while there are a few other <a href="https://source.android.com/devices/tech/security/enhancements43.html">enhancements</a> worth discussing, this post will slightly change direction. 
As mentioned <a href="http://nelenkov.blogspot.com/2012/08/accessing-embedded-secure-element-in.html">previously</a>, mobile devices can include some form of a Secure Element (SE), but a smart card based <a href="http://en.wikipedia.org/wiki/UICC">UICC</a> (usually called just 'SIM card') is almost universally present. Virtually all SIM cards in use today are programmable and thus can be used as an SE. Continuing the topic of hardware-backed security, we will now look into how SIMs can be programmed and used to enhance the security of Android applications.<br /><h2>SIM cards</h2><div>First, a few words about terminology: while the correct term for modern mobile devices is UICC (Universal Integrated Circuit Card), since the goal of this post is not to discuss the differences between mobile networks, we will usually call it a 'SIM card' and only make the distinction when necessary.&nbsp;</div><div><br /></div><div>So what is a <a href="http://en.wikipedia.org/wiki/Subscriber_identity_module">SIM</a> card? 'SIM' stands for Subscriber Identity Module and refers to a smart card that securely stores the subscriber identifier and the associated key used to identify and authenticate to a mobile network. It was originally used on GSM networks, and the standards were later extended to support 3G and LTE. Since SIMs are smart cards, they conform to <a href="http://en.wikipedia.org/wiki/ISO/IEC_7816">ISO-7816</a> standards regarding physical characteristics and electrical interface. Originally they were the same size as 'regular' smart cards (Full-size, FF), but by far the most popular sizes nowadays are Mini-SIM (2FF) and Micro-SIM (3FF), with Nano-SIM (4FF) introduced in 2012.&nbsp;</div><div><br /></div><div>Of course, not every smart card that fits in the SIM slot can be used in a mobile device, so the next question is: what makes a smart card a SIM card? 
Technically, it's conformance to mobile communication standards such as <a href="http://www.3gpp.org/ftp/Specs/html-info/1111.htm">3GPP TS 11.11</a> and certification by the <a href="http://www.simalliance.org/">SIMalliance</a>. In practice, it is the ability to run an application that allows it to communicate with the phone (referred to as 'Mobile Equipment', ME, or 'Mobile Station', MS in related standards) and connect to a mobile network. While the original GSM standard did not make a distinction between the physical smart card and the software required to connect to the mobile network, with the introduction of 3G standards, a clear distinction has been made. The physical smart card is referred to as Universal Integrated Circuit Card (UICC) and different mobile network applications that run on it have been defined: GSM, CSIM, USIM, ISIM, etc. A UICC can host and run more than one network application (hence 'universal'), and thus can be used to connect to different networks. While network application functionality depends on the specific mobile network, their core features are quite similar: store network parameters securely and identify the subscriber to the network, as well as authenticate the user (optionally) and store user data.&nbsp;</div><h2>SIM card applications</h2><div>Let's take GSM/3G as an example and briefly review how a network application works. For GSM the main network parameters are the network identity (International Mobile Subscriber Identity, IMSI; tied to the SIM), phone number (MSISDN; used for routing calls, and changeable) and a shared network authentication key <code>Ki</code>. To connect to the network, the MS needs to authenticate itself and negotiate a session key. Both authentication and session key derivation make use of <code>Ki</code>, which is also known to the network and looked up by IMSI. 
The MS sends a connection request and includes its IMSI, which the network uses to find the corresponding <code>Ki</code>. The network then uses the <code>Ki</code> to generate a challenge (<code>RAND</code>), the expected challenge response (<code>SRES</code>) and a session key <code>Kc</code>, and sends <code>RAND</code> to the MS. Here's where the GSM application running on the SIM card comes into play: the MS passes the <code>RAND</code> to the SIM card, which in turn generates its own <code>SRES</code> and <code>Kc</code>. The <code>SRES</code> is sent to the network and, if it matches the expected value, encrypted communication is established using the session key <code>Kc</code>. As you can see, the security of this protocol hinges solely on the secrecy of the <code>Ki</code>. Since all operations involving the <code>Ki</code> are implemented inside the SIM and it never comes into direct contact with either the MS or the network, the scheme is kept reasonably secure. Of course, security depends on the encryption algorithms used as well, and major weaknesses that allow intercepted GSM calls to be <a href="https://srlabs.de/decrypting_gsm/">decrypted</a> using off-the-shelf hardware were found in the original versions of the A3/A5 algorithms (which were initially secret). Jumping back to Android for a moment, all of this is implemented by the baseband software (more on this later) and network authentication is never directly visible to the main OS.<br /><br /></div><div>We've shown that SIM cards need to run applications; let's now say a few words about how those applications are implemented and installed. Initial smart cards were based on a file system model, where files (elementary files, EF) and directories (dedicated files, DF) were named with a two-byte identifier. Thus developing 'an application' consisted mostly of selecting an ID for the DF that hosts its files (called ADF), and specifying the formats and names of the EFs that store data. 
For example, the GSM application is under the <code>'7F20'</code> ADF, and the USIM ADF hosts files such as <code>EF_imsi</code>, <code>EF_keys</code> and <code>EF_sms</code>. Practically all SIMs used today are based on <a href="http://www.oracle.com/technetwork/java/javame/javacard/overview/getstarted/index.html">Java Card</a> technology and implement GlobalPlatform <a href="http://www.globalplatform.org/specificationscard.asp">card specifications</a>. Thus all network applications are implemented as Java Card applets and emulate the legacy file-based structure for backward compatibility. Applets are installed according to GlobalPlatform specifications by authenticating to the Issuer Security Domain (Card Manager) and issuing <code>LOAD</code> and <code>INSTALL</code> commands.</div><br /><div>One application management feature specific to SIM cards is support for OTA (Over-The-Air) updates via binary SMS. This functionality is not used by all carriers, but it allows them to remotely install applets on SIM cards they have issued. OTA is implemented by wrapping card commands (APDUs) in SMS T-PDUs, which the ME forwards to the SIM (<a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;ved=0CCwQFjAA&amp;url=http%3A%2F%2Fwww.etsi.org%2Fdeliver%2Fetsi_ts%2F102200_102299%2F102226%2F09.02.00_60%2Fts_102226v090200p.pdf&amp;ei=G-NDUuflJMPukQWH-4BA&amp;usg=AFQjCNHOiHlvL5aHOlLidgq2il_yPyjQ-Q&amp;sig2=J1negcwe7FKavPZ7Y_cRVA&amp;bvm=bv.53217764,d.dGI">ETSI TS 102 226</a>). In most SIMs this is actually the only way to load applets on the card, even during initial personalization. That is why most of the common GlobalPlatform-compliant tools cannot be used as is for managing SIMs. 
One needs to either use a tool that supports SIM OTA, such as the <a href="http://www.simalliance.org/en/about/workgroups/interop_working_group/resources/simalliance-cat-loader-v20_gbtf7kyv.html">SIMalliance Loader</a>, or <a href="https://github.com/Shadytel/sim-tools">implement</a> APDU wrapping/unwrapping, including any necessary encryption and integrity algorithms (<a href="https://www.google.com/url?sa=t&amp;rct=j&amp;q=&amp;esrc=s&amp;source=web&amp;cd=1&amp;cad=rja&amp;ved=0CCoQFjAA&amp;url=http%3A%2F%2Fwww.etsi.org%2Fdeliver%2Fetsi_ts%2F102200_102299%2F102225%2F11.00.00_60%2Fts_102225v110000p.pdf&amp;ei=teVDUvPiC4bVkAX-gYH4DA&amp;usg=AFQjCNEkGJX_vWZ7oYY6AOHPNpbGNC6OaQ&amp;sig2=lG6GjVP-QuiDgcgMiDLg_A&amp;bvm=bv.53217764,d.dGI">ETSI TS 102 225</a>). Incidentally, problems with the implementation of those secured packets on some SIMs that use DES as the encryption and integrity algorithm have been used to <a href="https://srlabs.de/rooting-sim-cards/">crack</a> OTA update keys. The major use of the OTA functionality is to install and maintain <a href="http://en.wikipedia.org/wiki/SIM_Application_Toolkit">SIM Toolkit</a> (STK) applications, which can interact with the handset via standard 'proactive' (in reality implemented via polling) commands and display menus or even open Web pages and send SMS. While STK applications are almost unheard of in the US and Asia, they are still heavily used in some parts of Europe and Africa for anything from mobile banking to citizen authentication. Android also supports STK with a dedicated STK system app, which is automatically disabled if the SIM card has no STK applets installed.</div><h2>Accessing the SIM card</h2><div>As mentioned above, network-related functionality is implemented by the baseband software and what can be done from Android is entirely dependent on what features the baseband exposes. 
Android <a href="http://www.kandroid.org/online-pdk/guide/stk.html">supports STK</a> applications, so it does have internal support for communicating with the SIM, but the OS <a href="http://source.android.com/devices/tech/security/index.html">security overview</a> explicitly states that '<i>low level access to the SIM card is not available to third-party apps</i>'. So how can we use it as an SE then? Some Android builds from major vendors, most notably Samsung, provide an implementation of the <a href="http://www.simalliance.org/en/about/workgroups/open_mobile_api_working_group/">SIMalliance Open Mobile API</a> on some handsets, and an open source implementation (for compatible devices) is available from the <a href="https://code.google.com/p/seek-for-android/">SEEK for Android</a> project. The Open Mobile API aims to provide a unified interface for accessing SEs on Android, including the SIM. To understand how the Open Mobile API works and the cause of its limitations, let's first review how access to the SIM card is implemented in Android. <br /><br />On Android devices all mobile network functionality (dialing, sending SMS, etc.) is provided by the baseband processor (also referred to as 'modem' or 'radio'). Android applications and system services communicate with the baseband only indirectly via the <a href="http://www.kandroid.org/online-pdk/guide/telephony.html">Radio Interface Layer</a> (RIL) daemon (<code>rild</code>). It in turn talks to the actual hardware by using a manufacturer-provided RIL HAL library, which wraps the proprietary interface the baseband provides. The SIM card is typically connected only to the baseband processor (sometimes also to the NFC controller via <a href="http://en.wikipedia.org/wiki/Single_Wire_Protocol">SWP</a>), and thus all communication needs to go through the RIL. 
While the proprietary RIL implementation can always access the SIM in order to perform network identification and authentication, as well as read/write contacts and access STK applications, support for transparent APDU exchange is not always available. The standard way to provide this feature is to use extended AT commands such as <code>AT+CSIM</code> (Generic SIM access) and <code>AT+CGLA</code> (Generic UICC Logical Channel Access), as defined in <a href="http://www.3gpp.org/ftp/Specs/html-info/27007.htm">3GPP TS 27.007</a>, but some vendors implement it using <a href="http://usmile.at/blog/seek-galaxys3s">proprietary extensions</a>, so support for the necessary AT commands does not automatically provide SIM access.</div><br /><div>SEEK for Android provides patches that implement a resource manager service (<code>SmartCardService</code>) that can connect to any supported SE (embedded SE, <a href="https://www.sdcard.org/developers/overview/ASSD/">ASSD</a> or UICC) and extensions to the Android telephony framework that allow for transparent APDU exchange with the SIM. As mentioned above, access through the RIL is hardware and proprietary RIL library dependent, so you need both a compatible device and a build that includes the <code>SmartCardService</code> and related framework extensions. Thanks to some work by the <a href="http://usmile.at/">u'smile</a> project, UICC access on most variants of the popular Galaxy S2 and S3 handsets is <a href="http://usmile.at/blog/cyanogenmod-seek-uicc-s2-s3">available</a> using a patched CyanogenMod build, so you can make use of the latest SEEK version. Even if you don't own one of those devices, you can use the SEEK <a href="https://code.google.com/p/seek-for-android/wiki/EmulatorExtension">emulator extension</a>, which lets you use a standard PC/SC smart card reader to connect a SIM to the Android emulator. 
Note that just any regular Java Card won't work out of the box, because the emulator will look for the GSM application and mark the card as unusable if it doesn't find one. You can modify it to skip those steps, but a simple solution is to install a dummy GSM application that always returns the expected responses.</div><div><br /></div><div>Once you have managed to get a device or the emulator to talk to the SIM, using the Open Mobile API to send commands is quite straightforward:<br /><br /><pre>// connect to the SE service, asynchronous<br />SEService seService = new SEService(this, this);<br />// list readers <br />Reader[] readers = seService.getReaders();<br />// assume the first one is SIM and open session<br />Session session = readers[0].openSession();<br />// open logical (or basic) channel<br />Channel channel = session.openLogicalChannel(aid);<br />// send APDU and get response<br />byte[] rapdu = channel.transmit(cmd);<br /></pre><br />You will need to request the <code>org.simalliance.openmobileapi.SMARTCARD</code> permission and add the <code>org.simalliance.openmobileapi</code> extension library to your manifest for this to work. See the <a href="https://code.google.com/p/seek-for-android/wiki/UsingSmartCardAPI">official wiki</a> for more details. <br /><br /><pre>&lt;manifest ...&gt;<br /><br /> &lt;uses-permission android:name="org.simalliance.openmobileapi.SMARTCARD" /&gt;<br /><br /> &lt;application ...&gt;<br /> &lt;uses-library<br /> android:name="org.simalliance.openmobileapi"<br /> android:required="true" /&gt;<br /> ...<br /> &lt;/application&gt;<br />&lt;/manifest&gt;<br /></pre><h2>SE-enabled Android applications</h2></div>Now that we can connect to the SIM card from applications, what can we use it for? Just like regular smart cards, an SE can be used to store data and keys securely and perform cryptographic operations without keys having to leave the card. 
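For reference, the <code>cmd</code> byte array passed to <code>transmit()</code> above is a raw ISO 7816-4 command APDU. A small helper for assembling short command APDUs might look like the sketch below (illustrative only; the helper and the AID in the example are made up, not part of the Open Mobile API):

```java
public class ApduUtil {
    // Build a short command APDU: CLA INS P1 P2 [Lc data], with a trailing
    // Le byte (0x00 means 'up to 256 bytes expected in the response').
    static byte[] command(int cla, int ins, int p1, int p2,
                          byte[] data, int le) {
        int dataLen = (data == null) ? 0 : data.length;
        byte[] apdu = new byte[4 + (dataLen > 0 ? 1 + dataLen : 0) + 1];
        apdu[0] = (byte) cla;
        apdu[1] = (byte) ins;
        apdu[2] = (byte) p1;
        apdu[3] = (byte) p2;
        int off = 4;
        if (dataLen > 0) {
            apdu[off++] = (byte) dataLen;          // Lc
            System.arraycopy(data, 0, apdu, off, dataLen);
            off += dataLen;
        }
        apdu[off] = (byte) le;                     // Le
        return apdu;
    }

    public static void main(String[] args) {
        // SELECT (INS=A4) by AID (P1=04) for a made-up applet AID
        byte[] aid = { (byte) 0xA0, 0x00, 0x00, 0x00, 0x01 };
        byte[] select = command(0x00, 0xA4, 0x04, 0x00, aid, 0x00);
        StringBuilder sb = new StringBuilder();
        for (byte b : select) sb.append(String.format("%02X ", b));
        System.out.println(sb.toString().trim()); // 00 A4 04 00 05 A0 00 00 00 01 00
    }
}
```

The same builder works for the MuscleCard-style commands shown in the first post, since they follow the identical CLA/INS/P1/P2 layout.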
One of the usual applications of smart cards is to store RSA authentication keys and certificates that are used for anything from desktop logon to VPN or SSL authentication. This is typically implemented by providing some sort of middleware library, usually a standard cryptographic service provider (CSP) module that can plug into the system CSP or be loaded by a compatible application. As the Android security model does not allow system extensions provided by third party apps, in order to integrate with the system key management service, such middleware would need to be implemented as a <a href="http://nelenkov.blogspot.jp/2012/07/jelly-bean-hardware-backed-credential.html">keymaster</a> module for the system credential store (<code>keystore</code>) and be bundled as a system library. This can be accomplished by building a custom ROM which installs our custom <code>keymaster</code> module, but we can also take advantage of the SE without rebuilding the whole system. The most straightforward way to do this is to implement the security-critical part of an app inside the SE and have the app act as a client that only provides a user-facing GUI. One such application provided with the SEEK distribution is an SE-backed one-time password (OTP) <a href="https://code.google.com/p/seek-for-android/wiki/GoogleOtpAuthenticator">Google Authenticator</a> app. Since the critical part of OTP generators is the seed (usually a symmetric cryptographic key), they can easily be cloned once the seed is obtained or extracted. Thus OTP apps that store the seed in a regular file (like the official <a href="https://play.google.com/store/apps/details?id=com.google.android.apps.authenticator2">Google Authenticator</a> app) provide little protection if the device OS is compromised. 
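To make the cloning risk concrete, here is the complete HOTP computation from RFC 4226 in plain Java (the class name is ours): given the seed and a counter value, anyone can reproduce exactly the codes the authenticator displays, so the seed is the only secret worth protecting.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

// Minimal HOTP (RFC 4226): HMAC-SHA1 over the big-endian counter,
// dynamically truncated to a 6-digit code.
public class Hotp {
    public static int generate(byte[] seed, long counter) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(seed, "RAW"));
        byte[] hash = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());
        // dynamic truncation: take 4 bytes at the offset given by the low
        // nibble of the last byte, clear the sign bit, reduce mod 10^6
        int offset = hash[hash.length - 1] & 0x0f;
        int bin = ((hash[offset] & 0x7f) << 24) | ((hash[offset + 1] & 0xff) << 16)
                | ((hash[offset + 2] & 0xff) << 8) | (hash[offset + 3] & 0xff);
        return bin % 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        byte[] seed = "12345678901234567890".getBytes("ASCII");
        System.out.println(generate(seed, 0)); // RFC 4226 test vector: 755224
    }
}
```

The SE-backed variant keeps <code>seed</code> inside the card and exposes only the <code>generate()</code> step, which is exactly why extraction from the app's data becomes a non-issue.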
The SEEK GoogleOtpAuthenticator app both stores the seed and performs OTP generation inside the SE, making it impossible to recover the seed from the app data stored on the device.<br /><br />Another type of popular application that could benefit from using an SE is a password manager. Password managers typically use a user-supplied passphrase to derive a symmetric key, which is in turn used to encrypt stored passwords. This makes it hard to recover stored passwords without knowing the passphrase, but naturally the security level depends entirely on the passphrase's complexity. As usual, because typing a long string with rarely used characters on a mobile device is not a particularly pleasant experience, users tend to pick easier-to-type, low-entropy passphrases. If the key is stored in an SE, the passphrase can be skipped or replaced with a simpler PIN, making the password manager app both more user-friendly and more secure. Let's see how such an SE-backed password manager can be implemented using a Java Card applet and the Open Mobile API.<br /><div><h2>DIY SIM password manager</h2>Ideally, all key management and encryption logic should be implemented inside the SE and the client application would only provide input (plain text passwords) and retrieve opaque encrypted data. The SE applet should not only provide encryption, but also guarantee the integrity of encrypted data, either by using an algorithm that provides authenticated encryption (which most smart cards currently don't support natively) or by calculating a <a href="http://en.wikipedia.org/wiki/Message_Authentication_Code">MAC</a> over the encrypted data using <a href="http://en.wikipedia.org/wiki/HMAC">HMAC</a> or some similar mechanism. Smart cards typically provide some sort of encryption support, starting with DES/3DES for low-end models and going up to RSA and EC for top-of-the-line ones. 
Since public key cryptography is typically not needed for mobile network authentication or secure OTA (which is based on symmetric algorithms), SIM cards rarely support RSA or EC. A reasonably secure symmetric and hash algorithm should be enough to implement a simple password manager though, so in theory we should be able to use even a lower-end SIM.<br /><br />As mentioned in the previous section, all recent SIM cards are based on Java Card technology, and it is possible to develop and load a custom applet, provided one has access to the Card Manager or OTA keys. Those are naturally not available for commercial MNO SIMs, so we would need to use a blank 'programmable' SIM that allows for loading applets without authentication or comes bundled with the required keys. Those are quite hard, but <a href="http://shop.shadytel.com/">not impossible</a> to come by, so let's see how such a password manager applet could be implemented. We won't discuss the basics of Java Card programming, but jump straight to the implementation. Refer to the official <a href="http://www.oracle.com/technetwork/java/javame/javacard/download/platformspec/index.html">documentation</a>, or a <a href="http://www.oracle.com/technetwork/java/javacard/intro-139322.html">tutorial</a> if you need an introduction.<br /><br />The Java Card API provides a subset of the <a href="http://docs.oracle.com/javase/6/docs/technotes/guides/security/crypto/CryptoSpec.html">JCA</a> classes, with an interface optimized towards using pre-allocated, shared byte arrays, which is typical on a memory-constrained platform such as a smart card. 
A basic encryption example would look something like this:<br /><br /><pre>byte[] buff = apdu.getBuffer();<br />//..<br />DESKey deskey = (DESKey)KeyBuilder.buildKey(KeyBuilder.TYPE_DES_TRANSIENT_DESELECT, <br /> KeyBuilder.LENGTH_DES3_2KEY, false);<br />deskey.setKey(keyBytes, (short)0);<br />Cipher cipher = Cipher.getInstance(Cipher.ALG_DES_CBC_PKCS5, false);<br />cipher.init(deskey, Cipher.MODE_ENCRYPT);<br />cipher.doFinal(data, (short) 0, (short) data.length,<br /> buff, (short) 0);<br /></pre><br />As you can see, a dedicated key object, which is automatically cleared when the applet is deselected, is first created and then used to initialize a <code>Cipher</code> instance. Besides the unwieldy number of casts to <code>short</code> (necessary because 'classic' Java Card does not support <code>int</code>, even though it is Java's default integer type) the code is very similar to what you would find in a Java SE or Android application. Hashing uses the <code>MessageDigest</code> class and follows a similar routine. Using the system-provided <code>Cipher</code> and <code>MessageDigest</code> classes as building blocks, it is fairly straightforward to implement CBC mode encryption and HMAC for data integrity. However, as it happens, our low-end SIM card does not provide usable implementations of those classes (even though the spec sheet claims otherwise), so we would need to start from scratch. Fortunately, since Java cards can execute arbitrary programs (as long as they fit in memory), it is also possible to include our own encryption algorithm implementation in the applet. Even better, a Java Card optimized AES implementation is <a href="http://www.fi.muni.cz/~xsvenda/jcalgs.html#aes">freely available</a>. This implementation provides only the basic pieces of AES: key schedule generation and single-block encryption, so some additional work is required to match the Java <code>Cipher</code> class functionality. 
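For cards that do provide a working <code>MessageDigest</code>, the HMAC construction itself is simple enough to assemble by hand, following RFC 2104. A desktop-Java sketch (class name is ours; a Java Card port would rewrite it with pre-allocated arrays and <code>short</code> offsets):

```java
import java.security.MessageDigest;
import java.util.Arrays;

// HMAC built from a bare digest primitive (RFC 2104):
// HMAC(K, m) = H((K' ^ opad) || H((K' ^ ipad) || m))
public class HmacFromDigest {
    public static byte[] hmacSha256(byte[] key, byte[] msg) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        int blockLen = 64;                               // SHA-256 block size
        if (key.length > blockLen) key = md.digest(key); // hash over-long keys
        byte[] k = Arrays.copyOf(key, blockLen);         // zero-pad to block size
        byte[] ipad = new byte[blockLen], opad = new byte[blockLen];
        for (int i = 0; i < blockLen; i++) {
            ipad[i] = (byte) (k[i] ^ 0x36);
            opad[i] = (byte) (k[i] ^ 0x5c);
        }
        md.reset();
        md.update(ipad);
        byte[] inner = md.digest(msg);  // inner hash: H(ipad-key || msg)
        md.reset();
        md.update(opad);
        return md.digest(inner);        // outer hash: H(opad-key || inner)
    }
}
```

The result is byte-for-byte identical to the JCA's <code>HmacSHA256</code>, so the construction can be verified on the desktop before squeezing it onto the card.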
The bigger downside is that by using an algorithm implemented in software we cannot take advantage of the specialized crypto co-processor most smart cards have. With this implementation our SIM card (8-bit CPU, 6KB RAM) takes about 2 seconds to process a single AES block with a 128-bit key. The performance can be improved slightly by reducing the number of AES rounds to 7 (the standard specifies 10 for 128-bit keys), but that will both lower the security level of the system and result in a non-standard cipher, making testing more difficult. Another disadvantage is that native key objects are usually stored in a secured memory area that is better protected from side channel attacks, but by using our own cipher we are forced to store keys in regular byte arrays. With those caveats, this AES implementation should give us what we need for our demo application. Using the <code>JavaCardAES</code> class as a building block, our AES CBC encryption routine would look something like this:<br /><br /><pre>aesCipher.RoundKeysSchedule(keyBytes, (short) 0, roundKeysBuff);<br />short padSize = addPadding(cipherBuff, offset, len);<br />short paddedLen = (short) (len + padSize);<br />short blocks = (short) (paddedLen / AES_BLOCK_LEN);<br /><br />for (short i = 0; i &lt; blocks; i++) {<br /> short cipherOffset = (short) (i * AES_BLOCK_LEN);<br /> for (short j = 0; j &lt; AES_BLOCK_LEN; j++) {<br /> cbcV[j] ^= cipherBuff[(short) (cipherOffset + j)];<br /> }<br /> aesCipher.AESEncryptBlock(cbcV, OFFSET_ZERO, roundKeysBuff);<br /> Util.arrayCopyNonAtomic(cbcV, OFFSET_ZERO, cipherBuff,<br /> cipherOffset, AES_BLOCK_LEN);<br />}<br /></pre><br />Not as concise as using the system crypto classes, but it gets the job done. Finally (not shown), the IV and cipher text are copied to the APDU buffer and sent back to the caller. Decryption follows a similar pattern. 
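The chaining logic can be sanity-checked on the desktop by rebuilding CBC from a single-block primitive and comparing against the JCA's native CBC implementation. A sketch (class name ours; <code>AES/ECB/NoPadding</code> on one block stands in for <code>AESEncryptBlock</code>):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

// CBC mode assembled from a single-block encrypt primitive, mirroring the
// applet's loop: XOR the chaining value with the plaintext block, encrypt,
// and carry the ciphertext forward. PKCS#5 padding, as in the applet.
public class ManualCbc {
    static final int BLOCK = 16;

    public static byte[] encrypt(byte[] key, byte[] iv, byte[] plain) throws Exception {
        Cipher ecb = Cipher.getInstance("AES/ECB/NoPadding"); // single-block primitive
        ecb.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));

        // PKCS#5 padding: append n bytes, each of value n (full block if aligned)
        int padLen = BLOCK - (plain.length % BLOCK);
        byte[] data = Arrays.copyOf(plain, plain.length + padLen);
        Arrays.fill(data, plain.length, data.length, (byte) padLen);

        byte[] out = new byte[data.length];
        byte[] chain = iv.clone();
        for (int off = 0; off < data.length; off += BLOCK) {
            for (int j = 0; j < BLOCK; j++) chain[j] ^= data[off + j];
            chain = ecb.doFinal(chain);                  // encrypt chaining block
            System.arraycopy(chain, 0, out, off, BLOCK); // emit ciphertext block
        }
        return out;
    }
}
```

Because the output matches <code>AES/CBC/PKCS5Padding</code> exactly, the same test vectors can later be replayed against the applet over APDUs.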
One thing that is obviously missing is the MAC, but as it turns out a hash algorithm implemented in software is prohibitively slow on our SIM (mostly because it needs to access large tables stored in the slow card EEPROM). While a MAC can also be implemented using the AES primitive, we have omitted it from the sample applet. In practice, tampering with the cipher text of encrypted passwords would only result in incorrect passwords, but it is still a good idea to use a MAC when implementing this on a fully functional Java Card.<br /><br />Our applet can now perform encryption and decryption, but one critical piece is still missing -- a random number generator. The Java Card API has the <code>RandomData</code> class which is typically used to generate key material and IVs for cryptographic operations, but just as with the <code>Cipher</code> class it is not available on our SIM. Therefore, unfortunately, we need to apply the DIY approach again. To keep things simple and with a (somewhat) reasonable response time, we implement a simple pseudo random number generator (PRNG) based on AES in counter mode. As mentioned above, the largest integer type in classic Java Card is <code>short</code>, so the counter will wrap as soon as it goes over 32767. While this can be overcome fairly easily by using a persistent byte array to simulate a <code>long</code> (or <code>BigInteger</code> if you are more ambitious), the bigger problem is that there is no suitable source of entropy on the smart card that we can use to seed the PRNG. Therefore the PRNG AES key and nonce need to be specified at applet install time and be unique to each SIM. 
Our simplistic PRNG implementation based on the <code>JavaCardAES</code> class is shown below (<code>buff</code> is the output buffer):<br /><br /><pre>Util.arrayCopyNonAtomic(prngNonce, OFFSET_ZERO, cipherBuff,<br /> OFFSET_ZERO, (short) prngNonce.length);<br />Util.setShort(cipherBuff, (short) (AES_BLOCK_LEN - 2), prngCounter);<br /><br />aesCipher.RoundKeysSchedule(prngKey, (short) 0, roundKeysBuff);<br />aesCipher.AESEncryptBlock(cipherBuff, OFFSET_ZERO, roundKeysBuff);<br />prngCounter++;<br /><br />Util.arrayCopyNonAtomic(cipherBuff, OFFSET_ZERO, buff, offset, len);<br /></pre><br />The recent <a href="http://thegenesisblock.com/security-vulnerability-in-all-android-bitcoin-wallets/">Bitcoin app problems</a> traced to a repeatable PRNG in Android, the controversy around the Dual_EC_DRBG algorithm, which is both believed to be <a href="http://blog.cryptographyengineering.com/2013/09/the-many-flaws-of-dualecdrbg.html">weak by design</a> and <a href="http://blog.cryptographyengineering.com/2013/09/rsa-warns-developers-against-its-own.html">used by default</a> in popular crypto toolkits, and finally the <a href="http://smartfacts.cr.yp.to/analysis.html">low-quality</a> hardware RNG found in FIPS certified smart cards have all highlighted the critical impact a flawed PRNG can have on any system that uses cryptography. That is why a DIY PRNG is definitely not something you would like to use in a production system. Do find a SIM that provides working crypto classes and do use <code>RandomData.ALG_SECURE_RANDOM</code> to initialize the PRNG (that won't help much if the card's hardware RNG is flawed, of course). <br /><br />With that we have all the pieces needed to implement the password manager applet, and what is left is to define and expose a public interface. For Java Card this means defining the values of the <code>CLA</code> and <code>INS</code> bytes the applet can process. 
Besides the obviously required encrypt and decrypt commands, we also provide commands to get the current state, initialize and clear the applet.<br /><br /><pre>static final byte CLA = (byte) 0x80;<br />static final byte INS_GET_STATUS = (byte) 0x1;<br />static final byte INS_GEN_RANDOM = (byte) 0x2;<br />static final byte INS_GEN_KEY = (byte) 0x03;<br />static final byte INS_ENCRYPT = (byte) 0x4;<br />static final byte INS_DECRYPT = (byte) 0x5;<br />static final byte INS_CLEAR = (byte) 0x6;<br /></pre><br />Once we have a working applet, implementing the Android client is fairly straightforward. We need to connect to the <code>SEService</code>, open a logical channel to our applet (AID: <code>73 69 6d 70 61 73 73 6d 61 6e 01</code>) and send the appropriate APDUs using the protocol outlined above. For example, sending a string to be encrypted requires the following code (assuming we already have an open <code>Session</code> to the SE). Here <code>0x9000</code> is the standard ISO 7816-3/4 success status word (SW):<br /><br /><pre>Channel channel = session.openLogicalChannel(fromHex("73 69 6d 70 61 73 73 6d 61 6e 01"));<br />byte[] data = "password".getBytes("ASCII");<br />String cmdStr = "80 04 00 00 " + String.format("%02x", data.length)<br /> + toHex(data) + "00";<br />byte[] rapdu = channel.transmit(fromHex(cmdStr));<br />short sw = (short) ((rapdu[rapdu.length - 2] &lt;&lt; 8) | (0xff &amp; rapdu[rapdu.length - 1]));<br />if (sw != (short) 0x9000) {<br /> // handle error<br />}<br />byte[] ciphertext = Arrays.copyOf(rapdu, rapdu.length - 2);<br />String encrypted = Base64.encodeToString(ciphertext, Base64.NO_WRAP);<br /></pre><br />Besides calling applet operations by sending commands to the SE, the sample Android app also has a simple database to store encrypted passwords paired with a description, and displays currently managed passwords in a list view. 
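The status word arithmetic is an easy place to introduce sign-extension bugs (Java's <code>byte</code> is signed), so it is worth isolating in a helper. A small hypothetical utility class (names are ours) illustrating the masking:

```java
import java.util.Arrays;

// Hypothetical helpers for picking apart a response APDU: the last two
// bytes are the status word (SW1 SW2), everything before them is data.
// Masking both bytes with 0xff avoids sign extension of the signed byte type.
public class ApduUtil {
    public static int statusWord(byte[] rapdu) {
        return ((rapdu[rapdu.length - 2] & 0xff) << 8) | (rapdu[rapdu.length - 1] & 0xff);
    }

    public static byte[] responseData(byte[] rapdu) {
        return Arrays.copyOf(rapdu, rapdu.length - 2);
    }

    public static boolean isSuccess(byte[] rapdu) {
        return statusWord(rapdu) == 0x9000; // ISO 7816-4 'normal processing'
    }
}
```

Treating the SW as an <code>int</code> rather than a <code>short</code> also makes comparisons against constants like <code>0x6A82</code> (file not found) read naturally.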
Long pressing on the password name will bring up a contextual action that allows you to decrypt and temporarily display the password so you can copy it and paste it into the target application. The current implementation does not require a PIN to decrypt passwords, but one can easily be added using Java Card's <code>OwnerPIN</code> class, optionally disabling the applet once a number of incorrect tries is reached. While this app can hardly compete with popular password managers, it has enough functionality to both illustrate the concept of an SE-backed app and be practically useful. Passwords can be added by pressing the '+' action item and the delete item clears the encryption key and PRNG counter, but not the PRNG seed and nonce. A screenshot of the award-winning UI is shown below. Full source code for both the applet and the Android app is available on <a href="https://github.com/nelenkov/sim-password-manager">Github</a>.<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-4ZqAPtWWijo/UkWOar7nUmI/AAAAAAAAPdI/Os8mj_RFkNM/s1600/decrypted-password.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="http://3.bp.blogspot.com/-4ZqAPtWWijo/UkWOar7nUmI/AAAAAAAAPdI/Os8mj_RFkNM/s640/decrypted-password.png" width="384" /></a></div><br /></div><h2>Summary</h2>The AOSP version of Android does not provide a standard API to use the SIM card as a SE, but many vendors do, and as long as the device baseband and RIL support APDU exchange, one can be added by using the SEEK for Android patches. This makes it possible to improve the security of Android apps by using the SIM as a secure element, both to store sensitive data and to implement critical functionality inside it. 
Commercial SIMs do not allow installing arbitrary user applications, but applets can be automatically loaded by the carrier using the SIM OTA mechanism, and apps that take advantage of those applets can be distributed through regular channels, such as the Play Store.<br /><br />Thanks to <a href="http://www.mroland.at/">Michael</a> for developing the Galaxy S2/3 RIL patch and helping with getting it to work on my somewhat exotic S2.<br /><br />Nikolay Elenkov<br /><br /><h2>Credential storage enhancements in Android 4.3</h2>(2013-08-21, updated 2014-03-14)<br /><br />Our <a href="http://nelenkov.blogspot.jp/2013/07/building-wireless-android-device.html">previous post</a>&nbsp;was not related to Android security, but happened to coincide with the&nbsp;<a href="http://developer.android.com/about/versions/jelly-bean.html">Android 4.3</a>&nbsp;announcement. Now that the post-release dust has settled, time to give it a proper welcome here as well. Being a minor update, there is nothing ground-breaking, but this '<a href="http://developer.android.com/reference/android/os/Build.VERSION_CODES.html#JELLY_BEAN_MR2">revenge of the beans</a>' brings some welcome enhancements and new APIs. <a href="https://source.android.com/devices/tech/security/enhancements43.html">Enough</a>&nbsp;of those are related to security for some to even call 4.3 a 'security release'. Of course, the big star is <a href="http://selinuxproject.org/page/SEAndroid">SELinux</a>, but credential storage, which has been a <a href="http://nelenkov.blogspot.jp/2011/11/using-ics-keychain-api.html">somewhat</a> <a href="http://nelenkov.blogspot.jp/2011/11/ics-credential-storage-implementation.html">recurring</a> <a href="http://nelenkov.blogspot.jp/2012/07/jelly-bean-hardware-backed-credential.html">topic</a> on this blog, got a significant facelift too, so we'll look into it first. 
This post will focus mainly on the newly introduced features and interfaces, so you might want to review previous credential storage posts before continuing.<br /><h3>What's new in 4.3</h3><div>First and foremost, the system credential store, now officially named&nbsp;'Android Key Store', has a public <a href="http://developer.android.com/reference/android/security/KeyPairGeneratorSpec.html">API</a> for storing and using&nbsp;app-private keys. This was <a href="http://nelenkov.blogspot.jp/2012/05/storing-application-secrets-in-androids.html">possible</a> before too, but not officially supported and somewhat clunky on pre-ICS devices. Next, while only the primary (owner) user could use the system key store pre-4.3, it is now multi-user compatible and each user gets their own keys. Finally, there is an <a href="http://developer.android.com/reference/android/security/KeyChain.html#isBoundKeyAlgorithm(java.lang.String)">API</a> and even a system settings field that lets you check whether the credential store is hardware-backed (Nexus 4, Nexus 7) or software only (Galaxy Nexus). While the core functionality hasn't changed much since the previous release, the implementation strategy has evolved quite a bit, so we will look briefly into that too. 
That's a lot to cover, so let's get started.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-IZ8SqUUB5Fo/Uh3hI6TV9II/AAAAAAAAPTw/_UTJKJrdA7E/s1600/hw-backed.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-IZ8SqUUB5Fo/Uh3hI6TV9II/AAAAAAAAPTw/_UTJKJrdA7E/s400/hw-backed.png" height="640" width="384" /></a></div><h3>Public API</h3></div><div>The API is outlined in the 'Security' section of the new 4.3 <a href="http://developer.android.com/about/versions/android-4.3.html#Security">API introduction page</a>, and details can be found in the official <a href="http://developer.android.com/reference/android/security/package-summary.html">SDK reference</a>, so we will only review it briefly. Instead of introducing yet another Android-specific API, key store access is exposed via standard JCE APIs, namely <a href="http://developer.android.com/reference/javax/crypto/KeyGenerator.html"><code>KeyGenerator</code></a> and <a href="http://developer.android.com/reference/java/security/KeyStore.html"><code>KeyStore</code></a>. Both are backed by a new Android JCE provider, <a href="https://android.googlesource.com/platform/frameworks/base.git/+/android-4.3_r2.1/keystore/java/android/security/AndroidKeyStoreProvider.java"><code>AndroidKeyStoreProvider</code></a>, and are accessed by passing <code>"AndroidKeyStore"</code> as the <code>type</code> parameter of the respective factory methods (those APIs were actually available in 4.2 as well, but were not public). For a full sample detailing their usage, refer to the <code>BasicAndroidKeyStore</code>&nbsp;project in the Android SDK. 
To introduce their usage briefly, first you create a&nbsp;<a href="http://developer.android.com/reference/android/security/KeyPairGeneratorSpec.html"><code>KeyPairGeneratorSpec</code></a> that describes the keys you want to generate (including a self-signed certificate), initialize a&nbsp;<a href="http://developer.android.com/reference/java/security/KeyPairGenerator.html"><code>KeyPairGenerator</code></a> with it and then generate the keys by calling <a href="http://developer.android.com/reference/java/security/KeyPairGenerator.html#generateKeyPair()"><code>generateKeyPair()</code></a>. The most important parameter is the alias, which you then pass to <a href="http://developer.android.com/reference/java/security/KeyStore.html#getEntry(java.lang.String, java.security.KeyStore.ProtectionParameter)"><code>KeyStore.getEntry()</code></a> in order to get a handle to the generated keys later. There is currently no way to specify key size or type and generated keys default to 2048-bit RSA. Here's how all this looks:<br /><br /><pre>// generate a key pair<br />Context ctx = getContext();<br />Calendar notBefore = Calendar.getInstance();<br />Calendar notAfter = Calendar.getInstance();<br />notAfter.add(Calendar.YEAR, 1);<br />KeyPairGeneratorSpec spec = new KeyPairGeneratorSpec.Builder(ctx)<br /> .setAlias("key1")<br /> .setSubject(<br /> new X500Principal(String.format("CN=%s, OU=%s", "key1",<br /> ctx.getPackageName())))<br /> .setSerialNumber(BigInteger.ONE).setStartDate(notBefore.getTime())<br /> .setEndDate(notAfter.getTime()).build();<br /><br />KeyPairGenerator kpGenerator = KeyPairGenerator.getInstance("RSA", "AndroidKeyStore");<br />kpGenerator.initialize(spec);<br />KeyPair kp = kpGenerator.generateKeyPair();<br /><br />// in another part of the app, access the keys<br />KeyStore keyStore = KeyStore.getInstance("AndroidKeyStore");<br />keyStore.load(null);<br />KeyStore.PrivateKeyEntry keyEntry = (KeyStore.PrivateKeyEntry)keyStore.getEntry("key1", null);<br />RSAPublicKey pubKey = (RSAPublicKey)keyEntry.getCertificate().getPublicKey();<br />RSAPrivateKey privKey = (RSAPrivateKey) keyEntry.getPrivateKey();<br /></pre><br />If the device has a hardware-backed key store implementation, keys will be generated outside of the Android OS and won't be directly accessible even to the system (or root user). If the implementation is software only, keys will be encrypted with a per-user key-encryption master key. We'll discuss key protection in detail later.<br /><h3>Android 4.3 implementation</h3></div><div>This hardware-backed design was <a href="http://nelenkov.blogspot.jp/2012/07/jelly-bean-hardware-backed-credential.html">initially implemented</a> in the original Jelly Bean release (4.1), so what's new here? Credential storage has traditionally (since the Donut days) been implemented as a native <code>keystore</code> daemon that used a local socket as its IPC interface. The daemon has finally been retired and replaced with a 'real' Binder service, which implements the <a href="https://android.googlesource.com/platform/frameworks/base/+/android-4.3_r2.1/core/java/android/security/IKeystoreService.java">IKeyStoreService</a> interface. What's interesting here is that the service is implemented in C++, which is somewhat rare in Android. See the interface definition for details, but compared to the original&nbsp;<code>keymaster</code>-based implementation, <code>IKeyStoreService</code> gets 4 new operations: <code>getmtime()</code>, <code>duplicate()</code>, <code>is_hardware_backed()</code> and <code>clear_uid()</code>. As expected, <code>getmtime()</code> returns the key modification time and <code>duplicate()</code> copies a key blob (used internally for key migration). <code>is_hardware_backed()</code> will query the underlying <code>keymaster</code> implementation and return <code>true</code> when it is hardware-backed. The last new operation,&nbsp;<code>clear_uid()</code>, is a bit more interesting. 
As we mentioned, the key store now supports multi-user devices and each user gets their own set of keys, stored in <code>/data/misc/keystore/user_N</code>, where <code>N</code> is the Android user ID. Key names (aliases) are mapped to filenames as before, and the owner app UID now reflects the Android user ID as well. When an app that owns key store-managed keys is uninstalled for a user, only keys created by that user are deleted. If an app is completely removed from the system, its keys are deleted for all users. Since key access is tied to the app UID, this prevents a different app that happens to get the same UID from accessing an uninstalled app's keys. Key store reset, which deletes both key files and the master key, also affects only the current user. Here's how key files for the primary user might look:<br /><br /><pre>1000_CACERT_ca<br />1000_CACERT_cacert<br />10248_USRCERT_myKey<br />10248_USRPKEY_myKey<br />10293_USRCERT_rsa_key0<br />10293_USRPKEY_rsa_key0<br /></pre><br />The actual files are owned by the <code>keystore</code> service (which runs as the <code>keystore</code> Linux user) and it checks the calling UID to decide whether to grant or deny access to a key file, just as before. If the keys are protected by hardware, key files may contain only a reference to the actual key and deleting them may not destroy the underlying keys. Therefore, the <code>del_key()</code> operation is optional and may not be implemented. <br /><h3>The hardware in 'hardware-backed'</h3>To give some perspective to the whole 'hardware-backed' idea, let's briefly discuss how it is implemented on the Nexus 4. As you may know, the Nexus 4 is based on Qualcomm's Snapdragon S4 Pro APQ8064 SoC. Like most recent ARM SoCs, it is <a href="http://www.arm.com/products/processors/technologies/trustzone.php">TrustZone</a>-enabled and Qualcomm implements its Secure Execution Environment (QSEE) on top of it. 
Details are, as usual, quite scarce, but trusted applications are separated from the main OS and the only way to interact with them is through the controlled interface the <code>/dev/qseecom</code> device provides. Android applications that wish to interact with the QSEE load the proprietary <code>libQSEEComAPI.so</code> library and use the functions it provides to send 'commands' to the QSEE. As with most other SEEs, the <code>QSEECom</code> communication API is quite low-level and basically only allows for exchanging binary blobs (typically commands and replies), whose contents entirely depend on the secure app you are communicating with. In the case of the Nexus 4 <code>keymaster</code>, the commands used are: <code>GENERATE_KEYPAIR</code>, <code>IMPORT_KEYPAIR</code>, <code>SIGN_DATA</code> and <code>VERIFY_DATA</code>. The <code>keymaster</code> implementation merely creates command structures, sends them via the <code>QSEECom</code> API and parses the replies. It does not contain any cryptographic code itself.<br /><br />An interesting detail is that the QSEE keystore trusted app (which may not be a dedicated app, but part of a more general-purpose trusted application) doesn't return simple references to protected keys, but instead uses proprietary encrypted key blobs (not unlike <a href="http://www.thales-esecurity.com/products-and-services/products-and-services/hardware-security-modules"><strike>nCipher</strike> Thales HSMs</a>). In this model, the only thing that is actually protected by hardware is some form of 'master' key-encryption key (KEK), and user-generated keys are only indirectly protected by being encrypted with the KEK. 
This allows for a practically unlimited number of protected keys, but has the disadvantage that if the KEK is compromised, all externally stored key blobs are compromised as well (of course, the actual implementation might generate a dedicated KEK for each key blob created or the key can be fused in hardware; either way no details are available). Qualcomm <code>keymaster</code> key blobs are defined in AOSP code as shown below. This suggests that private exponents are encrypted using AES, most probably in CBC mode, with an added HMAC-SHA256 to check encrypted data integrity. Those might be further encrypted with the Android key store master key when stored on disk.<br /><br /><pre>#define KM_MAGIC_NUM (0x4B4D4B42) /* "KMKB" Key Master Key Blob in hex */<br />#define KM_KEY_SIZE_MAX (512) /* 4096 bits */<br />#define KM_IV_LENGTH (16) /* AES128 CBC IV */<br />#define KM_HMAC_LENGTH (32) /* SHA2 will be used for HMAC */<br /><br />struct qcom_km_key_blob {<br /> uint32_t magic_num;<br /> uint32_t version_num;<br /> uint8_t modulus[KM_KEY_SIZE_MAX];<br /> uint32_t modulus_size;<br /> uint8_t public_exponent[KM_KEY_SIZE_MAX];<br /> uint32_t public_exponent_size;<br /> uint8_t iv[KM_IV_LENGTH];<br /> uint8_t encrypted_private_exponent[KM_KEY_SIZE_MAX];<br /> uint32_t encrypted_private_exponent_size;<br /> uint8_t hmac[KM_HMAC_LENGTH];<br />};<br /></pre><br />So, in the case of the Nexus 4, the 'hardware' is simply the ARM SoC. Are other implementations possible? Theoretically, a hardware-backed <code>keymaster</code> implementation does not need to be based on TrustZone. Any dedicated device that can generate and store keys securely can be used, the usual suspects being embedded secure elements (SE) and TPMs. 
However, there are no mainstream Android devices with dedicated TPMs and recent flagship devices have begun shipping <a href="http://www.nfcworld.com/2013/07/30/325212/no-secure-element-in-new-nexus-7/">without embedded SEs</a>, most probably due to carrier pressure (price is hardly a factor, since embedded SEs are usually in the same package as the NFC controller). Of course, all mobile devices have some form of&nbsp;<a href="http://en.wikipedia.org/wiki/UICC" target="_blank">UICC</a>&nbsp;(SIM card), which typically can generate and store keys, so why not use that? Well, Android still doesn't have a standard API to access the UICC, even though 'vendor' firmwares often include one. So while one could theoretically implement a UICC-based <code>keymaster</code> module compatible with the UICCs of your friendly neighbourhood MNO, it is not very likely to happen.<br /><h3>Security level</h3>So how secure are your brand new hardware-backed keys? The answer is, as usual, it depends. If they are stored in a real, dedicated, tamper-resistant hardware module, such as an embedded SE, they are as secure as the SE. And since this technology has been around for over 40 years, and even <a href="https://srlabs.de/rooting-sim-cards/">recent attacks</a>&nbsp;are only effective against SEs using weak encryption algorithms, that means fairly secure. Of course, as we mentioned in the previous section, there are no current <code>keymaster</code> implementations that use actual SEs, but we can only hope.<br /><br />What about TrustZone? 
It is being aggressively <a href="http://www.arm.com/files/pdf/Tech_seminar_TrustZone_v7_PUBLIC.pdf">marketed</a>&nbsp;as a mobile security 'silver bullet', and streaming media companies have embraced it as an 'end-to-end' DRM solution, but does it really deliver? While the ARM TrustZone architecture might be sound at its core, in the end trusted applications are just software that runs at a slightly lower level than Android. As such, they can be readily reverse engineered, and of course vulnerabilities have been <a href="http://blog.azimuthsecurity.com/2013/04/unlocking-motorola-bootloader.html">found</a>. And since they run within the Secure World, they can effectively access everything on the device, including other trusted applications. When exploited, this could lead to very effective and hard-to-discover <a href="https://www.hackinparis.com/sites/hackinparis.com/files/Slidesthomasroth.pdf">rootkits</a>. To sum up: while TrustZone secure applications might provide effective protection against Android malware running on the device, given physical access, they, as well as the TrustZone kernel, are exploitable themselves. Applied to the Android key store, this means that if there is an exploitable vulnerability in any of the underlying trusted applications the <code>keymaster</code> module depends on, key-encryption keys could be extracted and 'hardware-backed' keys could be compromised.</div><div><h3>Advanced usage</h3></div><div>As we mentioned in the first section, Android 4.3 offers a well-defined public API to the system key store. It should be sufficient for most use cases, but if needed you can connect to the <code>keystore</code> service directly (as always, not really recommended). Because it is not part of the Android SDK, the <code>IKeyStoreService</code> doesn't have a wrapper 'Manager' class, so if you want to get a handle to it, you need to get one directly from the <code>ServiceManager</code>. 
That too is hidden from SDK apps, but, as usual, you can use reflection. From there, it's just a matter of calling the interface methods you need (see <a href="https://github.com/nelenkov/android-keystore">sample project</a> on Github). Of course, if the calling UID doesn't have the necessary permission, access will be denied, but most operations are available to all apps.<br /><br /><pre>Class smClass = Class.forName("android.os.ServiceManager");<br />Method getService = smClass.getMethod("getService", String.class);<br />IBinder binder = (IBinder) getService.invoke(null, "android.security.keystore");<br />IKeystoreService keystore = IKeystoreService.Stub.asInterface(binder);<br /></pre><br />By using the <code>IKeyStoreService</code> directly, you can store symmetric keys or other secret data in the system key store via the <code>put()</code> method, which the current <code>java.security.KeyStore</code> implementation does not allow (it can only store <code>PrivateKey</code>s). Such data is only encrypted by the key store master key; even if the system key store is hardware-backed, the data itself is not protected by hardware in any way.<br /><br />Accessing hidden services is not the only way to augment the system key store functionality.&nbsp;Since the <code>sign()</code>&nbsp;operation implements a 'raw' signature operation (RSASP1 in <a href="http://www.ietf.org/rfc/rfc3447.txt">RFC 3447</a>), key store-managed (including hardware-backed) keys can be used to implement signature algorithms not natively supported by Android. 
You don't need to use the <code>IKeyStoreService</code> interface, because this operation is available through the standard JCE <code>Cipher</code> interface: <br /><br /><pre>KeyStore ks = KeyStore.getInstance("AndroidKeyStore");<br />ks.load(null);<br />KeyStore.PrivateKeyEntry keyEntry = (KeyStore.PrivateKeyEntry) ks.getEntry("key1", null);<br />RSAPrivateKey privKey = (RSAPrivateKey) keyEntry.getPrivateKey();<br /><br />Cipher cipher = Cipher.getInstance("RSA/ECB/NoPadding");<br />cipher.init(Cipher.ENCRYPT_MODE, privKey);<br />byte[] result = cipher.doFinal(in, 0, in.length);<br /></pre><br />If you use this primitive to implement, for example, <a href="http://www.bouncycastle.org/">Bouncy Castle</a>'s <code>AsymmetricBlockCipher</code> interface, you can use any signature algorithm available in the Bouncy Castle lightweight API (we actually use <a href="http://rtyley.github.io/spongycastle/">Spongy Castle</a> to stay compatible with Android 2.x without too much hassle). For example, if you want to use a more modern (and provably secure) signature algorithm than Android's default PKCS#1.5 implementation, such as RSA-PSS, you can accomplish it with something like this (see <a href="https://github.com/nelenkov/android-keystore">sample project</a> for <code>AndroidRsaEngine</code>):<br /><br /><pre>AndroidRsaEngine rsa = new AndroidRsaEngine("key1", true);<br /><br />Digest digest = new SHA512Digest();<br />Digest mgf1digest = new SHA512Digest();<br />PSSSigner signer = new PSSSigner(rsa, digest, mgf1digest, 512 / 8);<br />RSAKeyParameters params = new RSAKeyParameters(false,<br /> pubKey.getModulus(), pubKey.getPublicExponent());<br /><br />signer.init(true, params);<br />signer.update(signedData, 0, signedData.length);<br />byte[] signature = signer.generateSignature();<br /></pre><br />Likewise, if you need to implement RSA key exchange, you can easily make use of OAEP padding like this: <br /><br /><pre>AndroidRsaEngine rsa = new AndroidRsaEngine("key1", false);<br /><br />Digest digest = 
new SHA512Digest();<br />Digest mgf1digest = new SHA512Digest();<br />OAEPEncoding oaep = new OAEPEncoding(rsa, digest, mgf1digest, null);<br /><br />oaep.init(true, null);<br />byte[] cipherText = oaep.processBlock(plainBytes, 0, plainBytes.length);<br /></pre><br />The <a href="https://github.com/nelenkov/android-keystore">sample application</a> shows how to tie all of those APIs together and features an elegant and fully Holo-compatible user interface:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-gnTCgqC89AI/UhOBYxf1csI/AAAAAAAAPQw/rdHISlhAnsg/s1600/android-keystore-43.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-gnTCgqC89AI/UhOBYxf1csI/AAAAAAAAPQw/rdHISlhAnsg/s400/android-keystore-43.png" height="640" width="384" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div><br />An added benefit of using hardware-backed keys is that, since they are not generated using Android's default <code>SecureRandom</code> implementation, they should not be affected by the recently announced <a href="http://android-developers.blogspot.jp/2013/08/some-securerandom-thoughts.html"><code>SecureRandom</code> vulnerability</a>&nbsp;(of course, since the implementation is closed, we can only hope that trusted apps' RNG actually works...). However, Bouncy Castle's PSS and OAEP implementations do use <code>SecureRandom</code> internally, so you might want to seed the PRNG 'manually' before starting your app to make sure it doesn't start with the same PRNG state as other apps. The <code>keystore</code> daemon/service uses&nbsp;<code>/dev/urandom</code> directly as a source of randomness when generating master keys used for key file encryption, so they should not be affected. 
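The 'manual' seeding mentioned above could look something like the sketch below. This is only an illustration, not the official remedy for the vulnerability: it reads extra entropy from <code>/dev/urandom</code> and mixes it in via <code>setSeed()</code>, which supplements (rather than replaces) the PRNG's internal state. The class and method names are ours; in an app this would typically run early, e.g. in <code>Application.onCreate()</code>.

```java
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.security.SecureRandom;

public class SeedPrng {
    // Mix device entropy into the default PRNG before any crypto is done.
    public static SecureRandom seededPrng() throws IOException {
        byte[] seed = new byte[32];
        DataInputStream in = new DataInputStream(new FileInputStream("/dev/urandom"));
        try {
            in.readFully(seed);
        } finally {
            in.close();
        }
        SecureRandom sr = new SecureRandom();
        sr.setSeed(seed); // supplements, does not replace, the internal state
        return sr;
    }

    public static void main(String[] args) throws IOException {
        byte[] random = new byte[16];
        seededPrng().nextBytes(random);
        System.out.println(random.length);
    }
}
```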
RSA keys generated by the <code>softkeymaster</code> OpenSSL-based software implementation might be affected, because OpenSSL uses <code>RAND_bytes()</code> to generate primes, but are probably OK since the <code>keystore</code> daemon/service runs in a dedicated process and the OpenSSL PRNG automatically seeds itself from <code>/dev/urandom</code> on first access (unfortunately there are no official details about the 'insecure SecureRandom' problem, so we can't be certain).<br /><h3>Summary</h3></div><div>Android 4.3 offers a standard SDK API for generating and accessing app-private RSA keys, which makes it easier for non-system apps to store their keys securely, without implementing key protection themselves. The new Jelly Bean also offers hardware-backed key storage on supported devices, which guarantees that even system or root apps cannot extract the keys. Protection against physical access attacks depends on the implementation, with most (all?) current implementations being TrustZone-based. Low-level RSA operations with key store managed keys are also possible, which enables apps to use cryptographic algorithms not provided by Android's built-in JCE providers.</div><div><br /></div>Nikolay Elenkovhttps://plus.google.com/117221066931981967754noreply@blogger.com123tag:blogger.com,1999:blog-2873091912851440312.post-88642380294453580492013-07-24T11:51:00.002+09:002013-09-20T23:35:14.356+09:00Building a wireless Android device using BeagleBone BlackOur <a href="http://nelenkov.blogspot.com/2013/04/android-code-signing.html">previous</a> <a href="http://nelenkov.blogspot.com/2013/05/code-signing-in-androids-security-model.html">posts</a> were about code signing in Android, and they turned out to be surprisingly relevant with the announcement of the '<a href="http://bluebox.com/corporate-blog/bluebox-uncovers-android-master-key">master key'</a> code signing Android vulnerability. 
While details are yet to be formally released, it has already been <a href="https://jira.cyanogenmod.org/browse/CYAN-1602">patched</a> and <a href="http://www.saurik.com/id/17">dissected</a>, so we'll skip that one and try something different for a change. This post is not directly related to Android security, but will discuss some Android implementation details, so it might be of some interest to our regular readers. Without further ado, let's get closer to the metal than usual and build a wireless Android device (almost) from scratch.<br /><h2>Board introduction -- BeagleBone Black</h2><div>For our device we'll use the recently released <a href="http://beagleboard.org/Products/BeagleBone%20Black">BeagleBone Black</a> board. So what is a BeagleBone Black (let's call it BBB from now on), then? It's the latest addition to the ranks of ARM-based, single-board, credit-card-sized computers. It comes with an <a href="http://www.ti.com/product/am3359">AM335x 1GHz ARM Cortex-A8 </a>CPU,&nbsp;512MB RAM, 2GB on-board eMMC flash, Ethernet, HDMI and USB ports, plus a whole lot of I/O pins. Best of all, it's open source hardware, and all schematics and design documents are <a href="https://github.com/CircuitCo/BeagleBone-Black-RevA5B/">freely available</a>. It's hard to beat the price of $45 and it looks much, much better than the jagged Raspberry Pi. It comes with <a href="http://www.angstrom-distribution.org/">Angstrom</a> Linux pre-installed, but can run pretty much any Linux flavour, and of course, Android. It is being used for anything from <a href="http://learn.adafruit.com/blinking-an-led-with-beaglebone-black/overview">blinking LEDs</a> to <a href="http://travisgoodspeed.blogspot.jp/2013/07/hillbilly-tracking-of-low-earth-orbit.html">tracking satellites</a>. You can hook it up to circuits you've built or quickly extend it using one of the many '<a href="http://circuitco.com/support/index.php?title=BeagleBone_Capes">cape</a>' plug-in boards available. 
We'll use a couple of those for our project, so 'building' refers mostly to creating an Android build compatible with our hardware. We'll detail the hardware later, but let's first outline some simple requirements for our mobile Android device:</div><div><ol><li>touch screen input</li><li>wireless connectivity via WiFi</li><li>battery powered</li></ol><div>Here's what we start with:<br /><br /></div><div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-BIoretDmICI/Ue6ry1zNfuI/AAAAAAAAPB4/AGB554u9pCo/s1600/bbb-scaled.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="288" src="http://3.bp.blogspot.com/-BIoretDmICI/Ue6ry1zNfuI/AAAAAAAAPB4/AGB554u9pCo/s400/bbb-scaled.png" width="400" /></a></div><h2>Building a kernel for Android</h2></div></div>Android support for AM335x-based devices is provided by the <a href="https://code.google.com/p/rowboat/">rowboat</a> project. It integrates the required kernel and OS patches and provides build configurations for each of the supported devices, including the BBB. The latest version is based on Android 4.2.2, and if you want to get started quickly, you can download a <a href="http://downloads.ti.com/sitara_android/esd/TI_Android_DevKit/TI_Android_JB_4_2_2_DevKit_4_1_1/exports/TI_Android_JB_4.2.2_DevKit_4.1.1_beagleboneblack.tar.gz">binary build</a> from TI's Sitara Android <a href="http://downloads.ti.com/sitara_android/esd/TI_Android_DevKit/TI_Android_JB_4_2_2_DevKit_4_1_1/index_FDS.html">development kit page</a>. All you need to do is flash it to an SD card, connect the BBB to an HDMI display and power it on. You will instantly get a fully working, hardware-accelerated Jelly Bean 4.2 device you can control using a standard USB keyboard and mouse. If that is all you need, you might as well stop reading here. Our first requirement, however, is a working touch screen, not an HDMI monitor, so we have some work to do. 
As it happens, a number of LCD capes are already available for the BBB (from <a href="http://circuitco.com/">circuitco</a> and others), so those are our first choice. We opted for the <a href="http://circuitco.com/support/index.php?title=BeagleBone_LCD4">LCD4</a> 4.3" cape, which offers almost reasonable resolution and is small enough to be directly attached to the BBB. Unfortunately, it doesn't work with the <i>rowboat</i> build from TI. To understand why, let's take a step back and discuss how the BBB supports extension hardware, including capes.<br /><div><h3>Linux Device Tree and cape support</h3><div><div>If you look at the expansion header pinout table in the BBB <a href="https://github.com/CircuitCo/BeagleBone-Black-RevA5B/blob/master/BBB_SRM.pdf?raw=true">reference manual</a>, you will notice that each pin can serve multiple purposes, depending on configuration. This is called 'pinmuxing' and is the method modern SoCs use to multiplex multiple peripheral functions to a limited set of physical pins. The AM335x CPU the BBB uses is no exception: it has pins with up to 8 possible peripheral functions. So, in order for a cape to work, the SoC needs to be configured to use the correct inputs/outputs for that cape. The situation becomes more complicated when you have multiple capes (up to 4 at a time). BBB capes solve this by using an EEPROM that stores enough data to identify the cape, its revision and serial number. At boot time, the kernel identifies the capes by reading their EEPROMs, computes the optimal configuration (or outputs an error if the connected capes are not compatible) and sets the expansion header pinmux accordingly. Initially, this was implemented in a 'board file' in the Linux kernel, and adding a new cape required modifying the kernel and making sure all possible cape configurations were supported. Needless to say, this is not an easy task, and getting it merged into Linux mainline is even harder. 
Since everyone is building some sort of ARM device nowadays, the number of board files and variations thereof reached critical mass, and Linux kernel maintainers decided to decouple board-specific behaviour from the kernel. The mechanism for doing this is called Device Tree (DT), and its goal is to make life easier for both device developers (no need to hack the kernel for each device) and kernel maintainers (no need to merge board-specific patches every other day). A DT is a data structure for describing hardware which is passed to the kernel at boot time. Using the DT, a generic board driver can configure itself dynamically. The BBB ships with a <a href="http://linuxgizmos.com/introducing-the-new-beaglebone-black-kernel/">3.8 kernel </a>and takes full advantage of the new DT architecture. Cape support is naturally implemented using DT source (DTS) files and even goes a step further than mainline Linux by introducing a <a href="http://elinux.org/Capemgr">Cape Manager</a>,&nbsp;an in-kernel&nbsp;mechanism for dynamically loading <a href="http://elinux.org/Device_Trees">Device Tree</a> fragments from userspace. This allows for runtime (vs. boot time) loading of capes via <code>sysfs</code>, resource conflict resolution (where possible), manual control over already loaded capes and more.</div><div><br /></div><div>Going back to Android, the <i>rowboat</i> Android port is using the 3.2 kernel and relies on manual porting of extension peripheral configuration to the kernel board file. As it happens, support for our LCD4 cape is not there yet. We could try to patch the kernel based on the 3.8 DTS files, or take the plunge and attempt to run Android using 3.8. 
Since all BBB active development is going on in the <a href="https://github.com/beagleboard/kernel/commits/3.8">3.8 branch</a>, using the newer version is the better (if more involved) choice.<br /><h3>Using the 3.8 kernel</h3></div></div></div>As we know, Android adds a bunch of 'Androidisms' to the Linux kernel, most notably wakelocks, alarm timers, ashmem, binder, low memory killer and 'paranoid' network security. Thus you could not use a vanilla Linux kernel as is to run Android until recently, and a number of Android-specific patches needed to be applied first. Fortunately, thanks to the <a href="http://elinux.org/Android_Mainlining_Project">Android Mainlining Project</a>, most of these features are already merged (in one form or another) in the 3.8 kernel and are available as staging drivers. What this means is that we can take a 3.8 kernel that works well on the BBB and use it to run Android. Unfortunately, the BBB can't quite use a vanilla 3.8 kernel yet and requires quite a few patches (including Cape Manager). However, building a 3.8 kernel with all BBB patches applied is not too hard to do, thanks to <a href="http://www.eewiki.net/display/linuxonarm/BeagleBone+Black#BeagleBoneBlack-LinuxKernel">instructions</a> and build scripts by <a href="https://github.com/RobertCNelson/">Robert Nelson</a>. Even better, <a href="http://icculus.org/~hendersa/">Andrew Henderson</a> has successfully used it with Android and has detailed the procedure. Following Andrew's build instructions, we can create an Android build that has a good chance of supporting our touch screen. 
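For capes whose Device Tree fragments are not picked up automatically from their EEPROMs, the Cape Manager described earlier can also be driven explicitly. A sketch of what this looks like on a 3.8 BBB kernel -- the part number and exact slot path are assumptions and vary between kernel builds:

```
# uEnv.txt -- force-load a cape's DT fragment at boot
optargs=capemgr.enable_partno=BB-BONE-LCD4-01
# the same can be done at runtime through sysfs, e.g.:
#   echo BB-BONE-LCD4-01 > /sys/devices/bone_capemgr.*/slots
```

Reading the `slots` file back shows which fragments are loaded and whether any conflicts were detected.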
As Andrew's article mentions, hardware acceleration (support for the BBB's&nbsp;PowerVR SGX 530 GPU) is not yet available for the 3.8 kernel, so we need to disable it in our build. One thing that is missing from Andrew's instructions is that you also need to disable building and installing of the SGX drivers; otherwise, Android will try to use them at boot and fail to start SurfaceFlinger due to driver-kernel module incompatibility. You can do this by commenting out the dependency on <code>sgx</code> in rowboat's top-level <code>Makefile</code> like this:<br /><br /><pre>@@ -11,7 +13,7 @@<br /> CLEAN_RULE = sgx_clean wl12xx_compat_clean kernel_clean clean<br /> else<br /> ifeq ($(TARGET_PRODUCT), beagleboneblack)<br />-rowboat: sgx<br />+#rowboat: sgx<br /> CLEAN_RULE = sgx_clean kernel_clean clean<br /> else<br /> ifeq ($(TARGET_PRODUCT), beaglebone)<br /><br /></pre><br />Note that the kernel alone is not enough though: the boot loader (<a href="http://www.denx.de/wiki/U-Boot">Das U-Boot)</a>&nbsp;needs to be able to load the (flattened) device tree blob, so we need to build a recent version of that as well. Android seems to run OK with this configuration, but there are still a few things that are missing. The first you might notice is ADB support.<br /><h3>ADB support</h3><div>ADB (Android Debug Bridge) is one of the best things to come out of the Android project, and if you have been doing Android development in any form for a while, you probably take it for granted. It is a fairly complex piece of software though, providing support for debugging, file transfer, port forwarding and more, and it requires kernel support in addition to the Android daemon and client application. 
In kernel terms this is known as the 'Android USB Gadget Driver', and it is not quite available in the 3.8 kernel, even though there have been multiple attempts at merging it. We can merge the required bits from Google's 3.8 kernel tree, but since we are trying to stay as close as possible to the original BBB 3.8 kernel, we'll use a different approach. While attempts to get ADB in the mainline continue, <a href="http://cateee.net/lkddb/web-lkddb/USB_FUNCTIONFS.html">Function Filesystem</a> (FunctionFS) driver support has been added to Android's ADB, and we can use that instead of the 'native' Android gadget. To use ADB with FunctionFS:</div><div><ol><li>Configure FunctionFS support in the kernel (<code>CONFIG_USB_FUNCTIONFS=y</code>):</li><ul><pre>Device Drivers -&gt; USB Support -&gt; <br /> USB Gadget Support -&gt; USB Gadget Driver -&gt; Function Filesystem<br /></pre></ul><li>Modify the boot parameters in <code>uEnv.txt</code> to set the vendor and product IDs, as well as the device serial number</li><ul><pre>g_ffs.idVendor=0x18d1 g_ffs.idProduct=0x4e26 g_ffs.iSerialNumber=&lt;serial&gt;</pre></ul><li>Set up the FunctionFS directory and mount it in your <code>init.am335xevm.usb.rc</code> file:</li><ul><pre>on fs<br /> mkdir /dev/usb-ffs 0770 shell shell<br /> mkdir /dev/usb-ffs/adb 0770 shell shell<br /> mount functionfs adb /dev/usb-ffs/adb uid=2000,gid=2000<br /></pre></ul><li>Delete all lines referencing <code>/sys/class/android_usb/android0/*</code>. (Those nodes are created by the native Android gadget driver and are not available when using FunctionFS.)<br /></li></ol>Once this is done, you can reboot and you should see your device using <code>adb devices</code> soon after the kernel has loaded. Now you can debug the OS using Eclipse and push and install files directly using ADB. 
That said, this won't help you at all if the device doesn't boot due to some kernel misconfiguration, so you should definitely get an <a href="http://www.ftdichip.com/Products/Cables/USBTTLSerial.htm">FTDI cable</a> (the BBB does not have an on-board FTDI chip) to be able to see kernel messages during boot and get an 'emergency' shell when necessary.<br /><h3>cgroups patch</h3></div><div>If you are running <code>adb logcat</code> in a console and experimenting with the device, you will notice a lot of 'Failed setting process group' warnings like this one: <br /><br /><pre>W/ActivityManager( 349): Failed setting process group of 4911 to 0<br />W/SchedPolicy( 349): add_tid_to_cgroup failed to write '4911' (Permission denied);<br /></pre><br />Android's <code>ActivityManager</code> uses Linux control groups (<a href="https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt">cgroups)</a> to run processes with different priorities (background, foreground, audio, system) by adding them to scheduling groups. In the mainline kernel this is only allowed for processes running as <code>root</code> (<code>EUID=0</code>), but Android changes this behaviour (naturally, with a patch) to only require the <code>CAP_SYS_NICE</code> capability, which allows the <code>ActivityManager</code> (running as <code>system</code> in the <code>system_server</code> process) to add app processes to scheduling groups. To get rid of this warning, you can disable scheduling groups by commenting out the code that sets up <code>/dev/cpuctl/tasks</code> in <code>init.rc</code>, or you can merge the modified functionality from Google's experimental 3.8 branch (which we've been trying to avoid all along...). <br /><h2>Android hardware support</h2><h3>Touchscreen</h3>We now have a functional Android development device running mostly without warnings, so it's time to look closer at requirement #1. 
As we mentioned, once we disable hardware acceleration, the LCD4 works fine with our 3.8 kernel, but a few things are still missing. The LCD4 comes with 5 directional GPIO buttons which are somewhat useful because scrolling on a resistive touchscreen takes some getting used to, but that is not the only thing they can be used for. We can remap them as Android system buttons (Back, Home, etc.) by providing a <a href="http://source.android.com/devices/tech/input/key-layout-files.html">key layout</a> (.kl) file like this one: <br /><br /><pre>key 105 BACK WAKE<br />key 106 HOME WAKE<br />key 103 MENU WAKE<br />key 108 SEARCH WAKE<br />key 28 POWER WAKE<br /></pre><br />The GPIO keypad on the LCD identifies itself as 'gpio.12' (you can check this using the <code>getevent</code> command), so we need to name the layout file 'gpio_keys_12.kl'. To achieve this, we modify <code>device.mk</code> in the BBB device directory (<code>device/ti/beagleboneblack</code>): <br /><br /><pre>...<br /># KeyPads<br />PRODUCT_COPY_FILES += \<br /> $(LOCAL_PATH)/gpio-keys.kl:system/usr/keylayout/gpio_keys_12.kl \<br />...<br /></pre><br />Now that we are using hardware buttons, we might want to squeeze some more screen real estate from the LCD4 by not showing the system navigation bar. This is done by setting&nbsp;<code>config_showNavigationBar</code> to <code>false</code> in the <code>config.xml</code> framework overlay file for our board:<br /><br /><pre>&lt;bool name="config_showNavigationBar"&gt;false&lt;/bool&gt;<br /></pre><br />While playing with the screen, we notice that it's a bit dark. Increasing the brightness via the display settings, however, does not seem to work. A friendly error message in logcat tells us that Android can't open the <code>/sys/class/backlight/pwm-backlight/brightness</code> file. Screen brightness and LEDs are controlled by the <code>lights</code> module on Android, so that's where we look first. 
There is a hardware-specific one under the beagleboneblack device directory, but it only supports the LCD3 and LCD7 displays. Adding support for the LCD4 is simply a matter of finding the file that controls brightness under /sys. For the LCD4 it's called <code>/sys/class/backlight/backlight.10/brightness</code> and works exactly like the other LCDs -- you get or set the brightness by reading or writing the backlight intensity level (0-100) as a string. We modify <code>lights.c</code> (full source on <a href="https://github.com/nelenkov/android_device_ti_beagleboneblack/blob/master/liblights/lights.c">Github)</a> to first try the LCD4 device and voila -- setting the brightness via the Android UI now works... not. It turns out the <code>brightness</code> file is owned by <code>root</code> and the Settings app doesn't have permission to write to it. We can change this permission in the board's&nbsp;<code>init.am335xevm.rc</code> file:<br /><br /><pre># PWM-Backlight for display brightness on LCD4 Cape<br />chmod 0666 /sys/class/backlight/backlight.10/brightness<br /></pre><br />This finally settles it, so we can cross requirement #1 off our list and try to tackle #2 -- wireless support.<br /><h3>WiFi adapter</h3></div><div>The BBB has an onboard Ethernet port and it is supported out of the box by the <i>rowboat</i> build. If we want to make our new Android device mobile though, we need to add either a WiFi adapter or a 3G modem. 3G support is possible, but somewhat more involved, so we will try to enable WiFi first. There are a <a href="http://beagleboardtoys.info/index.php?title=BeagleBone_TiWi-BLE_w/_Chip_Antenna">number </a>of <a href="http://beagleboardtoys.info/index.php?title=BeagleBone_TiWi-5E_w/_Chip_Antenna">capes</a> that provide WiFi and Bluetooth for the original BeagleBone, but they are not compatible with the BBB, so we will try using a regular WiFi dongle instead. 
As long as it has a Linux driver, it should be quite easy to wire it to Android by following the TI <a href="http://processors.wiki.ti.com/index.php/TI-Android-JB-PortingGuide#WLAN">porting guide</a>, right?<br /><br /></div><div></div><div>We'll use a WiFi dongle from <a href="http://lm-technologies.com/">LM Technologies</a> based on the Realtek&nbsp;<a href="http://www.realtek.com.tw/products/productsView.aspx?Langid=1&amp;PFid=48&amp;Level=5&amp;Conn=4&amp;ProdID=274">RTL8188CUS</a>&nbsp;chipset, which is supported by the Linux <code>rtl8192cu</code> driver. In addition to the kernel driver, this wireless adapter requires a binary firmware blob, so we need to make sure it's loaded along with the kernel modules. But before getting knee-deep into makefiles, let's briefly review the Android WiFi architecture. Like most hardware support in Android, it consists of a kernel layer (WiFi adapter driver modules), a native daemon (<code>wpa_supplicant</code>), a HAL (<code>wifi.c</code> in <code>libhardware_legacy</code>, which communicates with <code>wpa_supplicant</code> via its control socket), a framework service and its public interface (<code>WifiService</code> and <code>WifiManager</code>) and application/UI ('WiFi' screen in the Settings app, as well as <code>SystemUI</code>, responsible for showing the WiFi status bar indicator). That may sound fairly straightforward, but the <code>WifiService</code> implements some pretty complex state transitions in order to manage the underlying native WiFi support. Why is all the complexity needed? 
Android doesn't load kernel modules automatically, so the&nbsp;<code>WifiStateMachine</code> will try to load kernel modules, find and load any necessary firmware, start the <code>wpa_supplicant</code> daemon, scan for and connect to an AP, obtain an IP address via DHCP, check for and handle captive portals, and finally, if you are lucky, set up the connection and send out a broadcast to notify the rest of the system of the new network configuration. The <code>wpa_supplicant</code> daemon alone can go through 13 <a href="http://developer.android.com/reference/android/net/wifi/SupplicantState.html">different states</a>, so things can get quite involved when those are combined.<br /><br />Going step-by-step through the porting guide, we first enable support for our WiFi adapter in the kernel. That results in 6 modules that need to be loaded in order, plus the firmware blob. The HAL (<code>wifi.c</code>) can only load a single module though, so we pre-load all modules in the board's <code>init.am335xevm.rc</code> and set the <code>wlan.driver.status</code> property to <code>ok</code> in order to prevent <code>WifiService</code> from trying (and failing) to load the kernel module. We then define the <code>wpa_supplicant</code> and <code>dhcpcd</code> services in the init file. Last, but not least, we need to set the <code>wifi.interface</code> property to <code>wlan0</code>; otherwise, Android will silently try to use a test device and fail to start the <code>wpa_supplicant</code>. Both properties are set as <code>PRODUCT_PROPERTY_OVERRIDES</code> in <code>device/ti/beagleboneblack/device.mk</code> (see device directory on <a href="https://github.com/nelenkov/android_device_ti_beagleboneblack">Github)</a>. 
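The two property overrides just mentioned might look like this in <code>device.mk</code> (a sketch matching the description above; the exact formatting in the actual device directory may differ):

```makefile
# device/ti/beagleboneblack/device.mk
PRODUCT_PROPERTY_OVERRIDES += \
    wifi.interface=wlan0 \
    wlan.driver.status=ok
```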
Here's what the relevant part of <code>init.am335xevm.rc</code> looks like: <br /><br /><pre>on post-fs-data<br /> # wifi<br /> mkdir /data/misc/wifi/sockets 0770 wifi wifi<br /> insmod /system/lib/modules/rfkill.ko<br /> insmod /system/lib/modules/cfg80211.ko<br /> insmod /system/lib/modules/mac80211.ko<br /> insmod /system/lib/modules/rtlwifi.ko<br /> insmod /system/lib/modules/rtl8192c-common.ko<br /> insmod /system/lib/modules/rtl8192cu.ko<br /><br />service wpa_supplicant /system/bin/wpa_supplicant \<br /> -iwlan0 -Dnl80211 -c/data/misc/wifi/wpa_supplicant.conf \<br /> -e/data/misc/wifi/entropy.bin<br /> class main<br /> socket wpa_wlan0 dgram 660 wifi wifi<br /> disabled<br /> oneshot<br /><br />service dhcpcd_wlan0 /system/bin/dhcpcd -ABKL<br /> class main<br /> disabled<br /> oneshot<br /><br />service iprenew_wlan0 /system/bin/dhcpcd -n<br /> class main<br /> disabled<br /> oneshot<br /><br /></pre><br />In order to build the <code>wpa_supplicant</code> daemon, we then set&nbsp;<code>BOARD_WPA_SUPPLICANT_DRIVER</code> and <code>WPA_SUPPLICANT_VERSION</code> in <code>device/ti/beagleboneblack/BoardConfig.mk</code>. Note that we are using the generic <code>wpa_supplicant</code>, not the TI-patched one, and the <code>WEXT</code> driver instead of the <code>NL80211</code> one (which requires a proprietary library to be linked in). Since we are preloading driver kernel modules, we don't need to define <code>WIFI_DRIVER_MODULE_PATH</code> and <code>WIFI_DRIVER_MODULE_NAME</code>. <br /><br /><pre>BOARD_WPA_SUPPLICANT_DRIVER := WEXT<br />WPA_SUPPLICANT_VERSION := VER_0_8_X<br />BOARD_WLAN_DEVICE := wlan0<br /></pre><br />To make the framework aware of our new WiFi device, we change <code>networkAttributes</code> and <code>radioAttributes</code> in the <code>config.xml</code> overlay file. 
Getting this wrong will lead to Android's <code>ConnectivityManager</code> totally ignoring WiFi even if you manage to connect, and will result in the not too helpful 'No network connection' message. "1" here corresponds to the <code>ConnectivityManager.TYPE_WIFI</code> connection type (the built-in Ethernet connection is "9", <code>TYPE_ETHERNET</code>).<br /><br /><pre>&lt;string-array name="networkAttributes" translatable="false"&gt;<br />...<br /> &lt;item&gt;"wifi,1,1,1,-1,true"&lt;/item&gt;<br />...<br />&lt;/string-array&gt;<br />&lt;string-array name="radioAttributes" translatable="false"&gt;<br /> &lt;item&gt;"1,1"&lt;/item&gt;<br />...<br />&lt;/string-array&gt;<br /></pre><br />Finally, to make Android aware of our newly found WiFi features, we copy <code>android.hardware.wifi.xml</code> to <code>/etc/permissions/</code> by adding it to <code>device.mk</code>. This will take care of enabling the Wi-Fi screen in the <code>Settings</code> app:</div><br /><pre>PRODUCT_COPY_FILES := \<br />...<br /> frameworks/native/data/etc/android.hardware.wifi.xml:system/etc/permissions/android.hardware.wifi.xml \<br />...<br /></pre><br />After we've rebuilt <i>rowboat</i> and updated the root file system, you should be able to turn on WiFi and connect to an AP. Make sure you are using an AC power supply to power the BBB, because the WiFi adapter can draw quite a bit of current and you may not get enough via the USB cable. If the board is not getting enough power, you might experience failure to scan, dropped connections and other weird symptoms even if your configuration is otherwise correct. 
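The property side of this setup is easy to sanity-check from a root shell. Below is a minimal sketch of such a check; on the device the actual values would come from <code>getprop</code> (as shown in the comment), while here the helper is fed the expected values directly so the comparison logic is self-contained:

```shell
#!/bin/sh
# Sketch: sanity-check the WiFi-related system properties set in device.mk.
# On the BBB the actual values come from `getprop`; here they are passed
# in explicitly so the helper is portable.

check_prop() {
  # $1 = property name, $2 = actual value, $3 = expected value
  if [ "$2" = "$3" ]; then
    echo "OK   $1=$2"
  else
    echo "FAIL $1='$2' (expected '$3')"
  fi
}

# On the device: check_prop wifi.interface "$(getprop wifi.interface)" wlan0
check_prop wifi.interface     "wlan0" "wlan0"
check_prop wlan.driver.status "ok"    "ok"
```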
If WiFi support doesn't work for some reason, check the following: <br /><ul><li>that the kernel module(s) and firmware (if any) are loaded (<code>dmesg</code>, <code>lsmod</code>)</li><li><code>logcat</code> output for relevant-looking error messages</li><li>that the <code>wpa_supplicant</code> service is defined properly in <code>init.*.rc</code> and the daemon is started</li><li>that <code>/data/misc/wifi</code> and <code>wpa_supplicant.conf</code> are available and have the right owner and permissions (<code>wifi:wifi</code> and 0660)</li><li>that the <code>wifi.interface</code> and <code>wlan.driver.status</code> properties are set correctly</li><li>if all else fails, use your debugger</li></ul>That was easy, right? We now have a working wireless connection; it's time to think about requirement #3: powering the device.<br /><h3>Battery power</h3><div>The BBB can be powered in three ways: via the miniUSB port, via the 5V AC adapter jack, and by using the power rail (<code>VDD_5V</code>) on the board directly. We can use any USB battery pack that provides enough current (~1A) and has enough capacity to keep the device going by simply connecting it to the miniUSB port. Those can be rather bulky and you will need an extra cable, so let's look for other options. As can be expected, there is a cape for that. The aptly named <a href="http://circuitco.com/support/index.php?title=BeagleBone_Battery">Battery Cape</a>&nbsp;plugs into the BBB's expansion connectors and provides power directly to the power rail. We can plug the LCD4 on top of it and get an integrated (if a bit bulky) battery-powered touchscreen device. The Battery Cape holds 4 AA batteries connected as two sets in parallel. It is not simply a glorified battery holder though -- it has a boost converter that can provide stable 1A current at 5V even if battery voltage fluctuates (1.8-5.5V). 
It does provide support for monitoring battery voltage via the AIN4 input, but does not have a 'fuel gauge' chip, so without additional circuitry our mobile device cannot display the battery level (yet) and unfortunately won't be able to shut itself down when battery levels become critically low. That is something that definitely needs work, but for now we make the device always believe it's at 100% power by setting the <code>hw.nobattery</code> property to <code>true</code>. The alternative is to have it display the 'low battery' red warning icon all the time, so this approach is somewhat preferable. Four 1900 mAh batteries installed in the battery cape should provide enough power to run the device for a few hours even when using WiFi, so we can (tentatively) mark requirement #3 as fulfilled.<br /><h2>Flashing the device</h2></div><div>If you have been following&nbsp;Andrew Henderson's build <a href="http://icculus.org/~hendersa/android/">guide </a>linked above, you have been 'installing' Android on an SD card and booting the BBB from it. This works fine and makes it easy to fix things when Android won't load by simply mounting the SD card on your PC and editing or copying the necessary files. However, most consumer-grade SD cards don't offer the best performance and can be quite unreliable. As we mentioned at the beginning of the post, the BBB comes with 2GB of built-in eMMC, which is enough to install Android and have some space left for a data partition. On most Android devices flashing can be performed by either booting into the recovery system or by using the <a href="http://en.wikipedia.org/wiki/Fastboot"><code>fastboot</code></a> tool over USB. The <i>rowboat</i> build does not have a recovery image, and while <code>fastboot</code> is supported by TI's fork of U-Boot, the version we are using to load the DT blob does not support <code>fastboot</code> yet. 
That leaves booting another OS in lieu of a recovery and flashing the eMMC from there, either manually or by using an automated&nbsp;<a href="http://www.crashcourse.ca/wiki/index.php/BBB_software_update_process">flasher image</a>. The flasher image simply runs a script at startup, so let's see how it works by doing it manually first. The latest BBB Angstrom&nbsp;<a href="http://downloads.angstrom-distribution.org/demo/beaglebone/">bootable image</a>&nbsp;(<b>not</b> the flasher one)&nbsp;is a good choice for our 'recovery' OS, because it is known to work on the BBB and has all the needed tools (<code>fdisk</code>, <code>mkfs.ext4</code>, etc.). After you <code>dd</code> it to an SD card, mount the card on your PC and copy the Android boot files and <code>rootfs</code>&nbsp;archive to an <code>android/</code> directory. You can then boot from the SD card, get a root shell on Angstrom and install Android to the eMMC from there.<br /><br />Android devices typically have a <code>boot</code>, <code>system</code> and <code>userdata</code> partition, as well as a <code>recovery</code> one and optionally others. The boot partition contains the kernel and a ramdisk, which gets mounted at the root of the device filesystem. <code>system</code> contains the actual OS files and gets mounted read-only at <code>/system</code>, while <code>userdata</code> is mounted read-write at <code>/data</code> and stores system and app data, as well as user-installed apps. The partition layout used by the BBB is slightly different. The board's bootloader will look for the first stage bootloader (SPL, named MLO in U-Boot) on the first FAT partition of the eMMC. It in turn will load the second stage bootloader (<code>u-boot.img</code>), which will then search for an OS image according to its configuration. On embedded devices U-Boot configuration is typically stored as a set of variables in NAND, replaced by the <code>uEnv.txt</code> file on devices without NAND such as the BBB. 
Thus we need a FAT boot partition to host the SPL, <code>u-boot.img</code>, <code>uEnv.txt</code>, the kernel image and the DT blob. <code>system</code> and <code>userdata</code> will be formatted as EXT4 and will work as in typical Android devices.<br /><br />The default Angstrom installation creates only two partitions -- a DOS one for booting, and a Linux one that hosts Angstrom Linux. To prepare the eMMC for Android, you need to delete the Linux partition and create two new Linux partitions in its place -- one to hold Android system files and one for user data. If you don't plan to install too many apps, you can simply make them equal in size. When booting from the SD card, the eMMC device will be <code>/dev/block/mmcblk1</code>, with the first partition being <code>/dev/block/mmcblk1p1</code>, the second <code>/dev/block/mmcblk1p2</code> and so on. After creating those 3 partitions with <code>fdisk</code>, we format them with their respective filesystems:<br /><br /><pre># mkfs.vfat -F 32 -n boot /dev/block/mmcblk1p1<br /># mkfs.ext4 -L rootfs /dev/block/mmcblk1p2 <br /># mkfs.ext4 -L usrdata /dev/block/mmcblk1p3 <br /></pre><br />Next, we mount <code>boot</code> and copy boot-related files, then mount <code>rootfs</code> and untar the <code>rootfs.tar.bz2</code> archive. <code>usrdata</code> can be left empty; it will be populated on first boot.<br /><br /><pre># mkdir -p /mnt/1/<br /># mkdir -p /mnt/2/<br /># mount -t vfat /dev/block/mmcblk1p1 /mnt/1<br /># mount -t ext4 /dev/block/mmcblk1p2 /mnt/2<br /># cp MLO u-boot.img zImage uEnv.txt am335x-boneblack.dtb /mnt/1/<br /># tar jxvf rootfs.tar.bz2 -C /mnt/2/<br /># umount /mnt/1<br /># umount /mnt/2<br /></pre><br />With this, Android is installed on the eMMC and you can shut down the 'recovery' OS, remove the SD card and boot from the eMMC. 
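The manual procedure above is exactly the kind of thing the automated flasher images script at startup. Here is a sketch of such a script, assuming the same device nodes and file names as above; since running <code>mkfs</code> against the wrong device is destructive, the sketch only prints each command unless <code>RUN=1</code> is set in the environment:

```shell
#!/bin/sh
# Sketch of a BBB eMMC flasher script, mirroring the manual steps above.
# Dry-runs by default (prints "would run: ..."); set RUN=1 to execute.
EMMC=/dev/block/mmcblk1   # eMMC device node when booted from the SD card

run() {
  if [ "$RUN" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

# format boot (FAT), system and userdata (EXT4)
run mkfs.vfat -F 32 -n boot "${EMMC}p1"
run mkfs.ext4 -L rootfs "${EMMC}p2"
run mkfs.ext4 -L usrdata "${EMMC}p3"

# copy boot files and unpack the root filesystem
run mkdir -p /mnt/1 /mnt/2
run mount -t vfat "${EMMC}p1" /mnt/1
run mount -t ext4 "${EMMC}p2" /mnt/2
run cp MLO u-boot.img zImage uEnv.txt am335x-boneblack.dtb /mnt/1/
run tar jxvf rootfs.tar.bz2 -C /mnt/2/
run umount /mnt/1
run umount /mnt/2
```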
Note that the U-Boot used has been patched to probe whether the SD card is available and will automatically boot from it (without you needing to hold the BBB's user boot button), so if you don't remove the 'recovery' SD card, it will boot from it again.<br /><br />We now have a working touchscreen Android device with wireless connectivity. Here's how it looks in action:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-fjDjx-WkU8w/Ue6sFrk-prI/AAAAAAAAPCA/aQ8xn24iklg/s1600/bbb-finished-scaled.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="285" src="http://3.bp.blogspot.com/-fjDjx-WkU8w/Ue6sFrk-prI/AAAAAAAAPCA/aQ8xn24iklg/s400/bbb-finished-scaled.png" width="400" /></a></div><br />Our device is unlikely to win any design awards or replace your Nexus 7, but it could be used as the basis of dedicated Android devices, such as a wireless POS terminal or a SIP phone, and extended even further by adding more capes or custom hardware as needed.</div><h2>Summary</h2><div>The BBB is fully capable of running Android, and with off-the-shelf peripherals you can easily turn it into a 'tablet' (of sorts) by adding a touch screen and wireless connectivity. While the required software is mostly available in the <i>rowboat</i> project, if you want to have the best hardware support you need to use the BBB's native 3.8 kernel and configure Android to use it. Making hardware fully available to the Android OS is mostly a matter of configuring the relevant HAL bits properly, but that is not always straightforward, even with board-vendor-provided <a href="http://processors.wiki.ti.com/index.php/TI-Android-JB-PortingGuide">documentation</a>. The reason for this is that&nbsp;Android subsystems are not particularly cohesive -- you need to modify multiple, sometimes seemingly unrelated, files at different locations to get a single subsystem working. 
This is, of course, not specific to Android and is the price to pay for building a system by integrating originally unrelated OSS projects. On the positive side, most components can be replaced and the required changes can usually be confined to the (sometimes loosely defined) Hardware Abstraction Layer (HAL).&nbsp;</div>Nikolay Elenkov, 2013-05-03<br /><h2>Code signing in Android's security model</h2>In the <a href="http://nelenkov.blogspot.jp/2013/04/android-code-signing.html">previous post</a>&nbsp;we introduced code signing as implemented in Android and saw that it is practically identical to JAR signing. Android requires all installed&nbsp;packages&nbsp;to be signed and makes heavy use of the attached code signing certificates in its security model. This is where the major differences with other platforms that use code signing lie, so we will explore the topic in more detail.<br /><h3>Java access control</h3><div>Before we start digging into Android's security model, let's go through a quick overview of the corresponding features of the Java platform. Java was initially designed to support running potentially untrusted code downloaded from a public network (mostly applets). The initial applet sandbox model was extended to a more flexible, policy-based scheme where different permissions can be granted based on the code's origin and author. Code origin refers to the place where classes are loaded from, typically a local file or a remote URL, while authorship is asserted via code signatures and is represented by the signer's certificate chain. Combined, those two properties define a&nbsp;<a href="http://docs.oracle.com/javase/7/docs/api/java/security/CodeSource.html">code source</a>. 
Each code source is granted a set of <a href="http://docs.oracle.com/javase/7/docs/api/java/security/Permissions.html">permissions</a> based on a <a href="http://docs.oracle.com/javase/7/docs/api/java/security/Policy.html">policy</a>, the default implementation being to read rules from a policy file (created with the <a href="http://docs.oracle.com/javase/7/docs/technotes/guides/security/PolicyGuide.html"><code>policytool</code></a>). At runtime, a <a href="http://docs.oracle.com/javase/7/docs/api/java/lang/SecurityManager.html">security manager</a> (if installed) enforces access control by comparing code elements on the stack with the current policy. It throws a <a href="http://docs.oracle.com/javase/7/docs/api/java/lang/SecurityException.html">SecurityException</a> if the permissions required to access a resource have not been granted to the requesting code source. Java code that runs in (or is started from) the browser, such as applets or <a href="http://www.oracle.com/technetwork/java/javase/overview-137531.html">Java Web Start </a>applications, is automatically run with a security manager installed, while for local applications you need to explicitly set the&nbsp;<code>java.security.manager</code>&nbsp;system property in order to install one. In practice, a security manager for local code is only used with some application servers, and it is usually disabled by default. A wide range of <a href="http://docs.oracle.com/javase/7/docs/technotes/guides/security/permissions.html">permissions</a>&nbsp;are supported by the platform, the major ones being <a href="http://docs.oracle.com/javase/7/docs/api/java/io/FilePermission.html">file</a> and <a href="http://docs.oracle.com/javase/7/docs/api/java/net/SocketPermission.html">socket</a>-oriented, as well as different types of <a href="http://docs.oracle.com/javase/7/docs/api/java/lang/RuntimePermission.html">runtime permissions</a>, which control operations ranging from class and library loading to managing the current security manager. 
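For illustration, a policy file entry tying a code source (signer plus origin) to a set of permissions looks roughly like this; the signer alias and paths below are made up:

```
// Hypothetical java.policy fragment: code loaded from /opt/apps and
// signed by the key aliased "acme" may read /tmp and connect to one host.
grant signedBy "acme", codeBase "file:/opt/apps/-" {
    permission java.io.FilePermission "/tmp/-", "read";
    permission java.net.SocketPermission "example.com:443", "connect";
};
```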
By defining multiple code sources and assigning each one specific permissions, one can implement fine-grained access control for both local and remote code.<br /><br />As we mentioned though, unless you are in the browser plugin or application server development business, chances are you hadn't heard about any of this until the beginning of this year. Just when everyone thought that Java applets were for all intents and purposes dead, they made somewhat of a comeback as a malware distribution medium. A <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-0422">series</a> of <a href="http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2013-2423">vulnerabilities</a> was discovered in the Oracle Java implementation that allow applets to escape the sandbox they run in and reset the security manager, effectively granting themselves full privileges. The exploits used to achieve this employ techniques ranging from reflection recursion to direct memory&nbsp;manipulation to bypass runtime security checks. Oracle has responded by releasing a series of patches, changing the default applet execution policy and introducing more visible warnings to let users know that potentially harmful code is being executed. Naturally, <a href="http://immunityproducts.blogspot.com.ar/2013/02/keep-calm-and-run-this-applet.html">different</a> <a href="http://immunityproducts.blogspot.jp/2013/04/yet-another-java-security-warning-bypass.html">ways</a> to bypass these countermeasures are being discovered, and exploit kits are quick to <a href="http://malware.dontneedcoffee.com/2013/04/cve-2013-2423-integrating-exploit-kits.html">catch up</a>.<br /><br />In short, Java has had full-featured code access control for some time, even though the most widely used implementation appears to be lacking in enforcing it. But let's (finally!) get back to Android now. 
As the Java code access control mechanism can use code signer identity to define code sources and grant permissions, and Android code is required to be signed, one might expect that our favourite mobile OS would be making use of Java's security model in some form, just as it does with JAR files. As it turns out, this is not the case. Access control <a href="http://developer.android.com/reference/java/lang/SecurityManager.html">related</a> <a href="http://developer.android.com/reference/java/security/Permission.html">classes</a> are part of the Java API, and are indeed available in Android. However, looking at the&nbsp;implementation reveals that they are practically&nbsp;empty, with just enough code to compile. In addition, they feature a prominent <i>'Legacy security code; do not use.'</i> notice. So why bother reviewing all of the above then? Even though Android's access control model is very different from the legacy Java one, it does borrow some of the same ideas, and a comparison is helpful when discussing the design&nbsp;decisions&nbsp;made.<br /><h3>Android security architecture basics</h3></div><div>Before we discuss the role of code signing in Android's security model, let's say a few words about Android's general security architecture. As we know, Android is Linux-based and relies heavily on traditional UNIX features to implement its security&nbsp;architecture. Each application runs in a separate process with a distinct identity (user ID, UID). By default, apps cannot modify each other's resources; this is enforced by Linux, which doesn't allow different processes to access memory or files they don't own (unless access is explicitly granted by the owner, a.k.a. <a href="http://en.wikipedia.org/wiki/Discretionary_access_control">discretionary access control</a>). Additionally, each app (UID) is granted a set of logical permissions at install time, and cannot perform operations (call APIs) that require permissions it doesn't have. 
This is the biggest difference compared to the 'standard' Java permission model: code from different sources running in a single process cannot have different permissions, since permissions are granted at the UID level. Most permissions cannot be dynamically granted after the package has been installed; however, as of 4.2 a number of 'development' permissions (e.g., <a href="http://developer.android.com/reference/android/Manifest.permission.html#READ_LOGS"><code>READ_LOGS</code></a>, <a href="http://developer.android.com/reference/android/Manifest.permission.html#WRITE_SECURE_SETTINGS"><code>WRITE_SECURE_SETTINGS</code></a>) have been introduced that can be granted or revoked on demand using the <code>pm grant/revoke</code> command (or matching system APIs). Before installing an app, the system shows a confirmation dialog listing the permissions it requests. With the exception of the new 'development' permissions, all requested permissions are permanently granted if the user allows the install. For a certain messaging app it looks like this in Jelly Bean:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-CvMQe-ONelk/UYKCM3Y4TrI/AAAAAAAANTM/-Lbfu2sms2Q/s1600/permission-screen.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-CvMQe-ONelk/UYKCM3Y4TrI/AAAAAAAANTM/-Lbfu2sms2Q/s400/permission-screen.png" height="640" width="384" /></a></div><br /><br />Android permissions are typically implemented by mapping them to Linux groups that have the&nbsp;necessary read/write access to relevant system resources (files or sockets) and thus are ultimately enforced by the Linux kernel. Some permissions are enforced by system daemons or services by explicitly checking if the calling UID is whitelisted to perform a particular operation. 
The network access permission (<a href="http://developer.android.com/reference/android/Manifest.permission.html#INTERNET"><code>INTERNET</code></a>) is somewhat of a hybrid: it is mapped to a group (<code>inet</code>), but since network access is not associated with one particular socket, the kernel checks whether processes trying to open a socket are members of the <code>inet</code> group on each related system call (known as 'paranoid network security').<br /><br />Each permission has an associated '<a href="http://developer.android.com/guide/topics/manifest/permission-element.html#plevel">protection level</a>' that indicates how the system proceeds when deciding whether to grant or deny the permission. The two levels most relevant to our discussion are <code>signature</code> and <code>signatureOrSystem</code>. The former is granted only to apps signed with the same certificate as the package declaring the permission, while the latter is granted to apps that are in the Android system image, even if the signer is different.<br /><br />Besides the built-in permissions, custom permissions can also be defined by declaring them in the app manifest file. Those can be enforced statically by the system or dynamically by app components. Permissions attached to components (activities, services, broadcast receivers or content providers) defined in <code>AndroidManifest.xml</code> are automatically enforced by the system. Components can also make use of framework APIs to check whether the calling UID has been granted a required permission on a case-by-case basis (e.g., only for write operations, etc.). We will introduce other permission-related details as necessary later, but you can refer to this <a href="http://marakana.com/">Marakana</a> <a href="https://marakana.com/s/post/1393/slides.htm">presentation</a> for a more complete and thorough discussion of Android permissions (and more). 
Of course, <a href="http://developer.android.com/guide/topics/security/permissions.html">some</a> <a href="http://source.android.com/tech/security/#android-application-security">official</a> <a href="https://developer.android.com/training/articles/security-tips.html#Permissions">documentation</a> is also available.<br /><h3>The role of code signing</h3></div><div>As we saw in the <a href="http://nelenkov.blogspot.com/2013/04/android-code-signing.html">previous article</a>, Android code signing is based on Java JAR signing. Consequently, it uses public key cryptography and X.509 certificates, as do a lot of other code signing schemes. However, this is where the similarities end. In practically all other platforms that use code signing (for example <a href="https://en.wikipedia.org/wiki/Java_Platform,_Micro_Edition">Java ME</a>), the code signing certificate needs to be issued by a CA that the platform trusts. While there is no lack of CAs that issue code signing certificates, in reality it is quite difficult to obtain a certificate that will be trusted by all targeted devices. Android solves this problem quite simply: it doesn't care about the actual signing certificate. Thus you do not need to have it issued by a CA (although you could, and most CAs will happily take your money), and virtually all code signing certificates used in Android are self-signed. Additionally, you don't need to assert your identity in any way: you can use pretty much anything as the subject name (the Google Play store does have a few checks to weed out some common names, but not the OS itself). Signing certificates are treated as binary blobs by Android, and the fact that they are in X.509 format is merely a consequence of using the JAR format. Android doesn't validate certificates as such: if the certificate is not self-signed, the signing CA's certificate does not have to be present, let alone trusted; it will also happily install apps with an expired signing certificate. 
If you are coming from a traditional PKI background, this may sound like heresy, but try to keep an open mind and note that Android does <i>not</i> make use of PKI for code signing.<br /><br />So what are code signing certificates used for then? Two things: making sure updates for an app are coming from the same author (same origin policy), and establishing trust relationships between applications. Both are implemented by comparing the signing certificate of the currently installed target app with the certificate of the update or related application. Comparison <a href="https://github.com/android/platform_frameworks_base/blob/jb-mr1-release/core/java/android/content/pm/Signature.java#L162">boils down</a> to calling&nbsp;<code><a href="http://developer.android.com/reference/java/util/Arrays.html#equals(byte[], byte[])">Arrays.equals()</a></code> on the binary (<a href="http://en.wikipedia.org/wiki/Distinguished_Encoding_Rules#DER_encoding">DER</a>) representation of both certificates. This method naturally knows nothing about CAs or expiration dates. One consequence of this is that once an app (identified by a unique package name) is installed, updates need to use the <i>exact same</i> signing certificates (with one exception, see next section). While multiple signatures on Android apps are not common, if the original application was signed by more than one signer, any updates need to be signed by the same signers, each using its original signing certificate. This means that if your signing certificate(s) expires, you cannot update your app and need to release a new one instead. This would result in not only losing any existing user base or ratings, but more importantly losing access to the legacy app's data and settings (again, there are some exceptions). The solution to this problem is quite simple: don't let your certificate expire. 
The currently recommended validity period is at least 25 years, and the Google Play Store requires validity until at least October 2033 (Y2K33?). While technically this only amounts to putting off the problem, proper certificate migration support might eventually be added to the platform. Unfortunately, this means that if your signing key is lost or compromised, you are currently out of luck.<br /><br />Let's examine the major uses of code signing in Android in detail.<br /><h3>Application authenticity and identity</h3></div><div>In Android all apps are managed by the system <code>PackageManagerService</code>, no matter whether they are pre-installed, downloaded from an app market, or sideloaded. It keeps a database of currently installed apps, including their signing certificate(s), granted permissions and additional metadata in the <code>/data/system/packages.xml</code> file. A typical entry for a user-installed app might look like this:<br /><br /><pre>&lt;package codepath="/data/app/com.chrome.beta-2.apk" <br /> flags="572996" ft="13e20480558" <br /> installer="com.android.vending" <br /> it="13ca981cbe3" name="com.chrome.beta" <br /> nativelibrarypath="/data/app-lib/com.chrome.beta-2" <br /> userid="10092" ut="13e204816ce" version="1453060"&gt;<br />&lt;sigs count="1"&gt;<br />&lt;cert index="8"&gt;<br />&lt;/cert&gt;<br />&lt;/sigs&gt;<br />&lt;perms&gt;<br />&lt;item name="android.permission.NFC"/&gt;<br />...<br />&lt;item name="com.android.browser.permission.READ_HISTORY_BOOKMARKS"/&gt;<br />&lt;/perms&gt;<br />&lt;/package&gt;<br /></pre><br />As you can see above, a package entry specifies the package name, the location of the APK and associated libraries, assigned UID and some additional install metadata such as install and update time. This is followed by the number of signatures and the signing certificate as a hexadecimal string. Since a hex-encoded certificate will usually take up around 2K, the actual certificate contents are listed only once. 
All subsequent packages signed with the same certificate only refer to it by index, as is the case above. The <code>PackageManagerService</code>&nbsp;uses the <code>&lt;cert/&gt;</code> values in <code>packages.xml</code> to decide whether an update is signed with the same certificate as the original app. The certificate is followed by the list of permissions the package has been granted. All of this information is cached in memory (keyed by package name) at runtime for performance reasons.<br /><br />Just like user-installed apps, pre-installed apps (usually found in <code>/system/app</code>) can be updated without a full-blown system update, usually via the Play Store or a similar app distribution service. As the <code>/system</code> partition is mounted read-only though, updates are installed in <code>/data</code>, while the original app remains as is. In addition to a&nbsp;<code>&lt;package/&gt;</code> entry, such an app will also have an <code>&lt;updated-package&gt;</code> entry that might look like this:<br /><br /><pre>&lt;updated-package name="com.google.android.youtube" <br /> codePath="/system/app/YouTube.apk" <br /> ft="13cd6667b50" it="13ae93df638" ut="13cd6667b50" <br /> version="4216" <br /> nativeLibraryPath="/data/app-lib/com.google.android.youtube-1" <br /> userId="10067"&gt;<br />&lt;perms&gt;<br />&lt;item name="android.permission.NFC" /&gt;<br />...<br />&lt;/perms&gt;<br />&lt;/updated-package&gt;<br /></pre><br />The update (in <code>/data/app</code>) inherits the original app's permissions and UID. System apps receive another special treatment as well: if an updated APK is installed over the original one (in <code>/system/app</code>), it is allowed to be signed with a different certificate. The rationale behind this is that if the installer has enough privileges to write to <code>/system</code>, it can be trusted to change the signing certificate as well. The UID, and any files and permissions, are retained. 
Again, there is an exception though: if the package is part of a shared user (discussed in the next section), the signature cannot be updated, because that would affect other apps as well. In the reverse case, when a new system app is installed that is signed with a different certificate than the currently installed non-system app with the same package name, the non-system app will be deleted first.<br /><br />Speaking of system apps, most of those are signed by a number of so-called 'platform keys'. There are four different keys in the current AOSP tree, named <code>platform</code>, <code>shared</code>, <code>media</code> and <code>testkey</code> (<code>releasekey</code> for release builds). All packages considered part of the core platform (System UI, Settings, Phone,&nbsp;Bluetooth&nbsp;etc.) are signed with the <code>platform</code> key; launcher- and contacts-related packages with the <code>shared</code> key; the gallery app and media-related providers with the <code>media</code> key; and everything else (including packages that don't explicitly specify a signing key) with the <code>testkey</code>. One thing to note is that the keys distributed with AOSP are in no way special, even though they have 'Google' in the certificate DN. Using them to sign your apps will not give you any specific privileges; you will need the actual keys Google or your carrier/device&nbsp;manufacturer uses. Even though the associated certificates may happen to have the same DN as the ones in AOSP, they are different and very unlikely to be publicly accessible. Custom ROMs are often an exception though, and some, including <a href="http://www.cyanogenmod.org/">CyanogenMod</a>, use the AOSP keys, or publicly available keys, as is (there are plans to change this for CyanogenMod though). 
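In an AOSP build, the key a package is signed with is selected per module via <code>LOCAL_CERTIFICATE</code> in its <code>Android.mk</code>; a fragment (the module name here is made up) might look like this:

```make
# Android.mk excerpt for a hypothetical system package
LOCAL_PACKAGE_NAME := ExampleSystemApp
# Sign with the platform key; if LOCAL_CERTIFICATE is omitted,
# the build falls back to the default key (testkey/releasekey).
LOCAL_CERTIFICATE := platform
```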
Sharing the signing key allows packages to work together and establish trust relationships, which we will discuss next.<br /><h3>Inter-application trust relationships</h3><h4>Signature permissions</h4></div><div>As we mentioned above, Android permissions (system or custom) can be declared with the <code>signature</code> protection level. With this level, the permission is only granted if the requesting app is signed by the same signer as the package declaring the permission. This can be thought of as a limited form of mandatory access control (<a href="http://en.wikipedia.org/wiki/Mandatory_access_control">MAC</a>). Custom (app-declared) permissions are declared in the package's <code>AndroidManifest.xml</code> file, and are added to the system when the package is installed. Just like other package data, permissions are saved in the <code>/data/system/packages.xml</code> file, as children of the <code>&lt;permissions/&gt;</code> element. Here's what the declaration of a custom permission used by some Google apps looks like: <br /><br /><pre>&lt;permissions&gt;<br />..<br />&lt;item name="com.google.android.googleapps.permission.ACCESS_GOOGLE_PASSWORD" <br /> package="com.google.android.gsf.login" <br /> protection="2" /&gt;<br />...<br />&lt;/permissions&gt;<br /></pre><br />The entry has the permission name, declaring package and protection level (2 corresponds to <code>signature</code>) as attributes. When installing a package that requests this permission, the <code>PackageManagerService</code> will perform a binary comparison (just as when upgrading packages) of its signing certificate against the certificate of the Google Login Service (the declaring package,&nbsp;<code>com.google.android.gsf.login</code>) in order to decide whether to grant the permission. A noteworthy detail is that the system cannot grant a permission it doesn't know about. 
That is, if app A declares permission 'foo' and app B uses it, app B needs to be installed after app A, otherwise you will get a warning at install time and the permission won't be granted. Since app installation order typically cannot be guaranteed, the usual workaround for this situation is to declare the permission in both apps. Permissions can also be added and removed dynamically using the <a href="http://developer.android.com/reference/android/content/pm/PackageManager.html#addPermission(android.content.pm.PermissionInfo)"><code>PackageManager.addPermission()</code></a> API (known as 'dynamic permissions'). However, packages can only add permissions to a <a href="http://developer.android.com/reference/android/R.styleable.html#AndroidManifestPermissionTree">permission tree</a> they define (i.e., you cannot add permissions to another app).<br /><br />That mostly explains custom permissions, but what about built-in, system permissions with the <code>signature</code> protection level? They work exactly like custom permissions, except that the package that defines them is special. They are defined in the <code>android</code> package, sometimes also referred to as 'the framework' or 'the platform'. The core Android framework is the set of classes shared by system services, some of them exposed via the public SDK. Those are packaged in JAR files found in <code>/system/framework</code>. Interestingly, those JAR files are not signed: while Android borrows the JAR format to implement code signing, only APK files are signed, not actual JARs. The only APK file in the framework directory is <code>framework-res.apk</code>. As the name implies, it packages framework resources (animations, drawables, layouts, etc.), but no actual code. Most importantly, it defines the <code>android</code> package and system permissions. Thus any app trying to request a system-level signature permission needs to be signed with the same certificate as the framework resource package. 
Not surprisingly, it is signed by the <code>platform</code> key discussed in the previous section (usually found in <code>build/target/product/security/platform.pk8|.x509.pem</code>). The associated certificate may look something like this for an AOSP build:<br /><br /><pre>Version: 3 (0x2)<br />Serial Number: 12941516320735154170 (0xb3998086d056cffa)<br />Signature Algorithm: md5WithRSAEncryption<br />Issuer: C=US, ST=California, L=Mountain View, O=Android, OU=Android, <br />CN=Android/emailAddress=android@android.com<br />Validity<br /> Not Before: Apr 15 22:40:50 2008 GMT<br /> Not After : Sep 1 22:40:50 2035 GMT<br />Subject: C=US, ST=California, L=Mountain View, O=Android, OU=Android, <br />CN=Android/emailAddress=android@android.com<br /></pre><h4>Shared user ID</h4></div><div>Android provides an even stronger inter-app trust relationship than using signature permissions: &nbsp;the ability for different apps to run as the same UID, and optionally in the same process. It is usually referred to as '<a href="http://developer.android.com/guide/topics/manifest/manifest-element.html#uid">shared user ID</a>'. This feature is extensively used by core framework services and system applications, and while the Android team does not recommend that third-party applications use it, it is available to user applications as well. It is enabled by adding the <code>android:sharedUserId</code> attribute to <code>AndroidManifest.xml</code>'s root element. The 'user ID' specified in the manifest needs to be in Java package format (containing at least one '.') and is used as an identifier, much like package names for applications. If the specified shared UID does not exist it is simply created, but if another package with the same shared UID is already installed, the signing certificate is compared to that of the existing package, and if they do not match, an <code>INSTALL_FAILED_SHARED_USER_INCOMPATIBLE</code> error is returned and installation fails. 
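For illustration, a manifest using a shared user ID might look like the minimal sketch below. The package name and shared UID value here are hypothetical, and every cooperating app would need to be signed with the same key:

```xml
<!-- Hypothetical example: every app that wants to share this UID
     declares the same android:sharedUserId value in its manifest
     root element and is signed with the same certificate. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app1"
    android:sharedUserId="com.example.shareduid">
    <application android:label="Shared UID demo">
    </application>
</manifest>
```

A second app (say, <code>com.example.app2</code>) declaring the same <code>android:sharedUserId</code> and signed with the same certificate would then run as the same Linux UID and could directly access the first app's data files.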
Adding the <code>sharedUserId</code> to the new version of an already installed app will cause it to change its UID, which would result in losing access to its own files (that was <a href="https://code.google.com/p/android/issues/detail?id=3763">the case</a> in some previous Android versions). Therefore, this is disallowed by the system, and it will reject the update with the <code>INSTALL_FAILED_UID_CHANGED</code> error. In short, if you plan to use a shared UID for your apps, you have to design for it from the start, and have them use it from the very first release.<br /><br />A shared UID is a first class object in the system's <code>packages.xml</code> and is treated much like apps are: it has associated signing certificate(s) and permissions. Android has 5 built-in shared UIDs, automatically added when the system is bootstrapped:<br /><ul><li><code>android.uid.system</code> (<a href="http://developer.android.com/reference/android/os/Process.html#SYSTEM_UID"><code>SYSTEM_UID</code></a>, 1000)</li><li><code>android.uid.phone</code> (<a href="http://developer.android.com/reference/android/os/Process.html#PHONE_UID"><code>PHONE_UID</code></a>, 1001)</li><li><code>android.uid.bluetooth</code> (<code>BLUETOOTH_UID</code>, 1002)</li><li><code>android.uid.log</code> (<code>LOG_UID</code>, 1007) </li><li><code>android.uid.nfc</code> (<code>NFC_UID</code>, 1027)</li></ul><br />Here's how the <code>system</code> shared UID is defined:<br /><br /><pre>&lt;shared-user name="android.uid.system" userId="1000"&gt;<br />&lt;sigs count="1"&gt;<br />&lt;cert index="4" /&gt;<br />&lt;/sigs&gt;<br />&lt;perms&gt;<br />&lt;item name="android.permission.MASTER_CLEAR" /&gt;<br />&lt;item name="android.permission.CLEAR_APP_USER_DATA" /&gt;<br />&lt;item name="android.permission.MODIFY_NETWORK_ACCOUNTING" /&gt;<br />...<br />&lt;/perms&gt;<br />&lt;/shared-user&gt;<br /></pre><br />As you can see, apart from having a bunch of scary permissions (about 60 on a 4.2 device), the declaration is very similar to 
the <code>package</code> declarations we showed previously. Conversely, packages that are part of a shared UID do not have an associated list of granted permissions. They inherit the permissions of the shared UID, which are a union of the permissions requested by all currently installed packages with the same shared UID. A side effect of this is that if a package is part of a shared UID, it can access APIs it hasn't explicitly requested permissions for, as long as some package with the same shared UID has already requested them. Permissions are dynamically added to and removed from the <code>&lt;shared-user/&gt;</code> declaration as packages are installed or uninstalled though, so the set of available permissions is neither guaranteed nor constant. Here's what the declaration of a system app (KeyChain) that runs under a shared UID looks like. It references the shared UID with the <code>sharedUserId</code> attribute and lacks explicit permission declarations:<br /><br /><pre>&lt;package name="com.android.keychain" <br /> codePath="/system/app/KeyChain.apk" <br /> nativeLibraryPath="/data/app-lib/KeyChain" <br /> flags="540229" ft="13cd65721a0" <br /> it="13c2d4721f0" ut="13cd65721a0" <br /> version="17" <br /> sharedUserId="1000"&gt;<br />&lt;sigs count="1"&gt;<br />&lt;cert index="4" /&gt;<br />&lt;/sigs&gt;<br />&lt;/package&gt;<br /></pre><br />The shared UID is not just a package management construct, it actually maps to a shared Linux UID at runtime as well. Here is an example of two system apps running under the <code>system</code> UID:<br /><br /><pre>system 5901 9852 845708 40972 ffffffff 00000000 S com.android.settings<br />system 6201 9852 824756 22256 ffffffff 00000000 S com.android.keychain<br /></pre><br />The ultimate trust level on Android is, of course, running in the same process. Since apps that are part of the same shared UID already have the same Linux UID and can access the same system resources, this is not a problem. 
It can be requested by specifying the same process name in the <a href="http://developer.android.com/guide/topics/manifest/application-element.html#proc"><code>process</code></a> attribute of the <code>&lt;application/&gt;</code> element in the manifest for all apps that need to run in one process. While the obvious result of this is that the apps can share memory and communicate directly instead of using RPC, some system services allow special access to components running in the same process (for example, direct access to cached passwords or getting authentication tokens without showing UI prompts). Google apps take advantage of this by requesting to run in the same process as the login service in order to be able to sync data in the background, without user interaction (e.g., Play Services and the Google location service). Naturally, they are signed with the same certificate and are part of the <code>com.google.uid.shared</code> shared UID.<br /><h3>Summary</h3></div><div>Android uses the Java JAR format for code signing, and signatures can be added to both application packages (APKs) and system update packages (OTA updates). While JAR signing is based on X.509 certificates and PKI, Android does not use or validate the signer certificates as such. They are treated as binary blobs and an exact byte match is required in order for the system to consider two packages signed by the same signer. Package signature matching is at the heart of the Android security model, used both to guarantee that package updates come from the same origin and when establishing inter-application trust relationships. 
Inter-app trust relationships can be created either using signature-level permissions (built-in or custom), or by allowing apps to share the same Linux UID and, optionally, process.&nbsp;</div>Nikolay Elenkovhttps://plus.google.com/117221066931981967754noreply@blogger.com13tag:blogger.com,1999:blog-2873091912851440312.post-2528445854285777672013-04-24T01:51:00.000+09:002013-04-24T01:51:11.119+09:00Android code signingWe covered a <a href="http://nelenkov.blogspot.jp/2013/02/secure-usb-debugging-in-android-422.html">new security feature</a> introduced in the last Jelly Bean maintenance release in our last post and, before you know it, a&nbsp;<a href="https://android.googlesource.com/platform/build/+/android-4.2.2_r1.2">new tag</a>&nbsp;has already&nbsp;popped up in AOSP. <a href="https://developers.google.com/events/io/">Google I/O</a> is just around the corner, and some interesting bits and pieces are trickling into the AOSP <a href="https://android.googlesource.com/platform/build/+log/master">master branch</a>, so it's probably time for a new post. There are plenty of places where you can get your rumour fix regarding I/O 2013 and it looks like build&nbsp;JDQ39E is going to be <a href="http://www.androidpolice.com/2013/04/17/google-pushes-new-android-4-2-2-code-to-aosp-jdq39e-4-2-2_r1-2-here-is-the-developer-changelog/">somewhat boring</a>, so we will explore something different instead: code signing. This particular aspect of Android has remained virtually unchanged since the first public release, and is so central to the platform that it is pretty much taken for granted. While neither Java code signing nor its Android implementation is particularly new, some of the finer details are not particularly well-known, so we'll try to shed some more light on those. 
The first post of the series will concentrate on the signature formats used, while the next one will look into how code signing fits into Android's security model.<br /><h3>Java code signing</h3><div>As we all know, Android applications are coded (mostly) in Java, and Android application package files (APKs) are just weird-looking JARs, so it pays to understand how JAR signing works first.&nbsp;</div><div><br /></div><div>First off, a few words about code signing in general. Why would anyone want to sign code? For the usual reasons: integrity and authenticity. Basically, before executing any third-party program you want to make sure that it hasn't been tampered with (integrity) and that it was actually created by the entity that it claims to come from (authenticity). Those features are usually implemented by some digital signature scheme, which guarantees that only the entity owning the signing key can produce a valid code signature. The signature verification process verifies both that the code has not been tampered with and that the signature was produced with the expected key. One problem that code signing doesn't solve directly is whether the code signer (software publisher) can be trusted. The usual way trust is handled is by requiring the code signer to hold a digital certificate, which they attach to the signed code. Verifiers decide whether to trust the certificate either based on some trust model (e.g., PKI or web of trust), or on a case-by-case basis. Another problem that code signing does not solve (or even attempt to) is whether the signed code is safe to run. 
As we have seen, code that has been signed (or appears to be) by a trusted third party is not necessarily safe (e.g., <a href="http://blogs.technet.com/b/srd/archive/2012/06/06/more-information-about-the-digital-certificates-used-to-sign-the-flame-malware.aspx">Flame</a>&nbsp;or <a href="http://blogs.adobe.com/asset/2012/09/inappropriate-use-of-adobe-code-signing-certificate.html">pwdump7</a>).<br /><br />Java's native code packaging format is the <a href="http://docs.oracle.com/javase/7/docs/technotes/guides/jar/jar.html">JAR</a> file, which is essentially a ZIP file bundling together code (<code>.class</code> files or <code>classes.dex</code> in Android), some metadata about the package (<code>.MF</code> manifest files in the META-INF/ directory) and, optionally, resources the code uses. The main manifest file (<code>MANIFEST.MF</code>) has entries with the file name and digest value of each file in the archive. The start of the manifest file of a typical APK file is shown below (we'll use APKs instead of actual JARs for all examples). 
<br /><br /><pre>Manifest-Version: 1.0<br />Created-By: 1.0 (Android)<br /><br />Name: res/drawable-xhdpi/ic_launcher.png<br />SHA1-Digest: K/0Rd/lt0qSlgDD/9DY7aCNlBvU=<br /><br />Name: res/menu/main.xml<br />SHA1-Digest: kG8WDil9ur0f+F2AxgcSSKDhjn0=<br /><br />Name: ...<br /></pre><br />Java code signing is implemented at the JAR file level by adding another manifest file, called a signature file (<code>.SF</code>) which contains the data to be signed, and a digital signature over it (called a 'signature block file',&nbsp;<code>.RSA,</code>&nbsp;<code>.DSA</code>&nbsp;or&nbsp;<code>.EC</code>).&nbsp;The signature file is very similar to the manifest, and contains the digest of the whole manifest file (<code>SHA1-Digest-Manifest</code>), as well as digests for each of the individual entries in <code>MANIFEST.MF</code>.<br /><br /><pre>Signature-Version: 1.0<br />SHA1-Digest-Manifest-Main-Attributes: ZKXxNW/3Rg7JA1r0+RlbJIP6IMA=<br />Created-By: 1.6.0_45 (Sun Microsystems Inc.)<br />SHA1-Digest-Manifest: zb0XjEhVBxE0z2ZC+B4OW25WBxo=<br /><br />Name: res/drawable-xhdpi/ic_launcher.png<br />SHA1-Digest: jTeE2Y5L3uBdQ2g40PB2n72L3dE=<br /><br />Name: res/menu/main.xml<br />SHA1-Digest: kSQDLtTE07cLhTH/cY54UjbbNBo=<br /><br />Name: ...<br /></pre><br />The digests in the signature file can easily be verified by using the following OpenSSL commands:<br /><br /><pre>$ openssl sha1 -binary MANIFEST.MF |openssl base64<br />zb0XjEhVBxE0z2ZC+B4OW25WBxo=<br />$ echo -en "Name: res/drawable-xhdpi/ic_launcher.png\r\nSHA1-Digest: \<br />K/0Rd/lt0qSlgDD/9DY7aCNlBvU=\r\n\r\n"|openssl sha1 -binary |openssl base64<br />jTeE2Y5L3uBdQ2g40PB2n72L3dE=<br /></pre><br />The first one takes the SHA1 digest of the entire manifest file and encodes it to Base 64 to produce the <code>SHA1-Digest-Manifest</code> value, and the second one simulates how the digest of a single manifest entry is being calculated. 
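The same check can be scripted. Here's a short Python equivalent of the second OpenSSL command above; <code>sf_entry_digest</code> is just an illustrative helper (not part of any tool), and the file name and digest values are taken from the example manifest:

```python
import base64
import hashlib

def sf_entry_digest(name, manifest_digest):
    # Reconstruct the exact bytes of a single MANIFEST.MF entry:
    # attribute lines are CRLF-terminated and the entry ends with a
    # blank line. The per-entry .SF digest is the SHA1 of those bytes.
    entry = "Name: %s\r\nSHA1-Digest: %s\r\n\r\n" % (name, manifest_digest)
    sha1 = hashlib.sha1(entry.encode("utf-8")).digest()
    return base64.b64encode(sha1).decode("ascii")

print(sf_entry_digest("res/drawable-xhdpi/ic_launcher.png",
                      "K/0Rd/lt0qSlgDD/9DY7aCNlBvU="))
# should print jTeE2Y5L3uBdQ2g40PB2n72L3dE= (the .SF value above)
```

Running this against each entry of an extracted <code>MANIFEST.MF</code> and comparing the results to <code>CERT.SF</code> reproduces part of what <code>jarsigner</code> does during verification.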
The actual digital signature is in binary <a href="http://www.rsa.com/rsalabs/node.asp?id=2129">PKCS#7</a> (or more generally, <a href="http://tools.ietf.org/html/rfc5652">CMS</a>) format and includes the signature value and signing certificate. Signature block files produced using the RSA algorithm are saved with the extension <code>.RSA</code>, those generated with DSA or EC keys with the <code>.DSA</code> or <code>.EC</code> extensions, respectively. Multiple signatures can be performed, resulting in multiple <code>.SF</code> and <code>.RSA/DSA/EC</code> files in the JAR file's <code>META-INF/</code> directory. The CMS format is rather involved, allowing not only for signing, but for encryption as well, both with different algorithms and parameters, and is extensible via custom signed or unsigned attributes. A thorough discussion is beyond the scope of this post, but as used for JAR signing it basically contains the digest algorithm, signing certificate and signature value. Optionally the signed data can be included in the <code>SignedData</code> CMS structure (attached signature), but JAR signatures don't include it (detached signature). 
Here's what an RSA signature block file looks like when parsed into ASN.1 (certificate info trimmed):<br /><br /><pre>$ openssl asn1parse -i -inform DER -in CERT.RSA<br /> 0:d=0 hl=4 l= 888 cons: SEQUENCE <br /> 4:d=1 hl=2 l= 9 prim: OBJECT :pkcs7-signedData<br /> 15:d=1 hl=4 l= 873 cons: cont [ 0 ] <br /> 19:d=2 hl=4 l= 869 cons: SEQUENCE <br /> 23:d=3 hl=2 l= 1 prim: INTEGER :01<br /> 26:d=3 hl=2 l= 11 cons: SET <br /> 28:d=4 hl=2 l= 9 cons: SEQUENCE <br /> 30:d=5 hl=2 l= 5 prim: OBJECT :sha1<br /> 37:d=5 hl=2 l= 0 prim: NULL <br /> 39:d=3 hl=2 l= 11 cons: SEQUENCE <br /> 41:d=4 hl=2 l= 9 prim: OBJECT :pkcs7-data<br /> 52:d=3 hl=4 l= 607 cons: cont [ 0 ] <br /> 56:d=4 hl=4 l= 603 cons: SEQUENCE <br /> 60:d=5 hl=4 l= 452 cons: SEQUENCE <br /> 64:d=6 hl=2 l= 3 cons: cont [ 0 ] <br /> 66:d=7 hl=2 l= 1 prim: INTEGER :02<br /> 69:d=6 hl=2 l= 1 prim: INTEGER :04<br /> 72:d=6 hl=2 l= 13 cons: SEQUENCE <br /> 74:d=7 hl=2 l= 9 prim: OBJECT :sha1WithRSAEncryption<br /> 85:d=7 hl=2 l= 0 prim: NULL <br /> 87:d=6 hl=2 l= 56 cons: SEQUENCE <br /> 89:d=7 hl=2 l= 11 cons: SET <br /> 91:d=8 hl=2 l= 9 cons: SEQUENCE <br /> 93:d=9 hl=2 l= 3 prim: OBJECT :countryName<br /> 98:d=9 hl=2 l= 2 prim: PRINTABLESTRING :JP<br />...<br /> 735:d=5 hl=2 l= 9 cons: SEQUENCE <br /> 737:d=6 hl=2 l= 5 prim: OBJECT :sha1<br /> 744:d=6 hl=2 l= 0 prim: NULL <br /> 746:d=5 hl=2 l= 13 cons: SEQUENCE <br /> 748:d=6 hl=2 l= 9 prim: OBJECT :rsaEncryption<br /> 759:d=6 hl=2 l= 0 prim: NULL <br /> 761:d=5 hl=3 l= 128 prim: OCTET STRING [HEX DUMP]:892744D30DCEDF74933007...</pre><br />If we extract the contents of a JAR file, we can use the OpenSSL <code>smime</code>&nbsp;(CMS is the basis of <a href="http://en.wikipedia.org/wiki/S/MIME">S/MIME</a>) command to verify its signature by specifying the signature file as the content (signed data). 
It will print the signed data and the verification result:<br /><br /><pre>$ openssl smime -verify -in CERT.RSA -inform DER -content CERT.SF signing-cert.pem<br />Signature-Version: 1.0<br />SHA1-Digest-Manifest-Main-Attributes: ZKXxNW/3Rg7JA1r0+RlbJIP6IMA=<br />Created-By: 1.6.0_43 (Sun Microsystems Inc.)<br />SHA1-Digest-Manifest: zb0XjEhVBxE0z2ZC+B4OW25WBxo=<br /><br />Name: res/drawable-xhdpi/ic_launcher.png<br />SHA1-Digest: jTeE2Y5L3uBdQ2g40PB2n72L3dE=<br /><br />...<br />Verification successful<br /></pre><br />The official tools for JAR signing and verification are the <a href="http://docs.oracle.com/javase/7/docs/technotes/tools/windows/jarsigner.html"><code>jarsigner</code></a> and <a href="http://docs.oracle.com/javase/7/docs/technotes/tools/windows/keytool.html"><code>keytool</code></a> commands from the JDK. Since Java 5.0 <code>jarsigner</code>&nbsp;also&nbsp;<a href="http://docs.oracle.com/javase/1.5.0/docs/guide/security/time-of-signing.html">supports timestamping</a> the signature by a <a href="http://tools.ietf.org/html/rfc3161#section-2">TSA</a>, which could be quite useful when you need to ascertain the time of signing (e.g., before or after the signing certificate expired), but this feature is not widely used. Using the <code>jarsigner</code> command, a JAR file is signed by specifying a keystore file, the alias of the key to use for signing (used as the base name for the signature block file) and, optionally, a signature algorithm. One thing to note is that since Java 7, the default algorithm has changed to <code>SHA256withRSA</code>, so you need to explicitly specify it if you want to use SHA1. Verification is performed in a similar fashion, but the keystore file is used to search for trusted certificates, if specified. 
Here's how signing and verification look (again using an APK file instead of an actual JAR):<br /><br /><pre>$ jarsigner -keystore debug.keystore -sigalg SHA1withRSA test.apk androiddebugkey<br />$ jarsigner -keystore debug.keystore -verify -verbose -certs test.apk<br />....<br /><br />smk 965 Mon Apr 08 23:55:34 JST 2013 res/drawable-xxhdpi/ic_launcher.png<br /><br /> X.509, CN=Android Debug, O=Android, C=US (androiddebugkey)<br /> [certificate is valid from 6/18/11 7:31 PM to 6/10/41 7:31 PM]<br /><br />smk 458072 Tue Apr 09 01:16:18 JST 2013 classes.dex<br /><br /> X.509, CN=Android Debug, O=Android, C=US (androiddebugkey)<br /> [certificate is valid from 6/18/11 7:31 PM to 6/10/41 7:31 PM]<br /><br /> 903 Tue Apr 09 01:16:18 JST 2013 META-INF/MANIFEST.MF<br /> 956 Tue Apr 09 01:16:18 JST 2013 META-INF/CERT.SF<br /> 776 Tue Apr 09 01:16:18 JST 2013 META-INF/CERT.RSA<br /><br /> s = signature was verified<br /> m = entry is listed in manifest<br /> k = at least one certificate was found in keystore<br /> i = at least one certificate was found in identity scope<br /><br />jar verified.<br /></pre><br />The last command verifies the signature block and signing certificate, ensuring that the signature file has not been tampered with. It then verifies that each digest in the signature file (<code>CERT.SF</code>) matches its corresponding section in the manifest file (<code>MANIFEST.MF</code>). One thing to note is that the number of entries in the signature file does not necessarily have to match those in the manifest file. Files can be added to a signed JAR without invalidating its signature: as long as none of the original files have been changed, verification succeeds. Finally,&nbsp;<code>jarsigner</code> reads each manifest entry and checks that the file digest matches the actual file contents. Optionally, it checks whether the signing certificate is present in the specified key store (if any). As of Java 7 there is a new <code>-strict</code> option that will perform additional certificate validations. 
Validation errors are treated as warnings and reflected in the exit code of the <code>jarsigner</code> command. As you can see, it prints certificate details for each entry, even though they are the same for all entries. A slightly better way to view signer info when using Java 7 is to specify the <code>-verbose:summary</code> or <code>-verbose:grouped</code> options, or alternatively use the <code>keytool</code> command: <br /><br /><pre>$ keytool -list -printcert -jarfile test.apk<br />Signer #1:<br /><br />Signature:<br /><br />Owner: CN=Android Debug, O=Android, C=US<br />Issuer: CN=Android Debug, O=Android, C=US<br />Serial number: 4dfc7e9a<br />Valid from: Sat Jun 18 19:31:54 JST 2011 until: Mon Jun 10 19:31:54 JST 2041<br />Certificate fingerprints:<br /> MD5: E8:93:6E:43:99:61:C8:37:E1:30:36:14:CF:71:C2:32<br /> SHA1: 08:53:74:41:50:26:07:E7:8F:A5:5F:56:4B:11:62:52:06:54:83:BE<br /> Signature algorithm name: SHA1withRSA<br /> Version: 3<br /></pre><br />Once you know the signature block file name (by listing the archive contents, for example), you can also use OpenSSL in combination with the <code>unzip</code> command to easily extract the signing certificate to a file:<br /><br /><pre>$ unzip -q -c test.apk META-INF/CERT.RSA|openssl pkcs7 -inform DER -print_certs -out cert.pem<br /></pre></div><br /><h3>Android code signing</h3>As evident from the examples above, Android code signing is based on Java JAR signing and you can use the regular JDK tools to sign or verify APKs. Besides those, there is an Android-specific tool in the AOSP <code>build/</code> directory, aptly named&nbsp;<code>signapk</code>. It performs pretty much the same task as&nbsp;<code>jarsigner</code> in signing mode, but there are also a few notable differences. 
To start with, while <code>jarsigner</code> requires keys to be stored in a compatible key store file, <code>signapk</code> takes separate signing key (in <a href="http://www.rsa.com/rsalabs/node.asp?id=2130">PKCS#8</a> format) and certificate (in DER format) files as input. While it does appear to have some support for reading DSA keys, it can only produce signatures with the <code>SHA1withRSA</code> mechanism. Raw private keys in PKCS#8 format are somewhat hard to come by, but you can easily generate a test key pair and a self-signed certificate using the <code>make_key</code> script found in <code>development/tools</code>. If you have existing OpenSSL keys you cannot use them as is, however; you will need to convert them using OpenSSL's <code>pkcs8</code> command:<br /><br /><pre>echo "keypwd"|openssl pkcs8 -in mykey.pem -topk8 -outform DER -out mykey.pk8 -passout stdin<br /></pre><br />Once you have the needed keys, you can sign an APK like this:<br /><br /><pre>$ java -jar signapk.jar cert.cer key.pk8 test.apk test-signed.apk<br /></pre><br />Nothing new so far, except the somewhat exotic (but easily parsable by JCE classes) key format. However, the <code>signapk</code> tool has an extra 'sign whole file' mode, enabled with the <code>-w</code> option. When in this mode, in addition to signing each individual JAR entry, the tool generates a signature over the whole archive as well. This mode is not supported by <code>jarsigner</code>&nbsp;and is specific to Android. So why sign the whole archive when each of the individual files is already signed? In order to support over-the-air updates (OTA), naturally :). If you have ever flashed a custom ROM, or been impatient and updated your device manually before it picked up the official update broadcast, you know that OTA packages are ZIP files containing the updated files and scripts to apply them. It turns out, however, that they are a lot more like JAR files on the inside. 
They come with a <code>META-INF/</code> directory, manifests and a signature block, plus a few other extras. One of those is the <code>/META-INF/com/android/otacert</code> file, which contains the update signing certificate (in PEM format). Before booting into recovery to actually apply the update, Android will verify the package signature, then check that the signing certificate is one that is trusted to sign updates. OTA trusted certificates are completely separate from the 'regular' system <a href="http://nelenkov.blogspot.jp/2011/12/ics-trust-store-implementation.html">trust store</a>, and reside in, you guessed it, a ZIP file, usually stored as <code>/system/etc/security/otacerts.zip</code>. On a production device it will typically contain a single file, likely named <code>releasekey.x509.pem</code>.<br /><br />Going back to the original question, if OTA files are JAR files, and JAR files don't support whole-file signatures, where does the signature go? The Android&nbsp;<code>signapk</code> tool slightly abuses the ZIP format by adding a null-terminated string comment in the ZIP comment section, followed by the binary signature block and a 6-byte final record, containing the signature offset and the size of the entire comment section. This makes it easy to verify the package by first reading and verifying the signature block from the end of the file, and only reading the rest of the file (which for a major upgrade might be in the hundreds of MBs) if the signature checks out. 
If you want to manually verify the package signature with OpenSSL, you can separate the signed data and the signature block with a script like the one below, where the second argument is the signature block file, and the third one is the signed ZIP file (without the comment section) to write:<br /><br /><pre>#!/usr/bin/env python<br /><br />import os<br />import sys<br />import struct<br /><br />file_name = sys.argv[1]<br />file_size = os.stat(file_name).st_size<br /><br />f = open(file_name, 'rb')<br /># the final 6-byte record holds the signature offset (measured<br /># from the end of the file) and the comment section size<br />f.seek(file_size - 6)<br />footer = f.read(6)<br /><br />sig_offset = struct.unpack('&lt;H', footer[0:2])<br />sig_start = file_size - sig_offset[0]<br />sig_size = sig_offset[0] - 6<br />f.seek(sig_start)<br />sig = f.read(sig_size)<br /><br />f.seek(0)<br /># 2 bytes comment length + 18 bytes string comment<br />sd = f.read(file_size - sig_offset[0] - 2 - 18)<br />f.close()<br /><br />sf = open(sys.argv[2], 'wb')<br />sf.write(sig)<br />sf.close()<br /><br />zf = open(sys.argv[3], 'wb')<br />zf.write(sd)<br />zf.close()<br /></pre><h3>Summary</h3><div>Android relies heavily on the Java JAR format, both for application packages (APKs) and for system updates (OTA packages). APK signing uses a subset of the JAR signing specification as is, while OTA packages use a custom format that generates a signature over the whole file. Standalone package verification can be performed with standard JDK tools or OpenSSL (after some preprocessing). The Android OS and recovery system follow the same verification procedures before installing APKs or applying system updates. 
In the next article we will explore how the OS uses package signatures and how they fit into Android's security model.&nbsp;</div>Nikolay Elenkovhttps://plus.google.com/117221066931981967754noreply@blogger.com13tag:blogger.com,1999:blog-2873091912851440312.post-85437458525203513342013-02-20T01:15:00.001+09:002013-02-22T11:00:13.786+09:00Secure USB debugging in Android 4.2.2It seems we somehow managed to let two months slip by without a single post. Time to get back on track, and the recently unveiled Android maintenance release provides a nice opportunity to jump start things. Official release notes for Android 4.2.2 don't seem to be available at this time, but it made its way into <a href="http://source.android.com/">AOSP</a> quite promptly, so you can easily compile your own changelog based on git log messages. Or, you can simply check the now traditional one over at <a href="http://aosp.changelog.to/JDQ39">Funky Android</a>. As you can see, there are quite a few changes, and if you want a higher level overview your time would probably be better spent reading some of the related <a href="http://www.androidpolice.com/tags/jdq39/">posts</a> by the usual <a href="http://www.androidpolice.com/">suspects</a>. Deviating from our usually somewhat obscure topics, we will focus on a new security feature that is quite visible and has received a fair bit of attention already. It was even <a href="http://android-developers.blogspot.jp/2013/02/security-enhancements-in-jelly-bean.html">introduced</a> on the official <a href="http://android-developers.blogspot.com/">Android Developers Blog</a>, fortunately for us only in brief. As usual, we like to dig a little deeper, so if you are interested in more details about the shiny new secure debugging feature, read on.<br /><h3>Why bother securing debugging?</h3><div>If you have done development in any programming environment, you know that 'debugging' is usually the exact opposite of 'secure'. 
Debugging typically involves inspecting (and sometimes even changing) internal program state, dumping encrypted communication data to log files, universal root access and other scary, but necessary activities. It is hard enough without having to bother with security, so why further complicate things by making developers jump through security hoops? As it turns out, Android debugging, as provided by the <a href="http://developer.android.com/tools/help/adb.html">Android Debug Bridge</a> (ADB), is quite versatile and gives you almost complete control over a device when enabled. This is, of course, very welcome if you are developing or testing an application (or the OS itself), but it can also be used for other purposes. Before we give an overview of those, here is a (non-exhaustive) list of things ADB lets you do:<br /><ul><li>debug apps running on the device (using <a href="http://docs.oracle.com/javase/1.5.0/docs/guide/jpda/jdwp-spec.html">JDWP</a>)</li><li>install and remove apps</li><li>copy files to and from the device</li><li>execute shell commands on the device</li><li>get the system and app logs</li></ul>If debugging is enabled on a device, you can do all of the above and more simply by connecting the device to a computer with a USB cable. If you think that's not much of a problem because the device is locked, here's some bad news: you don't have to unlock the device in order to execute ADB commands. And it gets worse -- if the device is rooted (as are many developer devices), you can access and change every single file, including system files and password databases. Of course, that is not the end of it: you don't actually need a computer with development tools in order to do this: another Android device and an <a href="http://en.wikipedia.org/wiki/USB_OTG">OTG</a> USB cable are sufficient. 
Security researchers, most notably Kyle Osborn, have built <a href="https://github.com/kosborn/p2p-adb">tools</a> (there's even a <a href="https://github.com/x942/p2pgui">GUI</a>) that automate this and try very hard to extract as much data as possible from the device in a very short time. As we mentioned, if the device is rooted all bets are off -- it is trivial to lift all of your credentials, disable or crack the device lock and even log into your Google account(s). But even without root, anything on external storage (SD card) is accessible (for example your precious photos), as are your contacts and text messages. See Kyle's presentations for details and other attack vectors.<br /><br />By now you should be at least concerned about leaving ADB access wide open, so let's look at some ways to secure it.<br /><h3>Securing ADB</h3></div><div>Despite some innovative attacks, none of the above is particularly new, but it has remained mostly unaddressed, probably because debugging is a developer feature regular users don't even know about. There have been some third-party solutions though, so let's briefly review those before introducing the one implemented in the core OS. Two of the more popular apps that allow you to control USB debugging are <a href="https://play.google.com/store/apps/details?id=com.ramdroid.adbtoggle">ADB Toggle</a> and&nbsp;<a href="https://play.google.com/store/apps/details?id=com.stericson.adbSecure">AdbdSecure</a>. They automatically disable ADB debugging when the device is locked or unplugged, and enable it again when you unlock it or plug in the USB cable. This is generally sufficient protection, but it has one major drawback -- starting and stopping the <code>adbd</code> daemon requires root access. If you want to develop and test apps on a device with stock firmware, you still have to disable debugging manually. 
Root access typically goes hand-in-hand with running custom firmware -- you usually need root access to flash a new ROM version (or at least it makes it much easier) and some of the apps shipping with those ROMs take advantage of root access to give you extra features not available in the stock OS (full backup, tethering, firewalls, etc.). As a result of this, custom ROMs have traditionally shipped with root access enabled (typically in the form of a SUID <code>su</code> binary and an accompanying 'Superuser' app). Thus, once you installed your favourite custom ROM you were automatically 'rooted'. <a href="http://www.cyanogenmod.org/">CyanogenMod</a> (which has over a million users and growing) changed this almost a year ago by disabling root access in their ROMs and giving you the option to enable it for apps only, for ADB, or for both. This is not a bad compromise -- you can both run root apps and have ADB enabled without exposing your device too much, and it can be used in combination with an app that automates toggling ADB for even more control. Of course, these solutions don't apply to the majority of Android users -- those running stock OS versions.<br /><br />The first step in making ADB access harder to reach was taken in Android 4.2, which hid the 'Developer options' settings screen, requiring you to use a <a href="http://developer.android.com/tools/device.html#setting-up">secret knock</a> in order to enable it. While this is mildly annoying for developers, it makes sure that most users cannot enable ADB access by accident. This is, of course, only a stop-gap measure, and once you manage to turn USB debugging on, your device is once again vulnerable. A proper solution was introduced in the 4.2.2 maintenance release with the so-called 'secure USB debugging' (it was actually committed almost a year ago, but for some reason didn't make it into the original JB release). 
'Secure' here refers to the fact that only hosts explicitly authorized by the user can now connect to the <code>adbd</code> daemon on the device and execute debugging commands. Thus if someone tries to connect a device to another one via USB in order to access ADB, they need to first unlock the target device and authorize access from the debug host by clicking 'OK' in the confirmation dialog shown below. You can make your decision persistent by checking the 'Always allow from this computer' checkbox, and debugging will work just as before, as long as you are on the same machine. One thing to note is that on tablets with multi-user support the confirmation dialog is only shown to the primary (administrator) user, so you will need to switch to it in order to enable debugging. Naturally this 'secure debugging' is only effective if you have a reasonably secure lock screen password in place, but everyone has one of those, right? That's pretty much all you need to know in order to secure your developer device, but if you are interested in how all of this is implemented under the hood, proceed to the next sections. We will first give a very brief overview of the ADB architecture and then show how it has been extended in order to support authenticated debugging.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-kYKWJ_9TjEo/USN-vQqpUPI/AAAAAAAAL3M/xozI-9JuLDM/s1600/adb-debug-confirmation.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="373" src="http://4.bp.blogspot.com/-kYKWJ_9TjEo/USN-vQqpUPI/AAAAAAAAL3M/xozI-9JuLDM/s400/adb-debug-confirmation.png" width="400" /></a></div><br /><h3>ADB overview</h3>The Android Debug Bridge serves two main purposes: it keeps track of all devices (or emulators) connected to a host, and it offers various services to its clients (command line clients, IDEs, etc.). 
It consists of three main components: the ADB server, the ADB daemon (<code>adbd</code>) and the default command line client (<code>adb</code>). The ADB server runs on the host machine as a background process and decouples clients from the actual devices or emulators. It monitors device connectivity and sets their state appropriately (<code>CONNECTED</code>, <code>OFFLINE</code>, <code>RECOVERY</code>, etc.). The ADB daemon runs on an Android device (or emulator) and provides the actual services clients use. It connects to the ADB server through USB or TCP/IP, and receives and processes commands from it. Finally, <code>adb</code> is the command line client that lets you send commands to a particular device. In practice it is implemented in the same binary as the ADB server and thus shares much of its code.<br /><br />The client talks to the local ADB server via TCP (typically via <code>localhost:5037</code>) using text based commands, and receives <code>OK</code> or <code>FAIL</code> responses in return. Some commands, like enumerating devices, port forwarding or server restart, are handled by the local ADB server, and some (e.g., shell or log access) naturally require a connection to the target Android device. Device access is generally accomplished by forwarding input and output streams to/from the host. The transport layer that implements this uses simple messages with a 24 byte header and an optional payload to exchange commands and responses. We will not go into further details about those, but will only note the newly added authentication commands in the next section. 
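The client&harr;server request format is simple enough to sketch in a few lines of Python. One detail not shown in the text above: each request is prefixed with its payload length as four hex digits, and on the wire the status is a four-byte <code>OKAY</code> or <code>FAIL</code>. The helper names below are mine, and the default server port 5037 is assumed:

```python
import socket

def frame(cmd):
    """ADB smart-socket framing: 4 hex digits of payload length + payload."""
    return ('%04x%s' % (len(cmd), cmd)).encode('ascii')

def query(cmd, host='127.0.0.1', port=5037):
    """Send one request to the local ADB server and return (status, reply).

    Requires a running ADB server; e.g. query('host:version') returns the
    server's internal version number as a hex string.
    """
    s = socket.create_connection((host, port))
    try:
        s.sendall(frame(cmd))
        status = s.recv(4)           # b'OKAY' or b'FAIL'
        length = int(s.recv(4), 16)  # hex-encoded length of the reply
        return status, s.recv(length)
    finally:
        s.close()
```

Running <code>query('host:devices')</code> against a live server returns the same device list that <code>adb devices</code> prints.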
For more details refer to the protocol description in <code>system/core/adb/protocol.txt</code> and this <a href="http://www.slideshare.net/tetsu.koba/adbandroid-debug-bridge-how-it-works">presentation</a>, which features quite a few helpful diagrams and examples.<br /><h3>Secure ADB implementation</h3>The ADB host authentication functionality is enabled by default when the <code>ro.adb.secure</code> system property is set to 1, and there is no way to disable it via the system settings interface (which is a good thing). The device is initially in the <code>OFFLINE</code> state and only goes into the <code>ONLINE</code> state once the host has authenticated. As you may already know, hosts use RSA keys in order to authenticate to the ADB daemon on the device. Authentication is typically a three-step process:<br /><ol><li>After a host tries to connect, the device sends an <code>AUTH</code> message of type <code>TOKEN</code> that includes a 20 byte random value (read from <code>/dev/urandom</code>).</li><li>The host responds with a <code>SIGNATURE</code> packet that includes a SHA1withRSA signature of the random token with one of its private keys.</li><li>The device tries to verify the received signature, and if signature verification succeeds, it responds with a <code>CONNECT</code> message and goes into the <code>ONLINE</code> state. If verification fails, either because the signature value doesn't match or because there is no corresponding public key to verify with, the device sends another <code>AUTH TOKEN</code> with a new random value, so that the host can try authenticating again (slowing down if the number of failures goes over a certain threshold).</li></ol>Signature verification typically fails the first time you connect the device to a new host because it doesn't yet have the host key. In that case the host sends its public key in an <code>AUTH RSAPUBLICKEY</code> message. 
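The three-step exchange above can be modeled as a small state machine. This is only an illustrative toy model of the device side, not the real <code>adbd</code> code: the SHA1withRSA verification is abstracted into a callback, and no retry throttling is shown:

```python
import os

class DeviceAuth:
    """Toy model of adbd's AUTH flow (illustrative only)."""

    def __init__(self, verify_signature):
        # verify_signature(token, sig) -> True if sig matches a known key
        self.verify_signature = verify_signature
        self.online = False
        self.failures = 0
        self.token = None

    def start(self):
        # step 1: send an AUTH TOKEN message with 20 random bytes
        self.token = os.urandom(20)
        return ('AUTH', 'TOKEN', self.token)

    def on_signature(self, sig):
        # steps 2-3: the host signed the token; verify and go ONLINE,
        # or issue a fresh token so the host can try another key
        if self.verify_signature(self.token, sig):
            self.online = True
            return ('CONNECT',)
        self.failures += 1
        return self.start()
```

Note that a fresh token is generated for every retry, so a captured signature cannot be replayed later.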
The device takes the MD5 hash of that key and displays it in the 'Allow USB debugging' confirmation dialog. Since <code>adbd</code> is a native daemon, the key needs to be passed to the main Android OS. This is accomplished by simply writing the key to a local socket (aptly named 'adbd'). When you enable ADB debugging from the developer settings screen, a thread that listens to the 'adbd' socket is started. When it receives a message starting with <code>"PK"</code> it treats it as a public key, parses it, calculates the MD5 hash and displays the confirmation dialog (actually an activity, part of the <code>SystemUI</code> package). If you tap 'OK', it sends a simple <code>"OK"</code> response and <code>adbd</code> uses the key to verify the authentication message (otherwise it just stays offline). In case you check the 'Always allow from this computer' checkbox, the public key is written to disk and automatically used for signature verification the next time you connect to the same host. The allow/deny debugging functionality, along with starting/stopping the <code>adbd</code> daemon, is exposed as public methods of the <code>UsbDeviceManager</code> system service.</div><div><br /></div><div>We've described the ADB authentication protocol in some detail, but haven't said much about the actual keys used in the process. Those are 2048-bit RSA keys and are generated by the local ADB server. They are typically stored in <code>$HOME/.android</code> as <code>adbkey</code> and <code>adbkey.pub</code>. On Windows that usually translates to <code>%USERPROFILE%\.android</code>, but keys might end up in <code>C:\Windows\System32\config\systemprofile\.android</code> in some cases (see issue&nbsp;<a href="http://code.google.com/p/android/issues/detail?id=49465">49465</a>). The default key directory can be overridden by setting the <code>ANDROID_SDK_HOME</code> environment variable. 
If the <code>ADB_VENDOR_KEYS</code> environment variable is set, the directory it points to is also searched for keys. If no keys are found in any of the above locations, a new key pair is generated and saved. On the device, keys are stored in the <code>/data/misc/adb/adb_keys</code> file, and new authorized keys are appended to the same file as you accept them. Read-only 'vendor keys' are stored in the <code>/adb_keys</code> file, but it doesn't seem to exist on current Nexus devices. The private key is in standard OpenSSL PEM format, while the public one consists of the Base64-encoded key followed by a `user@host` user identifier, separated by a space. The user identifier doesn't seem to be used at the moment and is only meaningful on Unix-based OS'es; on Windows it is always 'unknown@unknown'.&nbsp;</div><div><br />While the USB debugging confirmation dialog helpfully displays a key fingerprint to let you verify you are connected to the expected host, the <code>adb</code> client doesn't have a handy command to print the fingerprint of the host key. You might think that there is little room for confusion: after all there is only one cable plugged into a single machine, but if you are running a couple of VMs, things can get a little fuzzy. 
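Given the <code>adbkey.pub</code> format just described, the fingerprint the confirmation dialog shows (an MD5 hash of the decoded key bytes) can be recomputed programmatically. A short Python sketch, assuming a standard single-line public key file:

```python
import base64
import hashlib

def adb_key_fingerprint(pubkey_line):
    """Return the colon-separated MD5 fingerprint of an adbkey.pub line.

    The line format is '<Base64 key> user@host'; only the key part is
    decoded and hashed, matching the 'Allow USB debugging' dialog.
    """
    key = base64.b64decode(pubkey_line.split()[0])
    digest = hashlib.md5(key).hexdigest().upper()
    return ':'.join(digest[i:i + 2] for i in range(0, len(digest), 2))

# usage: print(adb_key_fingerprint(open('adbkey.pub').read()))
```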
Here's one way of displaying the host key's fingerprint in the same format the confirmation dialog uses (run in <code>$HOME/.android</code> or specify the full path to the public key file):<br /><br /><pre>awk '{print $1}' &lt; adbkey.pub|openssl base64 -A -d -a \<br />|openssl md5 -c|awk '{print $2}'|tr '[:lower:]' '[:upper:]'<br /></pre><br />We've reviewed how secure ADB debugging is implemented and have shown why it is needed, but just to show that all of this solves a real problem, we'll finish off with a screenshot of what a failed ADB attack against a 4.2.2 device from another Android device looks like:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-1qhk4Ck5Nvs/USObO6T_69I/AAAAAAAAL3c/YIr50qWytpw/s1600/p2p-adb-offline.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="http://3.bp.blogspot.com/-1qhk4Ck5Nvs/USObO6T_69I/AAAAAAAAL3c/YIr50qWytpw/s640/p2p-adb-offline.png" width="360" /></a></div><br /><h3>Summary</h3></div><div>Android 4.2.2 finally adds a means to control USB access to the ADB daemon by requiring debug hosts to be explicitly authorized by the user and added to a whitelist. This helps prevent information extraction via USB, which requires only brief physical access and has been demonstrated to be quite effective. 
While secure debugging is not a feature most users will ever use directly, along with full disk encryption and a good screen lock password, it goes a long way towards making developer devices more secure.&nbsp;</div>Nikolay Elenkovhttps://plus.google.com/117221066931981967754noreply@blogger.com23tag:blogger.com,1999:blog-2873091912851440312.post-52586516951511008422012-12-13T01:42:00.001+09:002013-05-07T13:03:53.445+09:00Certificate pinning in Android 4.2A lot has happened in the Android world since our <a href="http://nelenkov.blogspot.com/2012/11/sso-using-account-manager.html">last post</a>, with <a href="http://googleblog.blogspot.com/2012/10/nexus-best-of-google-now-in-three-sizes.html">new devices</a> being announced and going on and off sale. Most importantly, however, Android 4.2 has been <a href="http://www.android.com/whatsnew/">released</a> and made its way to <a href="http://source.android.com/">AOSP</a>. It's an evolutionary upgrade, bringing various improvements and some new user and developer <a href="http://developer.android.com/about/versions/jelly-bean.html">features</a>. This time around, security-related enhancements made it into the what's new list, and there are quite a lot of them. The most widely publicized one has been, as expected, the one users may actually see -- application verification. It recently got an <a href="http://www.cs.ncsu.edu/faculty/jiang/appverify/">in-depth analysis</a>, so in this post we will look into something less visible, but nevertheless quite important -- certificate pinning.<br /><h3>PKI's trust problems and proposed solutions</h3><div>In the highly unlikely case that you haven't heard about it, the trustworthiness of the existing public CA model has been severely compromised in the past couple of years. 
It has been suspect for a while, but recent high profile CA <a href="http://www.comodo.com/Comodo-Fraud-Incident-2011-03-23.html">security</a> <a href="http://www.f-secure.com/weblog/archives/00002228.html">breaches</a> have brought this problem into the spotlight. Attackers managed to issue certificates for a wide range of sites, including Windows Update servers and Gmail. Not all of those were used (or at least not detected) in real attacks, but the incidents showed just how much of current Internet technology depends on certificates. Fraudulent ones can be used for anything from installing malware to spying on Internet communication, all while fooling users that they are using a secure channel or installing a trusted executable. And better security for CA's is not really a solution: major CA's have <a href="https://www.eff.org/deeplinks/2011/04/unqualified-names-ssl-observatory">willingly issued</a> <i>hundreds</i> of certificates for unqualified names such as <code>localhost</code>, <code>webmail</code> and <code>exchange</code> (here is a <a href="http://www.prism.gatech.edu/~gmacon3/ssl-observatory/unqualified_local_rfc1918_all.txt">breakdown</a>, by number of issued certificates). These could enable eavesdropping on internal corporate traffic by using the certificates for a man-in-the-middle (MITM) attack against any internal host accessed using an unqualified name. And of course there is also the matter of <a href="http://files.cloudprivacy.net/ssl-mitm.pdf">compelled certificate creation</a>, where a government agency could compel a CA to issue a false certificate to be used for intercepting secure traffic (and all this may be perfectly legal).&nbsp;</div><div><br /></div><div>Clearly the current PKI system, which is largely based on a pre-selected set of trusted CA's (trust anchors), is problematic, but what are some of the actual problems? There are different takes on this one, but for starters, there are too many public CA's. 
As this <a href="https://www.eff.org/files/colour_map_of_CAs.pdf">map</a> by the <a href="https://www.eff.org/">EFF</a>'s <a href="https://www.eff.org/observatory">SSL Observatory</a> project shows, there are more than 650 public CA's trusted by major browsers. Recent Android versions ship with over one hundred (140 for 4.2) trusted CA certificates and <a href="http://nelenkov.blogspot.jp/2011/12/ics-trust-store-implementation.html">until ICS</a> the only way to remove a trusted certificate was a vendor-initiated OS OTA. Additionally, there is generally no technical restriction to what certificates CA's can issue: as the Comodo and DigiNotar attacks have shown, anyone can issue a certificate for <code>*.google.com</code> (<a href="http://tools.ietf.org/html/rfc5280#section-4.2.1.10">name constraints</a> don't apply to root CA's and don't really work for a public CA). Furthermore, since CA's don't publicize what certificates they have issued, there is no way for site operators (in this case Google) to know when someone issues a new, possibly fraudulent, certificate for one of their sites and take appropriate action (the <a href="http://www.imperialviolet.org/2012/11/06/certtrans.html">certificate transparency</a> standard aims to address this). In short, with the current system if any of the built-in trust anchors is compromised, an attacker could issue a certificate for any site, and neither users accessing it, nor the owner of the site would notice. So what are some of the proposed solutions?&nbsp;</div><div><br /></div><div>Proposed solutions range from radical: scrap the whole PKI idea altogether and replace it with something new and better (DNSSEC is a usual favourite); through moderate: use the current infrastructure, but do not implicitly trust CA's; to evolutionary: maintain compatibility with the current system, but extend it in ways that limit the damage of CA compromise. 
DNSSEC is still not universally deployed, although the key TLD domains have already been signed. Additionally, it is inherently hierarchical and actually more rigid than PKI, so it doesn't really fit the bill too well. Other even remotely viable solutions have yet to emerge, so we can safely say that the radical path is currently out of the picture. Moving towards the moderate side, some people suggest the SSH model, in which no sites or CA's are initially trusted, and users decide what site to trust on first access. Unlike SSH however, the number of sites that you access directly or indirectly (via CDN's, embedded content, etc.) is virtually unlimited, and user-managed trust is quite unrealistic. In a similar vein, but much more practical, is <a href="https://twitter.com/moxie">Moxie Marlinspike</a>'s (of <a href="http://www.thoughtcrime.org/software/sslstrip/">sslstrip</a> and <a href="https://www.cloudcracker.com/">CloudCracker</a> fame) <a href="http://convergence.io/">Convergence</a>. It is based on the idea of <i>trust agility</i>, a concept he introduced in his <a href="http://www.youtube.com/watch?v=Z7Wl2FW2TcA">SSL And The Future Of Authenticity</a> talk (and related <a href="http://www.thoughtcrime.org/blog/ssl-and-the-future-of-authenticity/">blog post</a>). It both abolishes the browser (or OS) pre-selected trust anchor set, and recognizes that users cannot possibly independently make trust decisions about all the sites they visit. Trust decisions are delegated to a set of notaries that can vouch for a site by basically confirming that the certificate you receive from a site is one they have seen before. If multiple notaries point out the same certificate as correct, users can be reasonably sure that it is genuine and therefore trustworthy. 
Convergence is not a formal standard, but was released as actual <a href="https://github.com/moxie0/Convergence">working code</a> including a Firefox plugin (client) and server-side notary software. While this system is promising, the number of available notaries is currently <a href="https://github.com/moxie0/Convergence/wiki/Notaries">limited</a>, and Google has <a href="http://www.imperialviolet.org/2011/09/07/convergence.html">publicly stated</a> that it won't add it to Chrome; it cannot currently be implemented as an extension either (Chrome lacks the necessary API's to let plugins override the default certificate validation module).</div><div><br /></div><div>That leads us to the current evolutionary solutions, which have been deployed to a fairly large user base, mostly courtesy of the Chrome browser. One is certificate blacklisting, which is more of a band-aid solution: in addition to removing compromised CA certificates from the trust anchor set with a browser update, it also explicitly refuses to trust their public keys in order to cover the case where they are manually added to the trust store again. Chrome <a href="http://src.chromium.org/viewvc/chrome/trunk/src/net/base/x509_certificate.cc?view=markup&amp;pathrev=78478">added</a> blacklisting around the time Comodo was compromised, and Android has had this feature since the original <a href="http://nelenkov.blogspot.com/2012/07/certificate-blacklisting-in-jelly-bean.html">Jelly Bean release</a> (4.1). The next one, certificate pinning (more accurately, public key pinning), takes the converse approach: it whitelists the keys that are trusted to sign certificates for a particular site. Let's look at it in a bit more detail.</div><h3>Certificate pinning</h3><div>Pinning was <a href="http://www.imperialviolet.org/2011/05/04/pinning.html">introduced</a> in Google Chrome 13 in order to limit the CA's that can issue certificates for Google properties. 
It actually helped discover the <a href="http://googleonlinesecurity.blogspot.jp/2011/08/update-on-attempted-man-in-middle.html">MITM attack</a> against Gmail, which resulted from the DigiNotar breach. It is implemented by maintaining a list of public keys that are trusted to issue certificates for a particular DNS name. The list is consulted when validating the certificate chain for a host, and if the chain doesn't include at least one of the whitelisted keys, validation fails. In practice the browser keeps a list of SHA1 hashes of the <code>SubjectPublicKeyInfo</code> (SPKI) field of trusted certificates. Pinning the public keys instead of the actual certificates allows for updating host certificates without breaking validation or requiring a pinning information update. You can find the current Chrome list <a href="http://src.chromium.org/viewvc/chrome/trunk/src/net/base/transport_security_state_static.h">here</a>.<br /><br />As you can see, the list now pins non-Google sites as well, such as <code>twitter.com</code> and <code>lookout.com</code>, and is rather large. Including more sites will only make it larger, and it is quite obvious that hard-coding pins doesn't really scale. A couple of new Internet standards have been proposed to help solve this scalability problem: <a href="http://tools.ietf.org/html/draft-ietf-websec-key-pinning-04">Public Key Pinning Extension for HTTP</a> (PKPE) by Google and <a href="http://tack.io/draft.html">Trust Assertions for Certificate Keys</a> (TACK) by Moxie Marlinspike. The first one is simpler and proposes a new HTTP header (<code>Public-Key-Pins</code>, PKP) that holds pinning information including public key hashes, pin lifetime and whether to apply pinning to subdomains of the current host. Pinning information (or simply 'pins') is cached by the browser and used when making trust decisions until it expires. 
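A PKP-style header value (a semicolon-separated directive list with <code>pin-&lt;alg&gt;="base64"</code> entries plus flags like <code>max-age</code>) can be parsed roughly as follows. This is a sketch based on the draft's general syntax, not a conformant implementation, and the example header value is made up:

```python
def parse_pkp_header(value):
    """Split a Public-Key-Pins style value into pins and other directives.

    Example input: 'max-age=31536000; pin-sha256="hash="; includeSubDomains'
    """
    pins = []        # list of (hash algorithm, base64 hash) tuples
    directives = {}  # everything else, lower-cased directive name -> value
    for part in value.split(';'):
        part = part.strip()
        if not part:
            continue
        name, _, val = part.partition('=')
        name, val = name.strip().lower(), val.strip().strip('"')
        if name.startswith('pin-'):
            pins.append((name[4:], val))
        else:
            directives[name] = val
    return pins, directives
```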
Pins are required to be delivered over a secure (TLS) connection, and the first connection that includes a PKP header is implicitly trusted (or optionally validated against pins built into the client). The protocol also supports an endpoint to report failed validations to via the <code>report-uri</code> directive and allows for a non-enforcing mode (specified with the <code>Public-Key-Pins-Report-Only</code> header), where validation failures are reported, but connections are still allowed. This makes it possible to notify host administrators about possible MITM attacks against their sites, so that they can take appropriate action. The TACK proposal, on the other hand, is somewhat more complex and defines a new TLS extension (TACK) that carries pinning information signed with a dedicated 'TACK key'. TLS connections to a pinned hostname require the server to present a 'tack' containing the pinned key and a corresponding signature over the TLS server's public key. Thus both pinning information exchange and validation are carried out at the TLS layer. In contrast, PKPE uses the HTTP layer (over TLS) to send pinning information to clients, but also requires validation to be performed at the TLS layer, dropping the connection if validation against the pins fails. Now that we have an idea how pinning works, let's see how it's implemented on Android.<br /><h3>Certificate pinning in Android</h3></div><div>As mentioned at the beginning of the post, pinning is one of the many security enhancements introduced in Android 4.2. The OS doesn't come with any built-in pins, but instead reads them from a file in the <code>/data/misc/keychain</code> directory (where user-added <a href="http://nelenkov.blogspot.com/2011/12/ics-trust-store-implementation.html">certificates</a> and <a href="http://nelenkov.blogspot.com/2012/07/certificate-blacklisting-in-jelly-bean.html">blacklists</a> are stored). 
The file is called, you guessed it, simply <code>pins</code> and is in the following format: <code>hostname=enforcing|SPKI SHA512 hash, SPKI SHA512 hash,...</code>. Here <code>enforcing</code> is either <code>true</code> or <code>false</code> and is followed by a list of SPKI hashes (SHA512) separated by commas. Note that there is no validity period, so pins are valid until deleted. The file is used not only by the browser, but system-wide by virtue of pinning being integrated in libcore. In practice this means that the default (and only) system <code>X509TrustManager</code> implementation (<code>TrustManagerImpl</code>) consults the pin list when validating certificate chains. However there is a twist: the standard <code>checkServerTrusted()</code> method doesn't consult the pin list. Thus any legacy libraries that do not know about certificate pinning would continue to function exactly as before, regardless of the contents of the pin list. This has probably been done for compatibility reasons, and is something to be aware of: running on 4.2 doesn't necessarily mean that you get the benefit of system-level certificate pins. The pinning functionality is exposed to third party libraries or SDK apps via the new <a href="http://developer.android.com/reference/android/net/http/X509TrustManagerExtensions.html"><code>X509TrustManagerExtensions</code></a> SDK class. It has a single method, <code>List&lt;X509Certificate&gt; checkServerTrusted(X509Certificate[] chain, String authType, String host)</code> that returns a validated chain on success or throws a <code>CertificateException</code> if validation fails. Note the last parameter, <code>host</code>. This is what the underlying implementation (<code>TrustManagerImpl</code>) uses to search the pin list for matching pins. If one is found, the public keys in the chain being validated will be checked against the hashes in the pin entry for that host. 
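Concretely, that per-host lookup-and-match step can be sketched like this (illustrative Python, not the actual libcore code; the pin hashes are hex-encoded SHA512 digests of each certificate's DER-encoded SPKI, as stored in the <code>pins</code> file):

```python
import hashlib

def parse_pin_entry(line):
    """Parse one pins-file line: 'hostname=enforcing|hash,hash,...'."""
    host, rest = line.strip().split('=', 1)
    enforcing, hashes = rest.split('|', 1)
    return host, enforcing == 'true', {h.strip().lower()
                                       for h in hashes.split(',')}

def chain_matches_pins(spki_ders, pin_hashes):
    """True if any public key in the chain hashes into the pin set."""
    return any(hashlib.sha512(der).hexdigest() in pin_hashes
               for der in spki_ders)
```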
If none of them matches, validation will fail and you will get a <code>CertificateException</code>. So what part of the system uses the new pinning functionality then? The default SSL engine (JSSE provider), namely the client handshake (<code>ClientHandshakeImpl</code>) and SSL socket (<code>OpenSSLSocketImpl</code>) implementations. They check their underlying <code>X509TrustManager</code> and, if it supports pinning, perform additional validation against the pin list. If validation fails, the connection won't be established, thus implementing pin validation on the TLS layer as required by the standards discussed in the previous section. We now know what the pin list is and who uses it, so let's find out how it is created and maintained.<br /><br />First off, at the time of this writing, Google-managed (on Nexus devices) JB 4.2 installations have an empty pin list (i.e., the <code>pins</code> file doesn't exist). Thus certificate pinning on Android has not been widely deployed yet. Eventually it will be, but the current state of affairs makes it easier to play with, because restoring to factory state requires simply deleting the <code>pins</code> file and associated metadata (root access required). As you might expect, the <code>pins</code> file is not written directly by the OS. Updating it is triggered by a broadcast (<code>android.intent.action.UPDATE_PINS</code>) that contains the new pins in its extras. The extras contain the path to the new pins file, its new version (stored in <code>/data/misc/keychain/metadata/version</code>), a hash of the current pins and a <code>SHA512withRSA</code> signature over all the above. The receiver of the broadcast (<code>CertPinInstallReceiver</code>) will then verify the version, hash and signature, and if valid, atomically replace the current pins file with the new content (the same procedure is used for updating the premium SMS numbers list). 
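The receiver's checks can be summarized in a few lines. This is a simplified sketch, not the <code>CertPinInstallReceiver</code> code: the hash function and the exact signed payload are assumptions here, and the RSA signature check is abstracted into a callback:

```python
import hashlib

def validate_pins_update(current_pins, current_version,
                         new_pins, new_version,
                         claimed_current_hash, verify_signature):
    """Accept a pins update only if version, hash and signature check out.

    verify_signature(payload) stands in for the SHA512withRSA check
    against the update key stored in secure settings.
    """
    # the new version must be strictly newer than the installed one
    if int(new_version) <= int(current_version):
        return False
    # the update must be built against the pins currently on disk
    if hashlib.sha512(current_pins).hexdigest() != claimed_current_hash:
        return False
    # finally, the payload must carry a valid signature
    return verify_signature(new_pins + new_version.encode())
```

Tying the update to a hash of the *current* pins prevents replaying an otherwise validly signed but stale update.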
Signing the new pins ensures that they can only be updated by whoever controls the private signing key. The corresponding public key used for validation is stored as a system secure setting under the <code>"config_update_certificate"</code> key (usually in the <code>secure</code> table of <code>/data/data/com.android.providers.settings/databases/settings.db</code>). Just like the <code>pins</code> file, this value currently doesn't exist, so it's relatively safe to install your own key in order to test how pinning works. Restoring to factory state requires deleting the corresponding row from the <code>secure</code> table. This covers the current pinning implementation in Android; it's now time to actually try it out.</div><h3>Using certificate pinning</h3><div>To begin with, if you are considering using pinning in an Android app, you don't need the latest and greatest OS version. If you are connecting to a server that uses a self-signed or a private CA-issued certificate, chances are you are already using pinning. Unlike a browser, your Android app doesn't need to connect to practically every possible host on the Internet, but only to a limited number of servers that you know and have control over (limited control in the case of hosted services). Thus you know in advance who issued your certificates and only need to trust their key(s) in order to establish a secure connection to your server(s). If you are initializing a <code>TrustManagerFactory</code> with your own keystore file that contains the issuing certificate(s) of your server's SSL certificate, you are already using pinning: since you don't trust any of the built-in trust anchors (CA certificates), if any of those got compromised your app won't be affected (unless it also talks to affected public servers). If you, for some reason, need to use the default trust anchors as well, you can define pins for your keys and validate them after the default system validation succeeds.
For more thoughts on this and some sample code (doesn't support ICS and later, but there is a pull request with the required changes), refer to <a href="http://www.thoughtcrime.org/blog/authenticity-is-broken-in-ssl-but-your-app-ha/">this post</a> by Moxie Marlinspike. <i>Update: </i> Moxie has repackaged his sample pinning code in an easy-to-use <a href="https://github.com/moxie0/AndroidPinning">standalone library</a>. <i>Update 2</i>: His version uses a static, app-specific trust store. Here's a <a href="https://github.com/nelenkov/AndroidPinning">fork</a> that uses the system trust store, both on pre-ICS (<code>cacerts.bks</code>) and post-ICS (<a href="http://nelenkov.blogspot.jp/2011/12/ics-trust-store-implementation.html"><code>AndroidCAStore</code></a>) devices.<br /><br />Before we (finally!) start using pinning in 4.2, a word of warning: using the sample code presented below both requires root access and modifies core system files. It does have some limited safety checks, but it might break your system. If you decide to run it, make sure you have a <b>full </b>system backup and proceed with caution.<br /><br />As we have seen, pins are stored in a simple text file, so we can just write one up and place it in the required location. It will be picked up and used by the system <code>TrustManager</code>, but that is not much fun and is not how the system actually works. We will go through the 'proper' channel instead by creating and sending a correctly signed update broadcast. To do this, we first need to create and install a signing key. The <a href="https://github.com/nelenkov/cert-pinner">sample app</a> has one embedded, so you can just use that, or generate and load a new one using OpenSSL (convert to PKCS#8 format to include in Java code).
To install the key we need the <code>WRITE_SECURE_SETTINGS</code> permission, which is only granted to system apps, so we must either sign our test app with the platform key (on a self-built ROM) or copy it to <code>/system/app</code> (on a rooted phone with stock firmware). Once this is done, we can install the key by updating the <code>"config_update_certificate"</code> secure setting:<br /><br /><pre>Settings.Secure.putString(ctx.getContentResolver(), "config_update_certificate", <br /> "MIICqDCCAZAC...");<br /></pre><br />If this is successful, we can proceed to constructing our update request. This requires reading the current pin list version (from <code>/data/misc/keychain/metadata/version</code>) and the current pins file content. Initially both should be empty, so we can just start off with 0 and an empty string. We can then create our pins file, concatenate it with the above and sign the whole thing before sending the <code>UPDATE_PINS</code> broadcast. For updates, things are a bit trickier since the <code>metadata/version</code> file's permissions don't allow reading by a third-party app. We work around this by launching a root shell to get the file contents with <code>cat</code>, so don't be alarmed if you get a 'Grant root?' popup from SuperSU or its brethren. Hashing and signing are pretty straightforward, but creating the new pins file merits some explanation.<br /><br />To make it easier to test, we create (or append to) the pins file by connecting to the URL specified in the app and pinning the public keys in the host's certificate chain (we'll use <code>www.google.com</code> in this example, but any host accessible over HTTPS should do). Note that we don't actually pin the host's SSL certificate: this is to allow for the case where the host key is lost or compromised and a new certificate is issued to the host. This is introduced in the PKPE draft as a necessary security trade-off to allow for host certificate updates.
Also note that in the case of one (or more) intermediate CA certificates we pin both the issuing certificate's key(s) <b>and</b> the root certificate's key. This is to allow for testing more variations, but is not something you would want to do in practice: for a connection to be considered valid, only one of the keys in the pin entry needs to be in the host's certificate chain. In the case that this is the root certificate's key, connections to hosts with certificates issued by a compromised intermediary CA will be allowed (think hacked root CA reseller). And above all, getting and creating pins based on certificates you receive from a host on the Internet is obviously pointless if you are already the target of a MITM attack. For the purposes of this test, we assume that this is not the case. Once we have all the data, we fire the update intent, and if it checks out, the pins file will be updated (watch the logcat output to confirm). The code for this will look something like the following (largely based on pinning unit test code in AOSP); with that in place, it is time to test whether pinning actually works.
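The snippet that follows relies on a few helper methods. Here are minimal sketches of what <code>getFingerprint()</code> and <code>createSignature()</code> might look like. These are illustrative reconstructions, not the sample app's actual code: the explicit signing-key parameter, the exact concatenation order, and the use of <code>java.util.Base64</code> (instead of <code>android.util.Base64</code>) are assumptions.

```java
import java.security.MessageDigest;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;
import java.security.cert.X509Certificate;
import java.util.Base64;

public class PinHelpers {

    // SPKI SHA-512 fingerprint: a hex-encoded SHA-512 hash over the
    // DER-encoded SubjectPublicKeyInfo of the certificate's public key.
    public static String getFingerprint(X509Certificate cert) throws Exception {
        return getFingerprint(cert.getPublicKey());
    }

    public static String getFingerprint(PublicKey key) throws Exception {
        // getEncoded() returns the DER-encoded SubjectPublicKeyInfo for X.509 keys
        byte[] digest = MessageDigest.getInstance("SHA-512").digest(key.getEncoded());
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // SHA512withRSA signature over the new pins content, the new version and
    // the hash of the current pins file, as described in the text
    // (the concatenation order shown here is an assumption).
    public static String createSignature(PrivateKey signingKey, String content,
            String version, String requiredHash) throws Exception {
        Signature sig = Signature.getInstance("SHA512withRSA");
        sig.initSign(signingKey);
        sig.update(content.getBytes("UTF-8"));
        sig.update(version.getBytes("UTF-8"));
        sig.update(requiredHash.getBytes("UTF-8"));
        return Base64.getEncoder().encodeToString(sig.sign());
    }
}
```

The private key passed to <code>createSignature()</code> must, of course, correspond to the public key installed under <code>"config_update_certificate"</code>, or <code>CertPinInstallReceiver</code> will reject the update.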
<br /><br /><pre>URL url = new URL("https://www.google.com");<br />HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();<br />conn.setRequestMethod("GET");<br />conn.connect();<br /><br />// pin the issuer's key (chain[1]), not the host certificate's<br />Certificate[] chain = conn.getServerCertificates();<br />X509Certificate cert = (X509Certificate) chain[1];<br />String pinEntry = String.format("%s=true|%s", url.getHost(), getFingerprint(cert));<br />String contentPath = makeTemporaryContentFile(pinEntry);<br />String version = getNextVersion("/data/misc/keychain/metadata/version");<br />String currentHash = getHash("/data/misc/keychain/pins");<br />String signature = createSignature(pinEntry, version, currentHash);<br /><br />Intent i = new Intent();<br />i.setAction("android.intent.action.UPDATE_PINS");<br />i.putExtra("CONTENT_PATH", contentPath);<br />i.putExtra("VERSION", version);<br />i.putExtra("REQUIRED_HASH", currentHash);<br />i.putExtra("SIGNATURE", signature);<br />sendBroadcast(i);<br /></pre></div><br /><div>We have now pinned <code>www.google.com</code>, but how do we test whether the connection will actually fail? There are multiple ways to do this, but to make things a bit more realistic we will launch a MITM attack of sorts by using an SSL proxy. We will use the <a href="http://portswigger.net/burp/proxy.html">Burp</a> proxy, which works by generating a new temporary (ephemeral) certificate on the fly for each host you connect to (if you prefer a terminal-based solution, try <a href="http://mitmproxy.org/">mitmproxy</a>). If you install Burp's root certificate in Android's trust store and are not using pinning, browsers and other HTTP clients have no way of distinguishing the ephemeral certificate Burp generates from the real one and will happily allow the connection. This allows Burp to decrypt the secure channel on the fly and enables you to view and manipulate traffic as you wish (strictly for research purposes, of course).
Refer to the <a href="http://portswigger.net/burp/help/suite_gettingstarted.html">Getting Started</a> page for help with setting up Burp. Once we have Burp all set up, we need to configure Android to use it. While Android does support HTTP proxies, those are generally only used by the built-in browser and it is not guaranteed that HTTP libraries will use the proxy settings as well. Since Android is, after all, Linux, we can easily take care of this by setting up a 'transparent' proxy that redirects all HTTP traffic to our chosen host by using <code>iptables</code>. If you are not comfortable with <code>iptables</code> syntax or simply prefer an easy-to-use GUI, there's an app for that as well: <a href="https://play.google.com/store/apps/details?id=org.proxydroid">Proxy Droid</a>. After setting up Proxy Droid to forward packets to our Burp instance, we should have all Android traffic flowing through our proxy. Open a couple of pages in the browser to confirm before proceeding further (make sure Burp's 'Intercept' button is off if traffic seems stuck).<br /><br />Finally, time to connect! The <a href="https://github.com/nelenkov/cert-pinner">sample app</a> allows you to test the connection with both of Android's HTTP libraries (<code>HttpURLConnection</code> and Apache's <code>HttpClient</code>); just press the corresponding 'Check w/ ...' button.
Since validation is done at the TLS layer, the connection shouldn't be allowed and you should see something like this (the error message may say '<i>No peer certificates</i>' for <code>HttpClient</code>; this is due to the way it handles validation errors):<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-8pQhukAJAZo/UMb4Ar-P9xI/AAAAAAAAKPM/nzKav_XQIaY/s1600/huc-pinning-error.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-8pQhukAJAZo/UMb4Ar-P9xI/AAAAAAAAKPM/nzKav_XQIaY/s400/huc-pinning-error.png" width="225" /></a></div><br /><br />If you instead see a message starting with '<i>X509TrustManagerExtensions verify result: Error verifying chain...</i>', the connection did go through but our additional validation using the <code>X509TrustManagerExtensions</code> class detected the changed certificate and failed. This shouldn't happen, right? It does, though, because HTTP clients cache connections (<code>SSLSocket</code> instances, which in turn each hold an <code>X509TrustManager</code> instance, which only reads pins when created). The easiest way to make sure pins are picked up is to reboot the phone after you pin your test host.
If you try connecting with the Android browser after rebooting (not Chrome!), you will be greeted with this message:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-TdtyKlLKJLY/UMb5FWarAXI/AAAAAAAAKPU/88Bm9jDqmqQ/s1600/browser-pin-error.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://3.bp.blogspot.com/-TdtyKlLKJLY/UMb5FWarAXI/AAAAAAAAKPU/88Bm9jDqmqQ/s400/browser-pin-error.png" width="225" /></a></div><br />As you can see, the certificate for <code>www.google.com</code> is issued by our Burp CA, but it might as well be from DigiNotar: if the proper public keys are pinned, Android should detect the fraudulent host certificate and show a warning. This works because the Android browser is using the system trust store and pins via the default <code>TrustManager</code>, even though it doesn't use JSSE SSL sockets. Connecting with Chrome, on the other hand, works fine even though it does have built-in pins for Google sites: Chrome allows manually installed trust anchors to override system pins so that tools such as Burp or Fiddler continue to work (or pinning is not yet enabled on Android, which is somewhat unlikely). <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-QIRg8EbtGwQ/UMb7N7UqvEI/AAAAAAAAKPc/psfmNcJj4ss/s1600/chrome-connection.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-QIRg8EbtGwQ/UMb7N7UqvEI/AAAAAAAAKPc/psfmNcJj4ss/s400/chrome-connection.png" width="225" /></a></div><br />So there you have it: pinning on Android works. If you look at the <a href="https://github.com/nelenkov/cert-pinner">sample code</a>, you will see that we have created enforcing pins, and that is why we get connection errors when connecting through the proxy.
If you set the enforcing parameter to <code>false</code> instead, the connection will be allowed, but chains that failed validation will still be recorded to the system dropbox (<code>/data/system/dropbox</code>) in <code>cert_pin_failure@timestamp.txt</code> files, one for each validation failure.</div><div><br /><h3>Summary</h3>Android adds certificate pinning by keeping a pin list with an entry for each pinned DNS name. Pin entries include a host name, an enforcing parameter and a list of SPKI SHA512 hashes of the keys that are allowed to sign a certificate for that host. The pin list is updated by sending a broadcast with signed update data. Applications using the default HTTP libraries get the benefit of system-level pinning automatically or can explicitly check a certificate chain against the pin list by using the <code>X509TrustManagerExtensions</code> SDK class. Currently the pin list is empty, but the functionality is available now, and once pins for major sites are deployed this will add another layer of defense against MITM attacks that follow after a CA has been compromised.<br /><br /></div><h2>Single sign-on to Google sites using AccountManager</h2>In the <a href="http://nelenkov.blogspot.jp/2012/11/android-online-account-management.html">first part</a> of this series, we presented how the standard Android online account management framework works and explored how Google account authentication and authorization modules are implemented on Android. In this article we will see how to use the Google credentials stored on the device to log in to Google Web sites automatically.
Note that this is different from using public Google APIs, which generally only requires putting an authentication token (and possibly an API key) in a request header, and is quite well supported by the <a href="http://code.google.com/p/google-api-java-client">Google APIs Client Library</a>. First, some words on what motivated this whole exercise (may include some ranting; feel free to skip to the next section).<br /><h3>Android developer console API: DIY</h3><div>If you have ever published an application on the&nbsp;<strike>Android Market</strike>&nbsp;Google Play Store, you are familiar with the Android&nbsp;<a href="https://play.google.com/apps/publish/">developer console</a>. Besides letting you publish and update your apps, it also shows the number of total and active installs (notoriously broken and not to be taken too seriously, though it's been getting better lately), ratings and comments. Depending on how excited about the whole app publishing business you are, you might want to check it quite often to see how your app is doing, or maybe you just like hitting F5. Most people don't, however, so pretty much every developer at some point comes up with the heretical idea that there must be a better way: you should be able to check your app's statistics on your Android device (obviously!), you should get notified about changes automatically and maybe even be able to easily see if today's numbers are better than yesterday's at a glance. Writing such a tool should be fairly easy, so you start looking for an API. If your search ends up empty, it's not your search engine's fault: there is none!
So before you start scraping those pretty Web pages with your favourite P-language, you check if someone has done this before -- you might get a few hits, and, if you are lucky, even find the&nbsp;<a href="https://play.google.com/store/apps/details?id=com.github.andlyticsproject">Android app</a>.<br /><br />Originally developed by Timelappse, and now <a href="https://github.com/AndlyticsProject">open source</a>, Andlytics does all the things mentioned above, and more (and if you need yet another feature, consider&nbsp;<a href="https://github.com/AndlyticsProject/andlytics/pulls">contributing</a>). So how does it manage to do all of this without an API? Through blood, sweat and a lot of protocol <strike>reversing</strike> guessing. You see, the current developer console is built on GWT, which used to be Google's webstack-du-jour a few years back. GWT essentially consists of RPC endpoints at the server, called by a JavaScript client running in the browser. The serialization protocol in between is a custom one, and the specification is purposefully not publicly available (apparently, to allow for easier changes!?!). It has two main features: you need to know exactly what the transferred objects look like to be able to make any sense of it, and it was obviously designed by someone who used to write compilers for a living before they got into Web development ('string table' ring a bell?). Given the above, Andlytics was quite an accomplishment. Additionally, the developer console changing its protocol every other week and adding new features from time to time didn't really make it any easier to maintain. Eventually, the original developer had a bit too much GWT on his plate, and was kind enough to open source it, so others could share the pain.<br /><br />But there is a bright side to all this: <a href="http://android-developers.blogspot.jp/2012/10/new-google-play-developer-console.html">Developer Console v2</a>.
It was announced at this year's Google I/O to much applause, but was only made universally available a couple of weeks ago (sound&nbsp;<a href="http://android-developers.blogspot.jp/2012/09/google-play-services-and-oauth-identity.html">familiar</a>?). It is a work in progress, but is showing promise. And the best part: it uses perfectly readable (if a bit heavy on <code>null</code>'s) JSON to transport data! Naturally, there was much rejoicing at the Andlytics GitHub project. It was unanimously decided that the sooner we obliterate all traces of GWT, the better, and the next version should use the v2 console 'API'. Deciphering the protocol didn't take long, but it turned out that while logging in to the v1 console required only a ClientLogin token (see the next section for an explanation) straight out of Android's <code>AccountManager</code>, the new one was not so forgiving and the login flow was somewhat <a href="https://github.com/AndlyticsProject/andlytics/wiki/Developer-Console-v2---Login-Process">more complex</a>. Asking the user for their password and using it to log in was obviously doable, but no one would like that, so we needed to figure out how to log in using the Google credentials already cached on the device. The Android browser and Chrome are able to automatically log you in to the developer console without requiring your password, so it was clearly possible. The process is not really documented though, and that prompted this (maybe a bit too wide-cast) investigation. This finally leads us to the topic of this post: to show how to use cached Google account credentials for single sign-on.
Let's first see what standard ways are available to authenticate to Google's public services and APIs.<br /><h3>Google services authentication and authorization</h3></div><div>The official place to start when selecting an auth mechanism is the Google Accounts Authentication and Authorization page. It lists quite a few protocols, some open and some proprietary. If you research further, you will find that currently all but OAuth 2.0 and OpenID are considered deprecated, and using the proprietary ones is not recommended. However, a lot of services are still using older, proprietary protocols, so we will look into some of those as well. Most protocols also have two variations: one for Web applications and one for the so-called 'installed applications'. Web applications run in a browser, and are expected to be able to take advantage of all standard browser features: rich UI, free-form user interaction, cookie store and ability to follow redirects. Installed applications, on the other hand, don't have a native way to preserve session information, and may not have the full Web capabilities of a browser. Android native applications (mostly) fall in the 'installed applications' category, so let's see what protocols are available for them.<br /><h4>ClientLogin</h4>The oldest, and until recently most widely used, authorization protocol for installed applications is <a href="https://developers.google.com/accounts/docs/AuthForInstalledApps">ClientLogin</a>. It assumes the application has access to the user's account name and password and lets you get an authorization token for a particular service, which can be saved and used for accessing that service on behalf of the user. Services are identified by proprietary service names, for example 'cl' for Google Calendar and 'ah' for Google App Engine.
A (non-exhaustive) list of supported service names can be found in the Google Data API <a href="https://developers.google.com/gdata/faq#clientlogin">reference</a>. Here are a few&nbsp;Android-specific ones, not listed in the reference: 'ac2dm',&nbsp;'android', 'androidsecure', 'androiddeveloper', 'androidmarket' and 'youngandroid' (probably for the discontinued <a href="http://code.google.com/p/app-inventor-for-android/">App Inventor</a>).&nbsp;The token can be fairly long-lived (up to two weeks), but cannot be refreshed and the application needs to obtain a new token when it expires. Additionally, there is no way to validate the token short of accessing the associated service: if you get an OK HTTP status (200), it is still valid; if 403 is returned, you need to consult the additional error code and retry or get a new token. Another limitation is that ClientLogin tokens don't offer fine-grained access to a service's resources: access is all or nothing, and you cannot specify read-only access or access to a particular resource only. The biggest drawback for use in mobile apps, though, is that ClientLogin requires access to the actual user password. Therefore, if you don't want to force users to enter it each time a new token is required, it needs to be saved on the device, which poses various problems. As we saw in the <a href="http://nelenkov.blogspot.jp/2012/11/android-online-account-management.html">previous post</a>, in Android this is handled by GLS and the associated online service by storing an encrypted password or a master token on the device. Getting a token is as simple as calling the appropriate <code>AccountManager</code> method, which either returns a cached token or issues an API request to fetch a fresh one. Despite its many limitations, the protocol is easy to understand and straightforward to implement, so it has been widely used.
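Fetching such a token really is a single <code>AccountManager</code> call. Here is a rough Android-only sketch (it must run inside an <code>Activity</code>, requires the <code>GET_ACCOUNTS</code> permission, and error handling is abbreviated; the 'androiddeveloper' service name is one of the Android-specific ones listed above):

```java
// Android-only sketch, not a complete app: fetch a ClientLogin token
// for the developer console service from the cached Google account.
AccountManager am = AccountManager.get(this);
Account account = am.getAccountsByType("com.google")[0];

am.getAuthToken(account, "androiddeveloper", null /* options */, this,
        new AccountManagerCallback<Bundle>() {
            public void run(AccountManagerFuture<Bundle> future) {
                try {
                    Bundle result = future.getResult();
                    String token = result.getString(AccountManager.KEY_AUTHTOKEN);
                    // use the token, e.g. in an Authorization header
                } catch (Exception e) {
                    // OperationCanceledException, AuthenticatorException, IOException
                }
            }
        }, null /* handler */);
```

For OAuth 2.0 the token type would be an <code>"oauth2:&lt;scope&gt;"</code> string instead of a ClientLogin service name (an assumption based on the Google Login Service behavior described in the previous post), but the call itself is the same.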
It has been <a href="http://googledevelopers.blogspot.jp/2012/04/changes-to-deprecation-policies-and-api.html">officially deprecated</a> since April 2012, though, and apps using it are encouraged to migrate to OAuth 2.0, but this hasn't quite happened yet.&nbsp;</div><div><h4>OAuth 2.0</h4></div><div>No one likes OAuth 1.0 (except <a href="https://dev.twitter.com/docs/auth/oauth">Twitter</a>) and <a href="https://developers.google.com/accounts/docs/AuthSub">AuthSub</a> is not quite suited for native applications, so we will only look at the currently recommended OAuth 2.0 protocol. OAuth 2.0 has been in the works for quite some time, but it only recently became an official Internet <a href="http://tools.ietf.org/html/rfc6749">standard</a>. It defines different authorization 'flows', aimed at different use cases, but we will not try to present all of them here. If you are unfamiliar with the protocol, refer to one of the&nbsp;multiple posts that aim to explain it at a higher level, or just read the <a href="http://tools.ietf.org/html/rfc6749">RFC</a>&nbsp;if you need the details. And, of course, you can watch <a href="http://vimeo.com/52882780">this</a> for a slightly different point of view. We will only discuss how OAuth 2.0 relates to native mobile applications.<br /><br />The OAuth 2.0 specification defines four basic flows for getting an authorization token for a resource. The two that don't require the client (in our&nbsp;scenario&nbsp;an Android app) to directly handle user credentials (Google account user name and password), namely the <a href="http://tools.ietf.org/html/rfc6749#section-4.1">authorization code grant flow</a> and the <a href="http://tools.ietf.org/html/rfc6749#section-4.2">implicit grant flow</a>, both have a common step that needs user interaction.
They both require the authorization server (Google's) to authenticate the resource owner (the user of our Android app) and establish whether they grant or deny the access request for the specified scope (e.g., read-only access to profile information). In a typical Web application that runs in a browser, this is very straightforward to do: the user is redirected to an authentication page, then to an access grant page that basically says 'Do you allow app X to access data Y and Z?', and if they agree, another redirect, which includes an authorization token, takes them back to the original application. The browser simply needs to pass on the token in the next request to gain access to the target resource. Here's an <a href="https://developers.google.com/accounts/docs/OAuth2Login#simpleexample" style="font-size: 1em;">official</a> Google example that uses the implicit flow: follow <a href="https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile&amp;state=%2Fprofile&amp;redirect_uri=https%3A%2F%2Foauth2-login-demo.appspot.com%2Foauthcallback&amp;response_type=token&amp;client_id=812741506391.apps.googleusercontent.com" style="font-size: 1em;">this link</a> and grant access as requested to let the demo Web app display your Google profile information. With a native app, things are not that simple.
It can either<br /><ul><li>use the system browser to handle the permission grant step, which would typically involve the following steps:</li><ul><li>launch the system browser and hope that the user will finish the authentication and permission grant process</li><li>detect success or failure and extract the authorization token from the browser on success (from the window title, redirect URL or the cookie store)</li><li>ensure that after granting access, the user ends up back in your app</li><li>finally, save the token locally and use it to issue the intended Web API request</li></ul><li>embed a <code>WebView</code> or a similar control in the app's UI. Getting a token would generally involve these steps:</li><ul><li>in the app's UI, instruct the user what to do and load the login/authorization page</li><li>register for a 'page loaded' callback, and check for the final success URL each time it's called</li><li>when found, extract the token from the redirect URL or the <code>WebView</code>'s cookie jar and save it locally</li><li>finally use the token to send the intended API request</li></ul></ul>Neither is ideal: both are confusing to the user, and to implement the first one on Android you might even have to (temporarily) start a Web server (<code>redirect_uri</code> is set to <code>http://localhost</code> in the API console, so you can't just use a custom scheme). The second one is generally preferable, if not pretty: here's a (somewhat outdated) <a href="https://sites.google.com/site/oauthgoog/oauth-practices/mobile-apps-for-complex-login-systems/samplecode">overview</a> of what needs to be done and a more <a href="http://blog.doityourselfandroid.com/2011/08/06/oauth-2-0-flow-android/">recent example</a>&nbsp;with full source code.
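The <code>WebView</code> variant of the steps above can be sketched roughly like this (Android-only sketch; <code>AUTHORIZATION_URL</code>, <code>REDIRECT_URI</code> and the <code>access_token</code> fragment parameter are placeholders for whatever you registered in the API console, and the fragment-to-query rewrite is just one way to reuse <code>Uri</code> parsing):

```java
// Android-only sketch: capture the implicit-flow token from a WebView.
WebView webView = (WebView) findViewById(R.id.web_view);
webView.getSettings().setJavaScriptEnabled(true);
webView.setWebViewClient(new WebViewClient() {
    @Override
    public void onPageStarted(WebView view, String url, Bitmap favicon) {
        if (url.startsWith(REDIRECT_URI)) {
            // the implicit flow returns the token in the URL fragment;
            // rewrite '#' to '?' so Uri can parse it as a query parameter
            Uri uri = Uri.parse(url.replace('#', '?'));
            String token = uri.getQueryParameter("access_token");
            // save the token and dismiss the WebView
        }
    }
});
webView.loadUrl(AUTHORIZATION_URL); // the provider's login/grant page
```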
This integration complexity and UI&nbsp;impedance&nbsp;mismatch are the problems that <a href="https://developers.google.com/google-apps/tasks/oauth-and-tasks-on-android">OAuth 2.0 support</a>, initially via the <code>AccountManager</code> and more recently via <a href="http://android-developers.blogspot.jp/2012/09/google-play-services-and-oauth-identity.html">Google Play Services</a>, aims to solve. When using either of those, user authentication is implemented&nbsp;transparently&nbsp;by passing the saved master token (or encrypted password) to the server-side component, and instead of a <code>WebView</code> with a permission grant page, you get the Android native access grant dialog. If you approve, a second request is sent to convey this and the returned access token is directly delivered to the requesting app. This is essentially the same flow as for Web applications, but has the advantages that it doesn't require context switching from native to browser and back, and is much more user-friendly. Of course, it only works for Google accounts, so if you wanted to write, say, a Facebook client, you would still have to use a <code>WebView</code> to process the access permission grant and get an authorization token.<br /><br />Now that we have an idea what authentication methods are available, let's see if we can use them to access an online Google service that doesn't have a dedicated API.<br /><h3>Google Web properties single sign-on</h3><div>Being able to access multiple related, but separate services without needing to authenticate to each one individually is generally referred to as single sign-on (<a href="http://en.wikipedia.org/wiki/Single_sign-on">SSO</a>). There are multiple standard ways to accomplish this for different contexts, ranging from&nbsp;<a href="http://en.wikipedia.org/wiki/Kerberos_(protocol)">Kerberos</a>&nbsp;to&nbsp;<a href="http://en.wikipedia.org/wiki/Security_Assertion_Markup_Language">SAML</a>-based solutions.
We will use the term here in a narrower meaning: being able to use different Google services (Web sites or API's) after having authenticated to only one of them (including the Android login service). If you have a fairly fast Internet connection, you might not even notice it, but after you log in to, say, Gmail, clicking on YouTube links will take you to a completely different domain, and yet you will be able to comment on that neat cat video without having to log in again. If you have a somewhat slower connection and a wide display though, you may notice that there is a lot of redirecting and long parameter passing, with the occasional progress bar going on. What happens behind the scenes is that your current session cookies and authentication tokens are being exchanged for yet other tokens and more cookies, to let you seamlessly log in to that other site. If you are curious, you can observe the flow with Chrome's built-in developer tools (or similar plugins for other browsers), or check out our&nbsp;<a href="https://github.com/AndlyticsProject/andlytics/wiki/Developer-Console-v2---Login-Process">sample.</a>&nbsp;All of those requests and responses are essentially a proprietary SSO protocol (Google's), which is not really publicly documented anywhere, and, of course, is likely to change fairly often as Google rolls out upgrades to their services. With that said, there is a distinct pattern, and on a higher level you only have two main cases. We are deliberately ignoring the persistent cookie&nbsp;('Stay signed in')&nbsp;&nbsp;scenario for simplicity's sake.<br /><ul><li>Case 1: you haven't authenticated to any of the Google properties. If you access, for example, <code>mail.google.com</code> in that state you will get a login screen originating at <code>https://accounts.google.com/ServiceLogin</code> with parameters specifying the service you are trying to access ('mail' for Gmail) and where to send you after you are authenticated. 
After you enter your credentials, you will generally get redirected a few times around <code>accounts.google.com</code>, which will set a few session cookies, common (<code>Domain=.google.com</code>) for all services (always SID and LSID, plus a few more). The last redirect will be to the originally requested service and include an authentication token in the redirected location (usually specified with the <code>auth</code> parameter, e.g.: <code>https://mail.google.com/mail/?auth=DQAAA...</code>). The target service will validate the token and set a few more service-specific session cookies, restricted by domain and path, and with the <code>Secure</code> and <code>HttpOnly</code> flags set. From there, it might take a couple more redirects before you finally land at an actual content page.</li><li>Case 2: you have already authenticated to at least one service (Gmail in our example). In this state, if you open, say, Calendar, you will go through <code>https://accounts.google.com/ServiceLogin</code> again, but this time the login screen won't be shown. The accounts service will modify your SID and LSID cookies, maybe set a few new ones and finally redirect you to the original service, adding an authentication token to the redirect location. From there the process is similar: one or more service-specific cookies will be set and you will finally be redirected to the target content.</li></ul><div>Those flows obviously work well for browser-based logins, but since we are trying to do this from an Android app, without requiring user credentials or showing WebViews, we have a different scenario. We can easily get a ClientLogin or an OAuth 2.0 token from the AccountManager, but since we are not performing an actual Web login, we have no cookies to present. The question becomes: is there a way to log in with a standard token alone? 
Since tokens can be used with the data APIs (where available) of each service, they obviously contain enough information to authenticate us and grant access to the service's resources. What we need is a Web endpoint that will take our token and, in exchange, give us a set of cookies we can use to access the corresponding Web site.&nbsp;Clues and traces of such a service are scattered around the Internet, mostly in the code of unofficial Google client libraries and applications. Once we know it is definitely possible, the next problem becomes getting it to work with Android's AccountManager.</div></div><h3>Logging in using <code>AccountManager</code></h3></div><div>The only real documentation&nbsp;we could find, besides code comments and READMEs of the unofficial Google client applications mentioned above, is a short Chromium OS <a href="http://www.chromium.org/chromium-os/chromiumos-design-docs/login">design document</a>. It tells us that the standard (at the time) login API for installed applications, ClientLogin, alone is not enough to accomplish Web SSO, and outlines a three-step process that lets us exchange ClientLogin tokens for session cookies valid for a particular service:</div><div><ol><li>Get a ClientLogin token (this we can do via the <code>AccountManager</code>)</li><li>Pass it to <code>https://www.google.com/accounts/IssueAuthToken</code>, to get a one-time use, short-lived token that will authenticate the user to any service (the so-called 'ubertoken')</li><li>Finally, pass the ubertoken to <code>https://www.google.com/accounts/TokenAuth</code>, to exchange it for the full set of browser cookies we need to do SSO</li></ol>This outlines the process, but is a little light on the details. Fortunately, those can be found in the Chromium OS <a href="http://www.chromium.org/chromium-os/developer-guide#TOC-Get-the-Source">source code</a>, as well as a few other projects. 
After a fair bit of digging, here's what we uncovered:<br /></div><div><ol><li>To get the mythical ubertoken, you need to pass the SID and LSID cookies to the <code>IssueAuthToken</code> endpoint like this:<br /><pre>https://www.google.com/accounts/IssueAuthToken?service=gaia&amp;Session=false&amp;SID=sid&amp;LSID=lsid<br /></pre></li><li>The response will give you the ubertoken, which you pass to the <code>TokenAuth</code> endpoint along with the URL of the service you want to use:<br /><pre>https://www.google.com/accounts/TokenAuth?source=myapp&amp;auth=ubertoken&amp;continue=service-URL<br /></pre></li><li>If the token checks out OK, the response will give you a URL to load. If your HTTP client is set up to follow redirects automatically, once you load it, the needed cookies will be set automatically (just as in a browser), and you will finally land on the target site. As long as you keep the same session (which usually means the same HTTP client instance) you will be able to issue multiple requests, without needing to go through the authentication flow again.</li></ol><div>What remains to be seen is whether we can implement this on Android. As usual, it turns out that there is more than one way to do it:</div></div><h4>The hard way</h4><div>The straightforward way would be to simply implement the flow outlined above using your favourite HTTP client library. We chose to use Apache HttpClient, which supports session cookies and multiple requests using a single instance out of the box. The first step calls for the SID and LSID <i>cookies</i> though, not an authentication token: we need cookies to get a token, in order to get more cookies. Since Android's <code>AccountManager</code> can only give us authentication tokens, and not cookies, this might seem like a hopeless catch-22 situation. 
However, while browsing the <code>authtokens</code> table of the system's accounts database <a href="http://nelenkov.blogspot.jp/2012/11/android-online-account-management.html">earlier</a>, we happened to notice that it actually had a bunch of tokens with type <code>SID</code> and <code>LSID</code>. Our next step is, of course, to try to request those tokens via the <code>AccountManager</code> interface, and this happens to work as expected:<br /><br /><pre>String sid = am.getAuthToken(account, "SID", null, activity, null, null)<br /> .getResult().getString(AccountManager.KEY_AUTHTOKEN);<br />String lsid = am.getAuthToken(account, "LSID", null, activity, null, null)<br /> .getResult().getString(AccountManager.KEY_AUTHTOKEN);<br /></pre><br />Having gotten those, the rest is just a matter of issuing two HTTP requests (error handling omitted for brevity):<br /><br /></div><div><pre>String TARGET_URL = "https://play.google.com/apps/publish/v2/";<br />Uri ISSUE_AUTH_TOKEN_URL = <br /> Uri.parse("https://www.google.com/accounts/IssueAuthToken?service=gaia&amp;Session=false");<br />Uri TOKEN_AUTH_URL = Uri.parse("https://www.google.com/accounts/TokenAuth");<br /><br />String url = ISSUE_AUTH_TOKEN_URL.buildUpon().appendQueryParameter("SID", sid)<br /> .appendQueryParameter("LSID", lsid)<br /> .build().toString();<br />HttpPost getUberToken = new HttpPost(url);<br />HttpResponse response = httpClient.execute(getUberToken);<br />String uberToken = EntityUtils.toString(response.getEntity(), "UTF-8");<br />String getCookiesUrl = TOKEN_AUTH_URL.buildUpon()<br /> .appendQueryParameter("source", "android-browser")<br /> .appendQueryParameter("auth", uberToken)<br /> .appendQueryParameter("continue", TARGET_URL)<br /> .build().toString();<br />HttpGet getCookies = new HttpGet(getCookiesUrl);<br />response = httpClient.execute(getCookies);<br /><br />CookieStore cookieStore = httpClient.getCookieStore();<br />// check for service-specific session cookie<br />String adCookie = 
findCookie(cookieStore.getCookies(), "AD");<br />// fail if not found, otherwise get page content<br />String responseStr = EntityUtils.toString(response.getEntity(), "UTF-8");<br /></pre><br />This lets us authenticate to the Android Developer Console (version 2) site without requiring user credentials, and from here we can easily proceed to parse the result and use it in a <a href="https://github.com/AndlyticsProject/andlytics/tree/dev-console-v2">native app</a>&nbsp;(warning: work in progress!). The downside is that for this to work, the user has to grant access twice, for two cryptic-looking token types (SID and LSID).<br /><br />Of course, after writing all of this, it turns out that the stock Android browser already has <a href="https://github.com/android/platform_packages_apps_browser/blob/master/src/com/android/browser/GoogleAccountLogin.java">code</a> that does it, which we could have used or at least referenced from the very beginning. Better yet, this find leads us to an even easier way to accomplish our task.&nbsp;</div><h4>The easy way</h4><div>The easy way is found right next to the Browser class referenced above, in the <a href="https://github.com/android/platform_packages_apps_browser/blob/master/src/com/android/browser/DeviceAccountLogin.java">DeviceAccountLogin</a> class, so we can't really take any credit for this. It is hardly anything new, but some Googling suggests that it is neither widely known nor used much. You might have noticed that the Android browser is able to silently log you in to Gmail and friends when you use the mobile site. The way this is implemented is via the 'magic' token type <code>'weblogin:'</code>. If you use it along with the service name and URL of the site you want to access, it will do all of the steps listed above automatically and instead of a token will give you a full URL you can load to get automatically logged in to your target service. 
This magic URL is in the format shown below, and includes both the ubertoken and the URL of the target site, as well as the service name (this example is for the Android Developer Console; the line is broken for readability):<br /><br /><pre>https://accounts.google.com/MergeSession?args=service%3Dandroiddeveloper%26continue<br />%3Dhttps://play.google.com/apps/publish/v2/&amp;uberauth=APh...&amp;source=AndroidWebLogin<br /></pre><br />Here's how to get the <code>MergeSession</code> URL:<br /><br /><pre>String tokenType = "weblogin:service=androiddeveloper&amp;"<br />+ "continue=https://play.google.com/apps/publish/v2/";<br />String loginUrl = accountManager.getAuthToken(account, tokenType, false, null, null)<br /> .getResult().getString(AccountManager.KEY_AUTHTOKEN);<br /></pre><br />This is again for the Developer Console, but works for any Google site, including Gmail, Calendar and even the account management page. The only problem you might have is finding the service name, which is hardly obvious in some cases (e.g., 'grandcentral' for Google Voice and 'lh2' for Picasa).<br /><br />It takes only a single HTTP request from Android to get the final URL, which tells us that the token issuing flow is implemented on the server side. This means that you can also use the Google Play Services client library to issue a <code>weblogin:</code> 'token' (see screenshot below and note that unlike for OAuth 2.0 scopes, it shows the 'raw' token type). Probably goes without saying, but it also means that if you happen to come across someone's <code>accounts.db</code> file, all it takes to log in to their Google account(s) is two HTTPS requests: one to get the <code>MergeSession</code> URL, and one to log in to their accounts page. If you are thinking 'This doesn't affect me, I use Google two-factor authentication (2FA)!', you should know that in this case 2FA doesn't really help. Why? 
Because, since Android doesn't support 2FA, to register an account with the <code>AccountManager</code> you need to use an application-specific password (<i>Update: </i>On ICS and later, GLS will actually show a WebView and let you authenticate using your password and OTP. However, the OTP is not required once you get the master token). And once you have entered one, any tokens issued based on it will just work (until you revoke it), without requiring an additional code. So if you value your account, keep your master tokens close and revoke them as soon as you suspect that your phone might be lost or stolen. Better yet, consider a solution that lets you wipe it remotely (which might not work after you revoke the tokens, so be sure to check how it works before you actually need it).<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-kRDpkY63cC8/UJtnKpeFyiI/AAAAAAAAJic/FGp5t7MnJBY/s1600/gps-weblogin.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-kRDpkY63cC8/UJtnKpeFyiI/AAAAAAAAJic/FGp5t7MnJBY/s400/gps-weblogin.png" width="225" /></a></div><br />As we mentioned above, this is all ClientLogin based, which is <a href="http://googledevelopers.blogspot.jp/2012/04/changes-to-deprecation-policies-and-api.html">officially deprecated</a>, and might be going away soon (EOL scheduled for April 2013). But some of the Android Google data sync feeds still depend on ClientLogin, so if you use it you will probably be OK for a while. Additionally, since the <code>weblogin:</code> implementation is server-based, it might be updated to conform with the latest (OAuth 2.0-based?) infrastructure without changing the client-side interface. 
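Since the entire client-side contract is carried inside that single token-type string, the format is easy to capture in code. Below is a small pure-JVM sketch (plain Java, deliberately using no Android classes; the class and method names are ours, not part of any API) that builds and splits a <code>weblogin:</code> token type:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

public class WebloginTokenType {
    private static final String PREFIX = "weblogin:";

    // Build the 'magic' token type from a service name and target URL.
    public static String build(String service, String continueUrl) {
        return PREFIX + "service=" + service + "&continue=" + continueUrl;
    }

    // Split a weblogin: token type back into (service name, continue URL).
    public static Map.Entry<String, String> parse(String tokenType) {
        if (!tokenType.startsWith(PREFIX)) {
            throw new IllegalArgumentException("Not a weblogin: token type");
        }
        String params = tokenType.substring(PREFIX.length());
        // The continue URL may itself contain '&', so split on the
        // first '&continue=' marker only.
        int idx = params.indexOf("&continue=");
        String service = params.substring("service=".length(), idx);
        String continueUrl = params.substring(idx + "&continue=".length());
        return new SimpleEntry<>(service, continueUrl);
    }
}
```

In a real app the built string is simply passed as the token type to one of the <code>getAuthToken()</code> methods, as in the earlier snippet.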
In any case, watch the Android Browser and Chromium code to keep up to date.<br /><h3>Summary</h3>Google offers multiple online services, some with both a traditional browser-based interface and a developer-oriented API. Consequently, there are multiple ways to authenticate to those, ranging from form-based username and password login to authentication API's such as ClientLogin and OAuth 2.0. It is relatively straightforward to get an authentication token for services with a public API on Android, either using Android's native <code>AccountManager</code> interface or the newer Google Play Services extension. Getting the required session cookies to log in automatically to the Web sites of services that do not offer an API is, however, neither obvious nor documented. Fortunately, it is possible and very easy to do if you combine the special <code>'weblogin:'</code> token type with the service name and the URL of the site you want to use. The best available documentation about this is the Android Browser source code, which uses the same techniques to automatically log you in to Google sites using the account(s) already registered on your device.<br /><br />Moral of the story: interoperability is so much easier when you control all parties involved.<br /><br /></div>Nikolay Elenkovhttps://plus.google.com/117221066931981967754noreply@blogger.com11tag:blogger.com,1999:blog-2873091912851440312.post-52187944054062988402012-11-06T02:29:00.000+09:002012-12-03T13:23:59.729+09:00Android online account managementOur recent posts covered <a href="http://nelenkov.blogspot.jp/2012/10/emulating-pki-smart-card-with-cm91.html">NFC</a> and the <a href="http://nelenkov.blogspot.jp/2012/08/accessing-embedded-secure-element-in.html">secure</a> <a href="http://nelenkov.blogspot.jp/2012/08/android-secure-element-execution.html">element</a> as supported in recent Android versions, including <a href="http://nelenkov.blogspot.jp/2012/10/emulating-pki-smart-card-with-cm91.html">community 
ones</a>. In this two-part series we will&nbsp;take a completely different direction:&nbsp;managing online user accounts and&nbsp;accessing Web services. We will briefly discuss how Android manages user credentials and then show how to use cached authentication details to log in to most Google sites without requiring additional user input. Most of the functionality we shall discuss is hardly new -- it has been available at least since Android 2.0. But while there is ample&nbsp;documentation&nbsp;on how to use it, there doesn't seem to be a 'bigger picture' overview of how the pieces are tied together. This somewhat detailed investigation was prompted by trying to develop an app for a widely used Google service that unfortunately doesn't have an official API and struggling to find a way to log in to it using cached Google credentials. More on this in the <a href="http://nelenkov.blogspot.jp/2012/11/sso-using-account-manager.html">second part</a>; let's first see how Android manages accounts for online services.<br /><h3>Android account management</h3><div>Android 2.0 (API Level 5, largely non-existent, because it was quickly succeeded by 2.0.1, Level 6) introduced the concept of centralized account management with a public API. The central piece in the API is the <code><a href="http://developer.android.com/reference/android/accounts/AccountManager.html">AccountManager</a></code> class which, quote: 'provides access to a centralized registry of the user's online accounts. The user enters credentials (user name and password) once per account, granting applications access to online resources with "one-click" approval.' You should definitely read the full documentation of the class, which is quite extensive, for more details. Another major feature of the class is that it lets you get an authentication token for supported accounts, allowing third party applications to authenticate to online services without needing to handle the actual user password (more on this later). 
It also has no fewer than five methods that let you get an authentication token, all but one taking at least four parameters, so finding the one you need might take some time, and getting the parameters right somewhat longer. It might be a good idea to start with the synchronous <code><a href="http://developer.android.com/reference/android/accounts/AccountManager.html#blockingGetAuthToken(android.accounts.Account, java.lang.String, boolean)">blockingGetAuthToken()</a></code> and work your way from there once you have a basic working flow. On some older Android versions, the <code>AccountManager</code> would also monitor your SIM card and wipe cached credentials if you swapped cards, but fortunately this 'feature' has been <a href="http://code.google.com/p/android/issues/detail?id=17574">removed</a> in Android 2.3.4.<br /><br />The <code>AccountManager</code>, like most Android system API's, is just a facade for the <code>AccountManagerService</code>,&nbsp;which does the actual work. The service doesn't provide an implementation for any particular form of authentication though. It only acts as a coordinator for a number of pluggable <i>authenticator modules</i> for different <i>account types</i> (Google, Twitter, Exchange, etc.). The best part is that any application can register an authentication module by implementing an <a href="http://developer.android.com/reference/android/accounts/AbstractAccountAuthenticator.html">account authenticator</a> and related classes, if needed. Android Training has a <a href="http://developer.android.com/training/id-auth/custom_auth.html">tutorial</a> on the subject that covers the implementation details, so we will not discuss them here. 
Registering a new account type with the system lets you take advantage of a number of Android infrastructure services:<br /><ul><li>centralized credential storage in a system database</li><li>ability to issue tokens to third party apps</li><li>ability to take advantage of Android's automatic background synchronization</li></ul>One thing to note is that while credentials (usually user names and passwords) are stored in a central database (<code>/data/system/accounts.db</code>, or <code>/data/system/user/0/accounts.db</code> for the first system user on Jelly Bean and later) that is only accessible to system applications, the credentials themselves are in no way encrypted -- encryption is left to the authentication module to implement as necessary. If you have a rooted device (or use the emulator), listing the contents of the <code>accounts</code> table might be quite instructive: some of your passwords, especially for the stock Email application, will show up in clear text. While the <code>AccountManager</code> has a <code>getPassword()</code> method, it can only be used by apps with the same UID as the account's authenticator, i.e., only by classes in the same app (unless you are using <code>sharedUserId</code>, which is not recommended for non-system apps). If you want to allow third party applications to authenticate using your custom accounts, you have to issue some sort of authentication token, accessible via one of the many <code>getAuthToken()</code> methods. Once your account is registered with Android, if you implement an additional <i>sync adapter</i>, you can register to have it called at a specified interval and do background syncing for your app (one- or two-way), without needing to manage scheduling yourself. This is a very powerful feature that you get practically for free, and probably merits its own post. As we now have a basic understanding of authentication modules, let's see how they are used by the system. 
<br /><br />As we mentioned above, account management is coordinated by the&nbsp;<code>AccountManagerService</code>. It is a fairly complex piece of code (about 2500 lines in JB), most of the complexity stemming from the fact that it needs to communicate with services and apps that span multiple processes and threads within each process, and needs to take care of synchronization and delivering results to the right thread. If we abstract out the boilerplate code, what it does on a higher level is actually fairly straightforward:<br /><ul><li>on startup it queries the <code>PackageManager</code> to find all registered authenticators, and stores references to them in a map, keyed by account type</li><li>when you add an account of a particular type, it saves its type, username and password to the <code>accounts</code> table</li><li>if you get, set or reset the password for an account, it accesses or updates the <code>accounts</code> table accordingly</li><li>if you get or set user data for the account, it is fetched from or saved to the <code>extras</code> table</li><li>when you request a token for a particular account, things become a bit more interesting:</li><ul><li>if a token with the specified type has never been issued before, it shows a confirmation activity (see screenshot below) asking the user to approve access for the requesting application. If they accept, the UID of the requesting app and the token type are saved to the <code>grants</code> table.</li><li>if a grant already exists, it checks the <code>authtokens</code> table for tokens matching the request. If a valid one exists, it is returned.</li><li>if a matching token is not found, it finds the authenticator for the specified account type in the map and calls its&nbsp;<code>getAuthToken()</code> method to request a token. 
This usually involves the authenticator fetching the username and password from the&nbsp;<code>accounts</code> table (via the <code>getPassword()</code> method) and calling its respective online service to get a fresh token. When one is returned, it gets cached in the <code>authtokens</code> table and then returned to the requesting app (usually asynchronously via a callback).</li></ul><li>if you invalidate a token, it gets deleted from the <code>authtokens</code> table</li></ul><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-2H_e7jCKEyQ/UJdvyERleHI/AAAAAAAAJiE/3ce0q6Mguyk/s1600/gls-gb-gant-screen.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-2H_e7jCKEyQ/UJdvyERleHI/AAAAAAAAJiE/3ce0q6Mguyk/s400/gls-gb-gant-screen.png" width="240" /></a></div><div><br /></div><div>Now that we know how Android's account management system works, let's see how it is implemented for the most widely used account type.</div></div><h3>Google account management</h3><div>Usually the first thing you do when you turn on your brand new (or freshly wiped) 'Google Experience' Android device is to add a Google account. Once you authenticate successfully, you are offered the option to sync data from associated online services (GMail, Calendar, Docs, etc.) to your device. What happens behind the scenes is that an account of type 'com.google' is added via the <code>AccountManager</code>, and a bunch of Google apps start getting tokens for the services they represent. Of course, all of this works with the help of an authentication provider for Google accounts. Since it plugs into the standard account management framework, it works by registering an authenticator implementation and using it involves the sequence outlined above. However, it is also a little bit special. 
Three main things make it different:<br /><ul><li>it is not part of any particular app you can install, but is bundled with the system</li><li>a lot of the actual functionality is implemented on the server side</li><li>it does not store passwords in plain text on the device</li></ul>If you have ever installed a community ROM built off AOSP code, you know that in order to get GMail and other Google apps to work on your device, you need a few bits not found in AOSP. Two of the required pieces are the Google Services Framework (GSF) and the Google Login Service (GLS). The former provides common services to all Google apps such as centralized settings and feature toggle management, while the latter implements the authentication provider for Google accounts and will be the topic of this section.<br /><br />Google provides a multitude of online services (not all of which survive for long), and consequently a bunch of <a href="https://developers.google.com/accounts/docs/GettingStarted">different methods</a> to authenticate to those. Android's Google Login Service, however, doesn't call those public authentication API's directly, but goes through a dedicated online service, which lives at <code>android.clients.google.com</code>. It has endpoints both for authentication and authorization token issuing, as well as data feed (mail, calendar, etc.) synchronization, and more. As we shall see, the supported methods of authentication are somewhat different from those available via other public Google authentication API's. Additionally, it supports a few 'special' token types that greatly simplify some complex authentication flows.<br /><br />All of the above is hardly surprising: when you are dealing with online services, it is only natural to have as much as possible of the authentication logic on the server side, both for ease of maintenance and to keep it secure. 
Still, to kick-start it you need to store some sort of credentials on the device, especially when you support background syncing for practically everything and you cannot expect people to enter them manually. On-device credential management is one of the services GLS provides, so let's see how it is implemented. As mentioned above, GLS plugs into the system account framework, so cached credentials, tokens and associated extra data are stored in the system's <code>accounts.db</code> database, just as for other account types. Inspecting it reveals that Google accounts have a bunch of Base64-encoded strings associated with them. One of the user data entries (in the <code>extras</code> table) is helpfully labeled <code>sha1hash</code> (though it does not exist on all Android versions) and the password (in the <code>accounts</code> table) is a long string that takes different formats on different Android versions. Additionally, the GSF database has a <code>google_login_public_key</code> entry, which, when decoded, suspiciously resembles a 1024-bit RSA public key. Some more experimentation reveals that credential management works differently on pre-ICS and post-ICS devices. On pre-ICS devices, GLS stores an encrypted version of your password and posts it to the server-side endpoints both when authenticating for the first time (when you add the account) and when it needs to have a token for a particular service issued. On post-ICS devices, it only posts the encrypted password the first time, and gets a 'master token' in exchange, which is then stored on the device (in the <code>password</code> column of the <code>accounts</code> database). Each subsequent token request uses that master token instead of a password.<br /><br />Let's look into the cached credential strings a bit more. The encrypted password is 133 bytes long, and thus it is a fair bet that it is encrypted with the 1024-bit (128-byte) RSA public key mentioned above, with some extra data appended. 
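The size arithmetic behind that bet is easy to check with standard JDK crypto. The sketch below uses a throwaway, freshly generated 1024-bit key (not Google's actual one, of course) to show that an RSA ciphertext at that key size is always exactly 128 bytes, that OAEP-style randomized padding yields a different ciphertext for the same plaintext each time, and that a plaintext at the 86-byte OAEP/SHA-1 limit still fits in one block:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.Cipher;

public class OaepSizeDemo {
    /**
     * Returns {ciphertext length, 1 if two encryptions of the same
     * plaintext matched (they shouldn't), ciphertext length for an
     * 86-byte plaintext}.
     */
    public static int[] run() throws Exception {
        // Throwaway 1024-bit key; only the sizes matter here.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(1024);
        KeyPair kp = kpg.generateKeyPair();

        Cipher c = Cipher.getInstance("RSA/ECB/OAEPWithSHA-1AndMGF1Padding");
        c.init(Cipher.ENCRYPT_MODE, kp.getPublic());

        byte[] ct1 = c.doFinal("secret password".getBytes("UTF-8"));
        byte[] ct2 = c.doFinal("secret password".getBytes("UTF-8"));
        // OAEP padding is randomized, so ct1 and ct2 should differ.
        int same = Arrays.equals(ct1, ct2) ? 1 : 0;

        // Max plaintext for 1024-bit RSA-OAEP/SHA-1 is
        // k - 2*hLen - 2 = 128 - 2*20 - 2 = 86 bytes; 87 would throw.
        byte[] ctMax = c.doFinal(new byte[86]);

        return new int[] { ct1.length, same, ctMax.length };
    }
}
```

A 1024-bit RSA ciphertext is thus always exactly 128 bytes, which leaves 5 bytes in a 133-byte blob for the 'extra data' appended to the RSA block.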
Adding multiple accounts that use the same password produces different password strings (which is a good thing), but the first few bytes are always the same, even on different devices. It turns out that those identify the encryption key and are derived by hashing its raw value and taking the leading bytes of the resulting hash. At least from our limited sample of Android devices, it would seem that the RSA public key used is constant both across Android versions and accounts. We can safely assume that its private counterpart lives on the server side and is used to decrypt sent passwords before performing the actual authentication. The padding used is OAEP (with SHA1 and MGF1), which produces random-looking messages and is currently considered secure (at least when used in combination with RSA) against most advanced cryptanalysis techniques. It also has quite a bit of overhead, which in practice means that the GLS encryption scheme can encrypt at most 86 bytes of data. The outlined encryption scheme is not exactly military-grade and there is the issue of millions of devices most probably using the same key, but recovering the original password should be sufficiently hard to discourage most attackers. However, let's not forget that we also have a somewhat friendlier SHA1 hash available. It turns out it can be easily reproduced by 'salting' the Google account password with the account name (typically the GMail address) and doing a single round of SHA1. This is considerably easier to do and it wouldn't be too hard to precompute a bunch of hashes based on commonly used or potential passwords if you knew the target account name.<br /><br />Fortunately, newer versions of Android (4.0 and later) no longer store this hash on the device. Instead of the encrypted password + SHA1 hash combination, they store an opaque 'master token' (most probably some form of OAuth token) in the password column and exchange it for authentication tokens for different Google services. 
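The sha1hash scheme just described is equally easy to reproduce. The sketch below assumes the 'salt' is simply the account name prepended to the password -- the exact concatenation format is our guess for illustration and would need to be verified against a real <code>accounts.db</code>:

```java
import java.security.MessageDigest;

public class GlsSha1Hash {
    // One SHA-1 round over the account name ('salt') and password.
    // NOTE: the accountName + password order is an assumption; the
    // real GLS format may concatenate differently.
    public static String compute(String accountName, String password)
            throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] digest = md.digest((accountName + password).getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}
```

A single SHA-1 round is cheap, which is exactly what makes the precomputation attack above practical -- and presumably part of the reason 4.0 and later devices drop the hash in favour of storing only the opaque master token.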
It is not clear whether this token ever expires or if it is updated automatically. You can, however, revoke it manually by going to the <a href="https://accounts.google.com/b/0/IssuedAuthSubTokens">security settings</a> of your Google account and revoking access for the 'Android Login Service' (and a bunch of other stuff you never use while you are at it). This will force the user to re-authenticate on the device next time it tries to get a Google auth token, so it is also somewhat helpful if you ever lose your device and don't want people accessing your email, etc. if they manage to unlock it. The service authorization token issuing protocol uses some device-specific data in addition to the master token, so obtaining only the master token should not be enough to authenticate and impersonate a device (it can, however, be used to log in to your Google account on the Web; see the <a href="http://nelenkov.blogspot.jp/2012/11/sso-using-account-manager.html">second part</a> for details).<br /><h3>Google Play Services</h3></div><div><a href="https://developers.google.com/android/google-play-services/">Google Play Services </a>(we'll abbreviate it to GPS, although the actual package is <code>com.google.android.gms</code>, guess where the 'M' came from) was announced at this year's Google I/O as an easy-to-use platform that offers integration with Google products for third-party Android apps. It was actually rolled out only a month ago, so it's probably not very widely used yet. Currently it provides support for OAuth 2.0 authorization to Google API's 'with a good user experience and security', as well as some Google+ integration (sign-in and +1 button). Getting OAuth 2.0 tokens via the standard <code>AccountManager</code> interface has been supported for quite some time (though support was considered 'experimental') by using the special <code>'oauth2:scope'</code> token type syntax. 
However, it didn't work reliably across different Android builds, which bundle different GLS versions and thus behave slightly differently. Additionally, the permission grant dialog shown when requesting a token was not particularly user friendly, because in some cases it showed the raw OAuth 2.0 scope, which probably means little to most users (see the screenshot in the first section). While some human-readable aliases for certain scopes were introduced (e.g., 'Manage your tasks' for 'oauth2:https://www.googleapis.com/auth/tasks'), that solution was neither ideal nor universally available. GPS solves this by making token issuing a two-step process (newer GLS versions also use this process):</div><div><ol><li>the first request is much like before: it includes the account name, master token (or encrypted password pre-ICS) and requested service, in the <code>'oauth2:scope'</code> format. GPS adds two new parameters: the requesting app's package name and its signing certificate's SHA1 hash (more on this later). The response includes some human-readable details about the requested scope and the requesting application, which GPS shows in a permission grant dialog like the one shown below.</li><li>if the user grants the permission, this decision is recorded in the <code>extras</code> table in a proprietary format which includes the requesting app's package name, signing certificate hash, OAuth 2.0 scope and grant time (note that it is not using the <code>grants</code> table). GPS then resends the authorization request, setting the <code>has_permission</code> parameter to 1. On success this results in an OAuth 2.0 token and its expiry date in the response. 
Those are cached in the <code>authtokens</code> table in a similar format.</li></ol><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-rKLJdGIn-Cg/UJdwLQXX2hI/AAAAAAAAJiM/kGrHLGT4dOQ/s1600/gps-grant-screen.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-rKLJdGIn-Cg/UJdwLQXX2hI/AAAAAAAAJiM/kGrHLGT4dOQ/s400/gps-grant-screen.png" width="225" /></a></div><div><br /></div><div>To be able to actually use a Google API, you need to register your app's package name and signing key in Google's <a href="https://code.google.com/apis/console">API console</a>. The registration lets services that validate the token ask Google which app the token was issued for, and thus identify the calling app. This has one subtle but important side-effect: you don't have to embed an API key in your app and send it with every request. Of course, for a third-party published app it is easy to find out both the package name and the signing certificate, so it is not particularly hard to get a token issued in the name of some other app (not possible via the official API, of course). We can assume that there are some additional checks on the server side that prevent this, but theoretically, if you used such a token you could, for example, exhaust a third-party app's API request quota by issuing a bunch of requests over a short period of time.&nbsp;</div></div><div><br /></div><div>The actual GPS implementation seems to reuse much of the original Google Login Service authentication logic, including the password encryption method, which is still used on pre-ICS devices (the protocol is, after all, mostly the same, and it needs to be able to use pre-existing accounts). On top of that it adds better OAuth 2.0 support, a version-specific account selection dialog and some prettier and more user-friendly permission grant UIs. 
The GPS app has the Google apps shared UID, so it can directly interact with other proprietary Google services, including GLS and GSF. This allows it, among other things, to directly read and write Google account credentials and tokens in the accounts database. As can be expected, GPS runs in a remote service that the client library you link into your app accesses. The major selling point over the legacy <code>AccountManager</code> API is that while its underlying authenticator modules (GLS and GSF) are part of the system, and as such cannot be updated without an OTA, GPS is a user-installable app that can be easily updated via Google Play. Indeed, it is advertised as auto-updating (much like the Google Play Store client), so app developers presumably won't have to rely on users to update it if they want to use newer features (unless GPS is disabled altogether, of course). This update mechanism is meant to provide 'agility in rolling out new platform capabilities', but considering how much time the initial roll-out took, it remains to be seen how agile the whole thing will turn out to be. Another thing to watch out for is feature bloat: besides OAuth 2.0 support, GPS currently includes G+ and AdMob related features, and while both are indeed Google-provided services, they are totally unrelated. Hopefully, GPS won't turn into an 'everything Google plus the kitchen sink' type of library, delaying releases even more. With all that said, if your app uses OAuth 2.0 tokens to authenticate to Google APIs, which is currently the preferred method (ClientLogin, OAuth 1.0 and AuthSub have been <a href="http://googledevelopers.blogspot.jp/2012/04/changes-to-deprecation-policies-and-api.html">officially deprecated</a>), definitely consider using GPS over 'raw' <code>AccountManager</code> access.<br /><h3>Summary</h3></div><div>Android provides a centralized registry of user online accounts via the <code>AccountManager</code> class. 
It lets you both get tokens for existing accounts without having to handle the actual credentials and register your own account type, if needed. Registering an account type gives you access to powerful system features, such as authentication token caching and automatic background synchronization. 'Google experience' devices come with built-in support for Google accounts, which lets third-party apps access Google online services without needing to directly request authentication information from the user. The latest addition to this infrastructure is the recently released Google Play Services app and companion client library, which aim to make it easy to use OAuth 2.0 from third-party applications.&nbsp;</div><div><br /></div><div>We've now presented an overview of how the account management system works, and the next step is to show how to actually use it to access a real online service. That will be the topic of the <a href="http://nelenkov.blogspot.jp/2012/11/sso-using-account-manager.html">second article</a> in the series.&nbsp;</div>Nikolay Elenkovhttps://plus.google.com/117221066931981967754noreply@blogger.com2tag:blogger.com,1999:blog-2873091912851440312.post-83118220349432227382012-10-03T01:19:00.001+09:002012-10-04T17:01:10.074+09:00Emulating a PKI smart card with CyanogenMod 9.1We discussed the embedded <a href="http://nelenkov.blogspot.jp/2012/08/accessing-embedded-secure-element-in.html" target="_blank">secure element</a> available in recent Android devices, its <a href="http://nelenkov.blogspot.jp/2012/08/android-secure-element-execution.html" target="_blank">execution environment</a> and how <a href="http://nelenkov.blogspot.jp/2012/08/exploring-google-wallet-using-secure.html" target="_blank">Google Wallet</a> makes use of it in the last series of articles. 
We also saw that unless you have a contract with Google and have them (or the TSM they use) distribute your applets to supported devices, there is currently no way to install anything on the embedded secure element. We briefly mentioned that <a href="http://www.cyanogenmod.com/" target="_blank">CyanogenMod</a> 9.1 <a href="http://www.cyanogenmod.com/blog/cyanogenmod9-1-and-simplytapp">supports</a> software card emulation, which is a more practical way to create your own NFC-enabled applications. We'll now see how software card emulation works and show how you can use it to create a simple PKI 'applet' that can be accessed via NFC from any machine with a contactless card reader.<br /><h3>Software card emulation</h3><div>We already know that if the embedded secure element is put in virtual mode it is visible to external readers as a contactless smartcard. Software card emulation (sometimes referred to as Host Card Emulation or HCE) does something very similar, but instead of routing commands received by the NFC controller to the SE, it delivers them to the application processor, where they can be processed by regular applications. Responses are then sent via NFC to the reader, and thus your app takes the role of a virtual contactless 'smartcard' (refer to <a href="http://www.medien.ifi.lmu.de/iwssi2012/papers/iwssi-spmu2012-roland.pdf" target="_blank">this paper</a> for a more thorough discussion). Software card emulation is currently available on BlackBerry phones, which offer standard <a href="http://www.blackberry.com/developers/docs/7.1.0api/net/rim/device/api/io/nfc/emulation/package-summary.html" target="_blank">APIs</a> for apps to register with the OS and process card commands received over NFC. Besides a BlackBerry device, you can also use some contactless readers in emulation mode to emulate NFC tags or a full-featured smart card. 
Stock Android doesn't (yet) support software card emulation, even though the NFC controllers in most current phones have this capability. Fortunately, recent versions of <a href="http://www.cyanogenmod.com/" target="_blank">CyanogenMod</a> integrate a <a href="http://r.cyanogenmod.com/#/q/status:merged+owner:doug,n,z">set of patches</a> that unlock this functionality of the <a href="http://www.nxp.com/documents/leaflet/75016890.pdf" target="_blank">PN544</a> NFC controller found in recent Nexus (and other) devices. Let's see how it works in a bit more detail.</div><div><h3>CyanogenMod implementation</h3>Android doesn't provide a direct interface to its NFC subsystem to user-level apps. Instead, it leverages the OS's intent and intent filter infrastructure to let apps register for a particular NFC event (<code><a href="http://developer.android.com/reference/android/nfc/NfcAdapter.html#ACTION_NDEF_DISCOVERED">ACTION_NDEF_DISCOVERED</a></code>, <code><a href="http://developer.android.com/reference/android/nfc/NfcAdapter.html#ACTION_TAG_DISCOVERED">ACTION_TAG_DISCOVERED</a></code> and <code><a href="http://developer.android.com/reference/android/nfc/NfcAdapter.html#ACTION_TECH_DISCOVERED">ACTION_TECH_DISCOVERED</a></code>) and specify additional filters based on tag type or features. When a matching NFC tag is found, interested applications are notified and one of them is selected to handle the event, either by the user or automatically if it is in the foreground and has registered for <a href="http://developer.android.com/guide/topics/connectivity/nfc/advanced-nfc.html#foreground-dispatch" target="_blank">foreground dispatch</a>. 
The app can then access a generic <code><a href="http://developer.android.com/reference/android/nfc/Tag.html">Tag</a></code> object representing the target NFC device and use it to retrieve a concrete <a href="http://developer.android.com/reference/android/nfc/tech/package-summary.html" target="_blank">tag technology</a> interface such as <code><a href="http://developer.android.com/reference/android/nfc/tech/MifareClassic.html">MifareClassic</a></code> or <code><a href="http://developer.android.com/reference/android/nfc/tech/IsoDep.html">IsoDep</a></code> that lets it communicate with the device and use its native features. Card emulation support in CyanogenMod doesn't attempt to change or amend Android's NFC architecture, but integrates with it by adding support for two new tag technologies: <code><a href="https://github.com/CyanogenMod/android_frameworks_base/blob/ics/core/java/android/nfc/tech/IsoPcdA.java">IsoPcdA</a></code> and <code><a href="https://github.com/CyanogenMod/android_frameworks_base/blob/ics/core/java/android/nfc/tech/IsoPcdB.java">IsoPcdB</a></code>. 'ISO' here is the <a href="http://www.iso.org/">International Organization for&nbsp;Standardization</a>, which, among other things, is responsible for defining NFC communication standards. 'PCD' stands for Proximity Coupling Device, which is simply ISO-speak for a contactless reader. The two classes cover the two main NFC flavours in use today (outside of Japan, at least) -- Type A (based on NXP technology) and Type B (based on Motorola technology). As you might have guessed by now, the patch reverses the usual roles in the Android NFC API: the external contactless reader is presented as a 'tag', and the 'commands' you send from the phone are actually replies to the reader-initiated communication. 
If you have Google Wallet installed the embedded secure element is activated as well, so touching the phone to a reader would produce a potential conflict: should it route commands to the embedded SE or to applications that can handle <code>IsoPcdA/B</code> tags? The CyanogenMod patch handles this by using Android's native foreground dispatch mechanism: software card emulation is only enabled for apps that register for foreground dispatch of the relevant tag technologies. So unless you have an emulation app in the foreground, all communication is routed to Google Wallet (i.e., the embedded SE). In practice though, starting up Google Wallet on ROMs with the current version of the patch might block software card emulation, so it works best if Google Wallet is not installed. A fix is <a href="http://r.cyanogenmod.com/#/c/23955/">available</a>, but not yet merged in CyanogenMod master (Updated: now merged, should roll out with CM10 nightlies).<br /><br />Both of the newly introduced tag technologies extend <code>BasicTagTechnology</code> and offer methods to open, check and close the connection to the reader. They add a public <code>transceive()</code> method that acts as the main communication interface: it receives reader commands and sends the responses generated by your app to the PCD. 
Here's a summary of the interface: <br /><br /><pre>abstract class BasicTagTechnology implements TagTechnology {<br /> public boolean isConnected() {...}<br /> <br /> public void connect() throws IOException {...}<br /> <br /> public void reconnect() throws IOException {...}<br /> <br /> public void close() throws IOException {...}<br /><br /> byte[] transceive(byte[] data, boolean raw) throws IOException {...}<br />}<br /></pre><br />Now that we know (basically) how it works, let's try to use software card emulation in practice.<br /><h3>Emulating a contactless card</h3></div><div>As discussed in the previous section, to be able to respond to reader commands we need to register our app for one of the PCD tag technologies and enable foreground dispatch. This is no different from handling stock-supported NFC technologies. We need to add an intent filter and a reference to a technology filter file to the app's manifest:<br /><br /><pre>&lt;activity android:label="@string/app_name" <br /> android:launchMode="singleTop"<br /> android:name=".MainActivity"&gt;<br /> &lt;intent-filter&gt;<br /> &lt;action android:name="android.nfc.action.TECH_DISCOVERED" /&gt;<br /> &lt;/intent-filter&gt;<br /><br /> &lt;meta-data android:name="android.nfc.action.TECH_DISCOVERED" <br /> android:resource="@xml/filter_nfc" /&gt;<br />&lt;/activity&gt;<br /></pre><br />We register the <code>IsoPcdA</code> tag technology in <code>filter_nfc.xml</code>: <br /><br /><pre>&lt;resources&gt;<br /> &lt;tech-list&gt;<br /> &lt;tech&gt;android.nfc.tech.IsoPcdA&lt;/tech&gt;<br /> &lt;/tech-list&gt;<br />&lt;/resources&gt;<br /></pre><br />And then use the same technology list to register for foreground dispatch in our activity: <br /><br /><pre>public class MainActivity extends Activity {<br /><br /> public void onCreate(Bundle savedInstanceState) {<br /> super.onCreate(savedInstanceState);<br /> adapter = NfcAdapter.getDefaultAdapter(this);<br /> pendingIntent = PendingIntent.getActivity(this, 0, new Intent(this,<br /> getClass()).addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP), 0);<br /> filters = new IntentFilter[] { new IntentFilter(<br /> NfcAdapter.ACTION_TECH_DISCOVERED) };<br /> techLists = new String[][] { { "android.nfc.tech.IsoPcdA" } };<br /> }<br /> <br /> public void onResume() {<br /> super.onResume();<br /> if (adapter != null) {<br /> adapter.enableForegroundDispatch(this, pendingIntent, filters,<br /> techLists);<br /> }<br /> }<br /><br /> public void onPause() {<br /> super.onPause();<br /> if (adapter != null) {<br /> adapter.disableForegroundDispatch(this);<br /> }<br /> }<br /><br />}<br /></pre><br />With this in place, each time the phone is touched to an active reader, we will get notified via the activity's <code>onNewIntent()</code> method. We can get a reference to the <code>Tag</code> object from the intent's extras as usual. However, since neither <code>IsoPcdA</code> nor its superclass is part of the public SDK, we need to either build the app as part of CyanogenMod's source or, as usual, resort to reflection. We choose to create a simple wrapper class that calls <code>IsoPcdA</code> methods via reflection, after getting an instance using the static <code>get()</code> method like this: <br /><br /><pre>Class cls = Class.forName("android.nfc.tech.IsoPcdA");<br />Method get = cls.getMethod("get", Tag.class);<br />// this returns an IsoPcdA instance<br />tagTech = get.invoke(null, tag);<br /></pre><br />Now after we <code>connect()</code> we can use the <code>transceive()</code> method to reply to reader commands. Note that since the API is not event-driven, you won't get notified of reader commands automatically. You need to send a dummy payload to retrieve the first reader command APDU. This can be a bit awkward at first, but you just have to keep in mind that each time you call <code>transceive()</code> the next reader command comes in via the return value. 
Unfortunately this means that after you send your last response, the thread will block on I/O waiting for <code>transceive()</code> to return, which only happens after the reader sends its next command, and that might never happen. The thread will only stop if an exception is thrown, such as when communication is lost after separating the phone from the reader. Needless to say, this makes writing robust code a bit tricky. Here's how to start off the communication: <br /><br /><pre>// send dummy data to get first command APDU<br />// at least two bytes to keep smartcardio happy<br />byte[] cmd = transceive(new byte[] { (byte) 0x90, 0x00 });<br /></pre></div><div><h3>Writing a virtual PKI applet</h3>Software card emulation in CyanogenMod is limited to ISO 14443-4 (used mostly for APDU-based communication), which means that you cannot emulate cards that operate on a lower-level protocol such as MIFARE Classic. This rules out using your phone to open door locks that rely on the card UID (the UID of the emulated card is random) or to get a free ride on the subway (you cannot clone a traffic card with software alone), but it does allow emulating payment (EMV) cards, which use an APDU-based protocol. In fact, the first commercial application (<a href="http://www.simplytapp.com/about.html" target="_blank">company</a> started by patch author Doug Yeager) that makes use of Android software card emulation, <a href="https://play.google.com/store/apps/details?id=com.tapp" target="_blank">Tapp</a>, emulates a contactless Visa card and does all the necessary processing 'in the cloud', i.e., on a remote server. 
Payment applications are the ones most likely to be developed using software card emulation because of the potentially higher revenue: at least one other company has announced that it is building a <a href="http://www.nfcworld.com/2012/09/25/318059/inside-secure-to-offer-cloud-based-nfc-secure-element-solution/" target="_blank">cloud-based NFC secure element</a>. We, however, will look at a different use case: PKI.</div><br />PKI has been getting a lot of bad press due to major CAs getting compromised every other month, and it has been stated multiple times that it <a href="http://www.imperialviolet.org/2011/03/18/revocation.html" target="_blank">doesn't really work</a> on the Internet. It is, however, still a valid means of authentication in a corporate environment, where personal certificates are used for anything from desktop login to remote VPN access. Certificates and associated private keys are often distributed on smart cards, sometimes contactless or dual-interface. Since Android now has standard <a href="http://nelenkov.blogspot.jp/2011/11/using-ics-keychain-api.html" target="_blank">credential storage</a> which can be <a href="http://nelenkov.blogspot.jp/2012/07/jelly-bean-hardware-backed-credential.html" target="_blank">protected by hardware</a> on supported devices, we could use an Android phone with software card emulation in place of a PKI card. Let's try to write a simple PKI 'applet' and an associated host-side client application to see if this is indeed feasible.<br /><div><br /></div><div>A PKI JavaCard applet can offer various features, but the essential ones are:<br /><ul><li>generating or importing keys</li><li>importing a public key certificate</li><li>user authentication (PIN verification)</li><li>signing and/or encryption with card keys</li></ul>Since we will be using Android's credential storage to save keys and certificates, we already have the first two features covered. 
All we need to implement is PIN verification and signing (which is actually sufficient for most applications, including desktop login and SSL client authentication). If we were building a real solution, we would implement a well-known applet protocol, such as one of a major vendor's or an open one like the <a href="http://www.linuxnet.com/musclecard/files/mcardprot-1.2.1.pdf" target="_blank">MUSCLE card</a> protocol, so that we could take advantage of desktop tools and cryptographic libraries (Windows CSPs and PKCS#11 modules, such as <a href="http://www.opensc-project.org/opensc" target="_blank">OpenSC</a>). But since this is a proof-of-concept exercise, we can get away with defining our own mini-protocol and only implementing the bare minimum. We define the applet AID (quite arbitrary, and it may already be in use by someone else, but there is really no way to check) and two commands: <code>VERIFY PIN</code> and <code>SIGN DATA</code>. The protocol is summarized in the table below:<br /><br /><table class="table"><caption>Virtual PKI applet protocol</caption> <thead><tr> <th>Command</th><th>CLA</th><th>INS</th><th>P1</th><th>P2</th><th>Lc</th><th>Data</th><th>Response</th> </tr></thead> <tbody><tr> <td>SELECT</td><td>00</td><td>A4</td><td>04</td><td>00</td><td>06</td><td>AID: A00000000101</td><td>9000/6985/6A82/6F00</td> </tr><tr> <td>VERIFY PIN</td><td>80</td><td>01</td><td>XX</td><td>XX</td><td>PIN length (bytes)</td><td>PIN characters (ASCII)</td><td>9000/6982/6985/6F00</td> </tr><tr> <td>SIGN DATA</td><td>80</td><td>02</td><td>XX</td><td>XX</td><td>Signed data length (bytes)</td><td>Signed data</td><td>9000+signature bytes/6982/6985/6F00</td> </tr></tbody> </table></div><br /><div>The applet behaviour is rather simple: it returns a generic error if you try to send any commands before selecting it, and then requires you to authenticate by verifying the PIN before signing data. 
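To make the protocol concrete, here is a small host-side sketch (plain Java; the helper names are our own) that builds the command APDUs from the table. Note the single-byte Lc field, which is what limits command data to 255 bytes:

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class PkiAppletApdus {

    // 6-byte AID from the protocol table
    static final byte[] AID = { (byte) 0xA0, 0x00, 0x00, 0x00, 0x01, 0x01 };

    // Builds a case-3 APDU: CLA | INS | P1 | P2 | Lc | data
    static byte[] apdu(int cla, int ins, int p1, int p2, byte[] data) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(cla);
        out.write(ins);
        out.write(p1);
        out.write(p2);
        out.write(data.length); // single-byte Lc, so at most 255 bytes
        out.write(data, 0, data.length);
        return out.toByteArray();
    }

    static byte[] select() { return apdu(0x00, 0xA4, 0x04, 0x00, AID); }

    static byte[] verifyPin(String pin) {
        return apdu(0x80, 0x01, 0x00, 0x00,
                pin.getBytes(StandardCharsets.US_ASCII));
    }

    static byte[] signData(byte[] data) {
        return apdu(0x80, 0x02, 0x00, 0x00, data);
    }

    static String hex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02X", b));
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(hex(select()));          // 00A4040006A00000000101
        // same bytes as the VERIFY PIN command in the client test run
        System.out.println(hex(verifyPin("1234"))); // 800100000431323334
    }
}
```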
To implement the applet, we first handle new connections from a reader in the main activity's <code>onNewIntent()</code> method, where we receive an <code>Intent</code> containing a reference to the <code>IsoPcdA</code> object we use to communicate with the PCD. We verify that the request comes from a card reader, create a wrapper for the <code>Tag</code> object, <code>connect()</code> to the reader and finally pass control to the <code>PkiApplet</code> by calling its <code>start()</code> method. <br /><br /><pre>Tag tag = (Tag) intent.getExtras().get(NfcAdapter.EXTRA_TAG);<br />List&lt;String&gt; techList = Arrays.asList(tag.getTechList());<br />if (!techList.contains("android.nfc.tech.IsoPcdA")) {<br /> return;<br />}<br /><br />TagWrapper tw = new TagWrapper(tag, "android.nfc.tech.IsoPcdA");<br />if (!tw.isConnected()) {<br /> tw.connect();<br />}<br /><br />pkiApplet.start(tw);<br /></pre><br />The applet in turn starts a background thread that reads commands as they arrive and exits if communication with the reader is lost. The implementation is not terribly robust, but it works well enough for our POC:<br /><br /><pre>Runnable r = new Runnable() {<br /> public void run() {<br /> try {<br /> // send dummy data to get first command APDU<br /> byte[] cmd = transceive(new byte[] { (byte) 0x90, 0x00 });<br /> do {<br /> // process commands<br /> } while (cmd != null &amp;&amp; !Thread.interrupted());<br /> } catch (IOException e) {<br /> // connection with reader lost<br /> return;<br /> }<br /> }<br />};<br /><br />appletThread = new Thread(r);<br />appletThread.start();<br /></pre><br /></div><br /><div>Before the applet can be used it needs to be 'personalized'. In our case this means importing the private key the applet will use for signing and setting a PIN. 
To initialize the private key we import a PKCS#12 file using the <code><a href="http://developer.android.com/reference/android/security/KeyChain.html">KeyChain</a></code> API and store the private key alias in shared preferences. The PIN is protected using 5000 iterations of PBKDF2 with a 64-bit salt. We store the resulting PIN hash and the salt in shared preferences as well, and repeat the calculation against the PIN we receive from applet clients to check if it matches. This avoids storing the PIN in clear text, but keep in mind that a short numeric-only PIN can still be brute-forced in minutes (the app doesn't restrict PIN size; it can be up to 255 characters (bytes), the maximum size of APDU data). Here's what our 'personalization' UI looks like:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-vcFW4yih0ew/UGm204TXwJI/AAAAAAAAJLI/Vw_5OHaVrxA/s1600/pki-emulator-not-initialized.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-vcFW4yih0ew/UGm204TXwJI/AAAAAAAAJLI/Vw_5OHaVrxA/s400/pki-emulator-not-initialized.png" width="225" /></a></div><br />To make things simple, applet clients send the PIN in clear text, so it could theoretically be sniffed if NFC traffic is intercepted. This can be avoided by using some sort of challenge-response mechanism, similar to what 'real' (e.g., EMV) cards do. Once the PIN is verified, clients can send the data to be signed and receive the signature bytes in the response. Since the size of APDU data is limited to 255 bytes (due to the single-byte length field) and the applet doesn't support any sort of chaining, we are limited to using RSA keys up to 1024 bits long (a 2048-bit key produces a 256-byte signature). The actual applet implementation is quite straightforward: it does some minimal checks on received APDU commands, gets the PIN or signed data and uses it to execute the corresponding operation. 
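The PIN-protection scheme described above can be sketched in a few lines of plain Java. The derived key length and the HmacSHA1 PRF are implementation details we haven't discussed, so treat them as assumptions; only the 5000 iterations and the 64-bit random salt are fixed by the design.

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PinHasher {

    static final int ITERATIONS = 5000;
    static final int SALT_BYTES = 8;  // 64-bit salt
    static final int KEY_BITS = 160;  // assumed output size

    static byte[] newSalt() {
        byte[] salt = new byte[SALT_BYTES];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // PBKDF2 over the PIN with the stored salt; what gets persisted is
    // this hash plus the salt, never the PIN itself.
    static byte[] hashPin(String pin, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(pin.toCharArray(), salt, ITERATIONS, KEY_BITS);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                .generateSecret(spec).getEncoded();
    }

    // Verification repeats the derivation and compares the results
    // (a constant-time comparison would be preferable in real code).
    static boolean checkPin(String pin, byte[] salt, byte[] stored) throws Exception {
        return Arrays.equals(hashPin(pin, salt), stored);
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = newSalt();
        byte[] stored = hashPin("1234", salt);
        System.out.println(checkPin("1234", salt, stored)); // true
        System.out.println(checkPin("4321", salt, stored)); // false
    }
}
```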
It then selects a status code based on operation success or failure and returns it along with the result data in the response APDU. See the <a href="https://github.com/nelenkov/virtual-pki-card/tree/master/se-emulator">source code</a> for details.<br /><h3>Writing a host-side applet client</h3>Now that we have an applet, we need a host-side client to actually make use of it. As we mentioned above, for a real-world implementation this would be a standard PKCS#11 or CSP module for the host operating system that plugs into PKI-enabled applications such as browsers or email and VPN clients. We'll however create our own test Java client using the <a href="http://jcp.org/en/jsr/detail?id=268" target="_blank">Smart Card I/O API</a> (JSR 268). This API comes with Sun/Oracle Java SDKs since version 1.6 (Java 6), but is not officially a part of the SDK, because it is apparently not 'of sufficiently wide interest' according to the JSR expert group (committee BS at its best!). Eclipse goes as far as to flag it as a 'forbidden reference API', so you'll need to change error handling preferences to compile in Eclipse. In practice though, JSR 268 is a standard API that works fine on Windows, Solaris, Linux and Mac OS X (you may have to set the <code>sun.security.smartcardio.library</code> system property to point to your system's PC/SC library), so we'll use it for our POC application. The API comes with classes representing card readers, the communication channel, and command and response APDUs. After we get a reference to a reader and then a card, we can create a channel and exchange APDUs with the card. Our PKI applet client is a basic command line program that waits for card availability and then simply sends the <code>SELECT</code>, <code>VERIFY PIN</code> and <code>SIGN DATA</code> commands in sequence, bailing out on any error (card response with a status different from <code>0x9000</code>). 
The PIN is specified in the first command line parameter and if you pass a certificate file path as the second one, it will use it to verify the signature it gets from the applet. See <a href="https://github.com/nelenkov/virtual-pki-card/tree/master/se-pki-client">full code</a> for details, but here's how to connect to a card and send a command:<br /><br /><pre>TerminalFactory factory = TerminalFactory.getDefault();<br />CardTerminals terminals = factory.terminals();<br /><br />Card card = waitForCard(terminals);<br />CardChannel channel = card.getBasicChannel();<br />CommandAPDU cmd = new CommandAPDU(CMD);<br />ResponseAPDU response = channel.transmit(cmd);<br /><br />Card waitForCard(CardTerminals terminals)<br /> throws CardException {<br /> while (true) {<br /> for (CardTerminal ct : terminals<br /> .list(CardTerminals.State.CARD_INSERTION)) {<br /> return ct.connect("*");<br /> }<br /> terminals.waitForChange();<br /> }<br />}<br /></pre><br />And to prove that this all works, here's the output from a test run of the client application: <br /><br /><pre>$ ./run.sh 1234 mycert.crt <br />Place phone/card on reader to start<br />--&gt; 00A4040006A0000000010101<br />&lt;-- 9000<br />--&gt; 800100000431323334<br />&lt;-- 9000<br />--&gt; 80020000087369676E206D6521<br />&lt;-- 11C44A5448... 9000 (128)<br /><br />Got signature from card: 11C44A5448...<br />Will use certificate from 'mycert.crt' to verify signature<br /> Issuer: CN=test-CA, ST=Tokyo, C=JP<br /> Subject: CN=test, ST=Tokyo, C=JP<br /> Not Before: Wed Nov 30 00:04:31 JST 2011<br /> Not After: Thu Nov 29 00:04:31 JST 2012<br /><br />Signature is valid: true<br /></pre><br />This software implementation comes, of course, with the disadvantage that while the actual private key might be protected by Android's system key store, PIN verification and other operations not directly protected by the OS will be executed in a regular app. 
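The sign/verify round trip shown in the output above boils down to the standard JCA <code>Signature</code> API. In this sketch a freshly generated 1024-bit RSA key pair stands in for the device key and certificate, and SHA1withRSA is an assumed algorithm choice; note the 128-byte signature, matching the <code>(128)</code> in the output.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureRoundTrip {

    // Signs data with the private key (the virtual applet's job) and
    // verifies it with the public key (the host-side client's job).
    static boolean roundTrip(KeyPair kp, byte[] data) throws Exception {
        Signature signer = Signature.getInstance("SHA1withRSA");
        signer.initSign(kp.getPrivate());
        signer.update(data);
        byte[] sig = signer.sign(); // 128 bytes for a 1024-bit key

        Signature verifier = Signature.getInstance("SHA1withRSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(data);
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(1024); // larger keys don't fit in a single APDU
        KeyPair kp = kpg.generateKeyPair();

        byte[] data = "sign me!".getBytes(StandardCharsets.US_ASCII);
        System.out.println(roundTrip(kp, data)); // true
    }
}
```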
An Android app, unlike a dedicated smart card, could be compromised by other (malicious) apps with sufficient privileges. However, since recent Android devices do have (some) support for a Trusted Execution Environment (TEE), the sensitive parts of our virtual applet could be implemented as a Trusted Application (TA) running within the TEE. The user-level app would then communicate with the TA using the controlled TEE interface, and the security level of the system could come very close to running an actual applet in a dedicated SE.</div><h3>Summary</h3><div>Android already supports NFC card emulation using an embedded SE (stock Android) or the UICC (various vendor firmwares). However, both of those are tightly controlled by their owning entities (Google or MNOs), and there is currently no way for third-party developers to install applets and create card emulation apps. An alternative to SE-based card emulation is software card emulation, where a user-level app processes reader commands and returns responses via the NFC controller. This is supported by commonly deployed NFC controller chips, but is not implemented in the stock Android NFC subsystem. Recent versions of CyanogenMod, however, do enable it by adding support for two more tag technologies (<code>IsoPcdA</code> and <code>IsoPcdB</code>) that represent contactless readers instead of actual tags. This allows Android applications to emulate pretty much any ISO 14443-4 compliant contactless card application: from EMV payment applications to any custom JavaCard applet. We presented a sample app that emulates a PKI card, allowing you to store PKI credentials on your phone and potentially use them for desktop login or VPN access on any machine equipped with a contactless reader. 
Hopefully software card emulation will become a part of stock Android in the future, making this and other card emulation NFC applications mainstream.</div>Nikolay Elenkovhttps://plus.google.com/117221066931981967754noreply@blogger.com58tag:blogger.com,1999:blog-2873091912851440312.post-49488103821004985552012-08-27T22:19:00.000+09:002012-10-03T13:05:57.426+09:00Exploring Google Wallet using the secure element interfaceIn the <a href="http://nelenkov.blogspot.jp/2012/08/accessing-embedded-secure-element-in.html" target="_blank">first post</a> of this series we showed how to use the embedded secure element interface Android 4.x offers. <a href="http://nelenkov.blogspot.jp/2012/08/android-secure-element-execution.html" target="_blank">Next</a>, we used some <a href="http://www.globalplatform.org/" target="_blank">GlobalPlatform</a> commands to find out more about the SE execution environment in the Galaxy Nexus. We also showed that there is currently no way for third parties to install applets on the SE.&nbsp;Since installing our own applets is not an option, we will now find some pre-installed applets to explore. Currently the only generally available Android application that is known to install applets on the SE is Google's own&nbsp;<a href="https://play.google.com/store/apps/details?id=com.google.android.apps.walletnfcrel" target="_blank">Google Wallet</a>. In this last post, we'll say a few words about how it works and then try to find out what publicly available information its applets host.<br /><h3>Google Wallet and the SE</h3>To quote the Google Play description,&nbsp;'Google Wallet holds your credit and debit cards, offers, and rewards cards'. How does it do this in practice though? The short answer: it's slightly complicated. The longer answer: only Google knows all the details, but we can observe a few things. 
After you install the Google Wallet app on your phone and select an account to use with it, it will contact the <a href="http://www.google.com/wallet/" target="_blank">online Google Wallet</a> service (previously known as Google Checkout), create or verify your account and then provision your phone. The provisioning process will, among other things, use&nbsp;<a href="http://www.firstdata.com/" target="_blank">First Data</a>'s Trusted Service Manager (TSM)&nbsp;<a href="http://www.firstdata.com/en_us/products/merchants/mobile-commerce/google-wallet.html" target="_blank">infrastructure</a>&nbsp;to download, install and personalize a bunch of applets on your phone. This is all done via the Card Manager and the payload of the commands is, of course, encrypted. However, the GP Secure Channel only encrypts the data part of APDUs, so it is fairly easy to map the install sequence on a device modified to log all SE communication. There are three types of applets installed: a Wallet controller applet, a MIFARE manager applet, and of course payment applets that enable your phone to interact with NFC-enabled&nbsp;<a href="http://www.mastercard.us/paypass.html#/home/" target="_blank">PayPass</a>&nbsp;terminals.<br /><br />The controller applet securely stores Google Wallet state and event log data, but most importantly, it enables or disables contactless payment functionality when you unlock the Wallet app by entering your PIN. The latest version seems to have the ability to store and verify a PIN securely (inside the SE), however it does not appear it is actually used by the app yet, since the&nbsp;<a href="https://github.com/rubixconsulting/WalletCracker/" target="_blank">Wallet Cracker</a>&nbsp;can still recover the PIN on a rooted phone. This implies that the PIN hash is still stored in the app's local database.<br /><br />The MIFARE manager applet works in conjunction with the offers and reward/loyalty cards features of Wallet. 
When you save an offer or add a loyalty card, the MIFARE manager applet will write block(s) to the emulated MIFARE 4K Classic card to mirror the offer or card on the SE, letting you redeem it by tapping your phone at a NFC-enabled POS terminal. It also keeps an application directory (similar to the standard MIFARE&nbsp;<a href="http://www.nxp.com/documents/application_note/AN10787.pdf" target="_blank">MAD</a>) in the last sectors, which is updated each time you add or remove a card. The emulated MIFARE card uses custom sector protection keys, which are most probably initialized during the initial provisioning process. Therefore you cannot currently read the contents of the MIFARE card with an external reader. However, the encryption and authentication scheme used by MIFARE Classic has&nbsp;been&nbsp;<a href="http://www.cs.ru.nl/~flaviog/publications/Attack.MIFARE.pdf" target="_blank">broken</a>&nbsp;and proven&nbsp;<a href="http://en.wikipedia.org/wiki/MIFARE#cite_note-22" target="_blank">insecure</a>, and the keys can be recovered easily with readily available tools. It would be interesting to see if the emulated card is susceptible to the same attacks.<br /><br />Finally, there should be one or more&nbsp;<a href="http://en.wikipedia.org/wiki/EMV" target="_blank">EMV</a>-compatible payment applets that enable you to pay with your phone at compatible POS terminals. EMV is an&nbsp;interoperability standard for payments using chip cards, and while each credit card company has their proprietary extensions, the common specifications are&nbsp;<a href="http://www.emvco.com/specifications.aspx" target="_blank">publicly available</a>. The EMV standard specifies how to find out what payment applications are installed on a contactless card, and we will use that information to explore Google Wallet further later.<br /><br />Armed with that basic information we can now extend our program to check if Google Wallet applets are installed. 
Google Wallet has been around for a while, so by now the controller and MIFARE manager applets' AIDs are widely known. However, we don't need to look further than the latest AOSP code, since the system NFC service has those hardcoded. This clearly shows that while SE access code is being gradually made more open, its main purpose for now is to support Google Wallet. The controller AID is&nbsp;<code>A0000004762010</code>&nbsp;and the MIFARE manager AID is <code>A0000004763030</code>. As you can see, they start with the same prefix (<code>A000000476</code>), which we can assume is the Google RID (there doesn't appear to be a public RID registry). The next step is, of course, trying to select those. The MIFARE manager applet responds with a boring&nbsp;<code>0x9000</code>&nbsp;status, which only shows that it's indeed there, but selecting the controller applet returns something more interesting:<br /><br /><pre>6f 0f -- File Control Information (FCI) Template<br /> 84 07 -- Dedicated File (DF) Name<br /> a0 00 00 04 76 20 10 (BINARY)<br /> a5 04 -- File Control Information (FCI) Proprietary Template<br /> 80 02 -- Response Message Template Format 1<br /> 01 02 (BINARY)<br /></pre><br />'File Control Information' and 'Dedicated File' are legacy terms from file system-based cards, but the DF (equivalent to a directory) is the AID of the controller applet (which we already know), and the last piece of data is something new. 
Two bytes look very much like a short value, and if we convert this to decimal we get '258', which happens to be the controller applet version displayed in the 'About' screen of the current Wallet app ('v258').<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-OOYXhj2s6wE/UDs1HtHG5YI/AAAAAAAAIIY/vhH3B7BmynE/s1600/wallet-info.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-OOYXhj2s6wE/UDs1HtHG5YI/AAAAAAAAIIY/vhH3B7BmynE/s400/wallet-info.png" width="225" /></a></div><br />Now that we have an app that can check for wallet applets (see&nbsp;<a href="https://github.com/nelenkov/android-se-access" target="_blank">sample code</a>, screenshot above), we can verify if those are indeed managed by the Wallet app. It has a 'Reset Wallet' action on the Settings screen, which claims to delete 'payment information, card data and transaction history', but how does it affect the installed applets? Trying to select them after resetting Wallet shows that the controller applet has been removed, while the MIFARE manager applet is still selectable. We can assume that any payment applets have also been removed, but we still have no way to check. This leads us to the topic of our next section:<br /><h3>Exploring Google Wallet EMV applets</h3><div>Google Wallet is compatible with PayPass terminals, and as such should follow the relevant specifications. For contactless cards those are defined in the <i>EMV Contactless Specifications for Payment Systems</i> series of 'books'. Book A defines the overall architecture, Book B -- how to find and select a payment application, Book C -- the rules of the actual transaction processing for each 'kernel' (card company-specific processing rules), and Book D -- the underlying contactless communication protocol. 
We want to find out what payment applets are installed by Google Wallet, so we are most interested in Book B and the relevant parts of Book C.</div><div><br /></div><div>Credit cards can host multiple payment applications, for example for domestic and international payment. Naturally, not all POS terminals know of or are compatible with all applications, so cards keep a public EMV app registry at a well known location. This practice is optional for contact cards, but is mandatory for contactless cards. The application is called 'Proximity Payment System Environment' (PPSE) and selecting it will be our first step. The application's AID is derived from the name '2PAY.SYS.DDF01', which translates to <code>'325041592E5359532E4444463031'</code> in hex. Upon successful selection it returns a TLV data structure that contains the AIDs, labels and priority indicators of available applications (see Book B, 3.3.1&nbsp;<i>PPSE Data for Application Selection</i>). To process it, we will use and slightly extend the&nbsp;<a href="http://code.google.com/p/javaemvreader/" target="_blank">Java EMV Reader</a>&nbsp;library, which does similar processing for contact cards. The library uses the standard Java Smart Card I/O <a href="http://docs.oracle.com/javase/7/docs/jre/api/security/smartcardio/spec/index.html" target="_blank">API</a> to communicate with cards, but as we pointed out in the first article, this API is not available on Android. Card communication interfaces are nicely abstracted, so we only need to implement them using Android's native <code>NfcExecutionEnvironment</code>. The main classes we need are <code>SETerminal</code>, which creates a connection to the card, <code>SEConnection</code> to handle the actual APDU exchange, and <code>SECardResponse</code> to parse the card response into status word and data bytes. As an added bonus, this takes care of encapsulating our ugly reflection-based code. 
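Since the PPSE AID is just the ASCII encoding of its name, constructing the SELECT command is straightforward. A minimal sketch (the class and helper names are ours, not part of the Java EMV Reader library):<br /><br />

```java
import java.nio.charset.StandardCharsets;

public class PpseSelect {

    // Build a SELECT-by-name APDU: CLA=00 INS=A4 P1=04 P2=00 Lc=<len> <name>.
    public static byte[] buildSelect(byte[] name) {
        byte[] apdu = new byte[5 + name.length];
        apdu[0] = 0x00;               // CLA
        apdu[1] = (byte) 0xA4;        // INS: SELECT
        apdu[2] = 0x04;               // P1: select by DF name (AID)
        apdu[3] = 0x00;               // P2: first or only occurrence
        apdu[4] = (byte) name.length; // Lc
        System.arraycopy(name, 0, apdu, 5, name.length);
        return apdu;
    }

    public static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02X", b));
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] ppseName = "2PAY.SYS.DDF01".getBytes(StandardCharsets.US_ASCII);
        System.out.println(toHex(buildSelect(ppseName)));
        // prints 00A404000E325041592E5359532E4444463031
    }
}
```

The output matches the PPSE select command visible in the APDU trace further down.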
We also create a <code>PPSE</code> class to parse the PPSE selection response into its components. With all those in place all we need to do is follow the EMV specification. Selecting the PPSE with the following command works at first try, but produces a response with 0 applications:<br /><br /><pre>--&gt; 00A404000E325041592E5359532E4444463031<br />&lt;-- 6F10840E325041592E5359532E4444463031 9000<br />response hex :<br /> 6f 10 84 0e 32 50 41 59 2e 53 59 53 2e 44 44 46<br /> 30 31<br /> response SW1SW2 : 90 00 (Success)<br /> response ascii : o...2PAY.SYS.DDF01<br /> response parsed :<br /> 6f 10 -- File Control Information (FCI) Template<br /> 84 0e -- Dedicated File (DF) Name<br /> 32 50 41 59 2e 53 59 53 2e 44 44 46 30 31 (BINARY)<br /></pre><br />We have initialized the $10 prepaid card available when first installing Wallet, so&nbsp;<i>something</i>&nbsp;must be there. We know that the controller applet manages payment state, so after starting up and unlocking Wallet we finally get more interesting results (shown parsed and with some bits masked below). It turns out that locking the Wallet up effectively hides payment applications by deleting them from the PPSE. 
This, in addition to the fact that card emulation is available only when the phone's screen is on, provides better card security than physical contactless cards, some of which can easily be read by simply using a NFC-equipped mobile phone, as has been&nbsp;<a href="https://viaforensics.com/mobile-security-category/uk-channel-4-news-demo-contactless-payment-cards.html" target="_blank">demonstrated</a>.<br /><br /><pre>Applications (2 found):<br /> Application<br /> AID: a0 00 00 00 04 10 10 AA XX XX XX XX XX XX XX XX<br /> RID: a0 00 00 00 04 (Mastercard International [US])<br /> PIX: 10 10 AA XX XX XX XX XX XX XX XX<br /> Application Priority Indicator<br /> Application may be selected without confirmation of cardholder<br /> Selection Priority: 1 (1 is highest)<br /> Application<br /> AID: a0 00 00 00 04 10 10<br /> RID: a0 00 00 00 04 (Mastercard International [US])<br /> PIX: 10 10<br /> Application Priority Indicator<br /> Application may be selected without confirmation of cardholder<br /> Selection Priority: 2 (1 is highest)<br /></pre><br />One of the applications is the <a href="http://en.wikipedia.org/wiki/EMV#Application_selection" target="_blank">well known</a> MasterCard credit or debit application, and there is another MasterCard app with a longer AID and higher priority (1, the highest). The <a href="http://googlecommerce.blogspot.com/2012/08/use-any-credit-or-debit-card-with.html" target="_blank">recently announced</a> update to Google Wallet allows you to link practically any card to your Wallet account, but transactions are processed by a <a href="http://support.google.com/wallet/bin/answer.py?hl=en&amp;answer=2701024" target="_blank">single 'virtual'</a> MasterCard and then billed back to your actual credit card(s). It is our guess that the first application in the list above represents this virtual card. 
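The RID/PIX split shown in the listing above follows directly from the AID layout: the first 5 bytes are the RID, the remainder (up to 11 bytes) the PIX. A small illustrative helper (names are ours):<br /><br />

```java
import java.util.Arrays;

public class AidParser {

    // First 5 bytes of an AID: the Registered Application Provider Identifier.
    public static byte[] rid(byte[] aid) {
        return Arrays.copyOfRange(aid, 0, 5);
    }

    // Remaining bytes: the Proprietary Identifier eXtension.
    public static byte[] pix(byte[] aid) {
        return Arrays.copyOfRange(aid, 5, aid.length);
    }

    public static String hex(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte x : b) sb.append(String.format("%02X", x));
        return sb.toString();
    }

    public static void main(String[] args) {
        // The well-known MasterCard credit/debit AID from the PPSE listing.
        byte[] aid = {(byte) 0xA0, 0x00, 0x00, 0x00, 0x04, 0x10, 0x10};
        System.out.println("RID: " + hex(rid(aid)));  // A000000004
        System.out.println("PIX: " + hex(pix(aid)));  // 1010
    }
}
```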
The next step in the EMV transaction flow is selecting the preferred payment app, but here we hit a snag: selecting each of the apps always fails with the <code>0x6999</code> ('Applet selection failed') status. It has been reported that this was possible in previous versions of Google Wallet, but has been blocked to prevent <a href="http://www.youtube.com/watch?v=hx5nbkDy6tc" target="_blank">relay attacks</a> and stop Android apps from extracting credit card information from the SE. This leaves us with using the NFC interface if we want to find out more.<br /><br />Most open-source tools for card analysis, such as <a href="https://code.google.com/p/cardpeek/">cardpeek</a> and <a href="http://code.google.com/p/javaemvreader/" target="_blank">Java EMV Reader</a> were initially developed for contact cards, and therefore need a connection to a <a href="http://en.wikipedia.org/wiki/PC/SC" target="_blank">PC/SC</a>-compliant reader to operate. If you have a dual interface reader that provides PC/SC drivers you get this for free, but for a standalone NFC reader we need <a href="http://code.google.com/p/libnfc/" target="_blank">libnfc</a>, <a href="http://code.google.com/p/nfc-tools/source/browse/#svn%2Ftrunk%2Fifdnfc" target="_blank">ifdnfc</a> and <a href="http://pcsclite.alioth.debian.org/pcsclite.html" target="_blank">PCSC lite</a> to complete the PC/SC stack on Linux. Getting those to play nicely together can be a bit tricky, but once it's done card tools work seamlessly. Fortunately, selection via the NFC interface is successful and we can proceed with the next steps in the EMV flow: initiating processing by sending the <code>GET PROCESSING OPTIONS</code> and reading relevant application data using the <code>READ RECORD</code> command. For compatibility reasons, EMV payment applications contain data equivalent to that found on the <a href="http://en.wikipedia.org/wiki/Magnetic_stripe_card#Financial_cards" target="_blank">magnetic stripe</a> of physical cards. 
This includes account number (PAN), expiry date, service code and card holder name. EMV-compatible POS terminals are required to support transactions based on this data only ('Mag-stripe mode'), so some of it could be available on Google Wallet as well. Executing the needed <code>READ RECORD</code> commands shows that it is indeed found on the SE, and both MasterCard applications are linked to the same mag-stripe data. The data is, as usual, in TLV format, and the relevant tags and format are defined in EMV Book C-2. When parsed, it looks like this for the Google prepaid card (slightly masked):<br /><br /><pre>Track 2 Equivalent Data:<br /> Primary Account Number (PAN) - 5430320XXXXXXXX0<br /> Major Industry Identifier = 5 (Banking and financial)<br /> Issuer Identifier Number: 543032 (Mastercard, UNITED STATES OF AMERICA)<br /> Account Number: XXXXXXXX<br /> Check Digit: 0 (Valid)<br /> Expiration Date: Sun Apr 30 00:00:00 GMT+09:00 2017<br /> Service Code - 101:<br /> 1 : Interchange Rule - International interchange OK<br /> 0 : Authorisation Processing - Normal<br /> 1 : Range of Services - No restrictions<br /> Discretionary Data: 0060000000000<br /></pre><br />As you can see, it does not include the card holder name, but all the other information is available, as per the EMV standard. We even get the 'transaction in progress' animation on screen while our reader is communicating with Google Wallet. We can also get the PIN try counter (set to 0, in this case meaning disabled), and a transaction log in the format shown below. We can't verify if the transaction log is used though, since Google Wallet, like a lot of the newer Google services, happens to be limited to the US. 
<br /><br /><pre>Transaction Log:<br /> Log Format:<br /> Cryptogram Information Data (1 byte)<br /> Amount, Authorised (Numeric) (6 bytes)<br /> Transaction Currency Code (2 bytes)<br /> Transaction Date (3 bytes)<br /> Application Transaction Counter (ATC) (2 bytes)<br /></pre></div><br />This was fun, but it doesn't really show much besides the fact that Google Wallet's virtual card(s) comply with the EMV specifications. What is more interesting is that the controller applet APDU commands that toggle contactless payment and modify the PPSE don't require additional application authentication and can be issued by any app that is whitelisted to use the secure element. The controller applet most probably doesn't store any really sensitive information, but since it allows its state to be modified by third party applications, we are unlikely to see any other app besides Google Wallet whitelisted on production devices. Unless, of course, more fine-grained SE access control is implemented in Android.<br /><div><h3>Fine-grained SE access control</h3>The fact that Google Wallet state can be modified by third party apps (granted access to the SE, of course) leads us to another major complication with SE access on mobile devices. While the data on the SE is securely stored and access is controlled by the applets that host it, once an app is allowed access, it can easily perform a denial of service attack against the SE or specific SE applications. Attacks can range from locking the whole SE by repeatedly executing failed authentication attempts until the Card Manager is blocked (a GP-compliant card goes into the TERMINATED state usually after 10 unsuccessful tries), to application-specific attacks such as blocking a cardholder verification PIN or otherwise changing a third party applet state. 
Another attack, more sophisticated but harder to achieve, and possible only on connected devices, is a <a href="http://rd.springer.com/chapter/10.1007/978-3-642-30436-1_1">relay attack</a>.&nbsp;In this attack, the phone's Internet connection is used to receive and execute commands sent by another remote phone, enabling the remote device to emulate the SE of the target device without physical proximity. The way to mitigate those attacks is to exercise finer control over what apps that access the SE can do, by mandating that they can only select specific applets or only send a pre-approved list of APDUs. This is supported by the <a href="http://jcp.org/aboutJava/communityprocess/mrel/jsr177/index.html" target="_blank">JSR-177</a>&nbsp;<i>Security and Trust Services API</i>, which only allows connection to one specific applet and only grants such connections to applications with a trusted signature (currently implemented in the BlackBerry 7 <a href="http://www.blackberry.com/developers/docs/7.0.0api/net/rim/device/api/io/nfc/se/SecureElementManager.html" target="_blank">API</a>).&nbsp;JSR-177 also provides the ability to restrict APDUs by matching them against an APDU mask to determine whether they should be allowed or not. <a href="http://code.google.com/p/seek-for-android/wiki/AccessControlIntroduction" target="_blank">SEEK for Android</a> goes one step further than BlackBerry by supporting fine-grained access control with the access policy stored on the SE. The actual format of ACL rules and the protocols for managing them are defined in the GlobalPlatform <i>Secure Element Access Control</i> standard, which is relatively new (v1.0 released in May 2012). As we have seen, the current (4.0 and 4.1) stock Android versions do restrict access to the SE to trusted applications by whitelisting their certificates (a hash of those would have probably sufficed) in <code>/etc/nfcee_access.xml</code>, but once an app is granted access, it can select any applet and send any APDU to the SE. 
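The APDU-mask idea can be illustrated in a few lines: a command header is allowed only if it matches a reference header on the bits selected by the mask. This is a simplified sketch of the concept, not the exact matching algorithm mandated by JSR-177:<br /><br />

```java
public class ApduFilter {

    // Check a command APDU against a 4-byte reference header (CLA INS P1 P2)
    // and mask: only the masked bits must match for the command to pass.
    public static boolean allowed(byte[] apdu, byte[] header, byte[] mask) {
        if (apdu.length < 4) {
            return false;
        }
        for (int i = 0; i < 4; i++) {
            if ((apdu[i] & mask[i]) != (header[i] & mask[i])) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Example policy: allow only SELECT (INS=A4), any CLA/P1/P2.
        byte[] header = {0x00, (byte) 0xA4, 0x00, 0x00};
        byte[] mask   = {0x00, (byte) 0xFF, 0x00, 0x00};

        byte[] select     = {0x00, (byte) 0xA4, 0x04, 0x00, 0x00};
        byte[] readRecord = {0x00, (byte) 0xB2, 0x01, 0x0C, 0x00};

        System.out.println(allowed(select, header, mask));      // true
        System.out.println(allowed(readRecord, header, mask));  // false
    }
}
```

With such a policy in place, a whitelisted app could still talk to its own applet but could not, for example, issue authentication commands against the Card Manager.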
If third party apps that use the SE are to be allowed in Android, more fine-grained control needs to be implemented by at least limiting the applets that SE-whitelisted Android apps can select.<br /><br />Because for most applications the SE is used in conjunction with NFC, an SE app needs to be notified of relevant NFC events such as RF field detection or applet selection via the NFC interface.&nbsp;Disclosure of such events to malicious applications can also potentially lead to denial of service attacks, which is why access to them needs to be controlled as well. The GP SE access control specification allows rules for controlling access to NFC events to be managed along with applet access rules by saving them on the SE. In Android, global events are implemented using broadcasts, and interested applications can create and register a broadcast receiver component that will receive such broadcasts. Broadcast access can be controlled with standard Android signature-based permissions, but that has the disadvantage that only apps signed with the system certificate would be able to receive NFC events, effectively limiting SE apps to those created by the device manufacturer or MNO. Android 4.x therefore uses the same mechanism employed to control SE access -- whitelisting application certificates. Any application registered in <code>nfcee_access.xml</code> can receive the broadcasts listed below. As you can see, besides RF field detection and applet selection, Android offers notifications for higher-level events such as EMV card removal or MIFARE sector access. By adding a broadcast receiver to our test application as shown below, we were able to receive <code>AID_SELECTED</code> and RF field-related broadcasts. <code>AID_SELECTED</code> carries an extra with the AID of the selected applet, which allows us to start a related activity when an applet we support is selected. 
<code>APDU_RECEIVED</code>&nbsp;is also interesting because it carries an extra with the received APDU, but that doesn't seem to be sent, at least not in our tests.<br /><br /><pre>&lt;receiver android:name="org.myapp.nfc.SEReceiver" &gt;<br /> &lt;intent-filter&gt;<br /> &lt;action android:name="com.android.nfc_extras.action.AID_SELECTED" /&gt;<br /> &lt;action android:name="com.android.nfc_extras.action.APDU_RECEIVED" /&gt;<br /> &lt;action android:name="com.android.nfc_extras.action.MIFARE_ACCESS_DETECTED" /&gt;<br /> &lt;action android:name="android.intent.action.MASTER_CLEAR_NOTIFICATION" /&gt;<br /> &lt;action android:name="com.android.nfc_extras.action.RF_FIELD_ON_DETECTED" /&gt;<br /> &lt;action android:name="com.android.nfc_extras.action.RF_FIELD_OFF_DETECTED" /&gt;<br /> &lt;action android:name="com.android.nfc_extras.action.EMV_CARD_REMOVAL" /&gt;<br /> &lt;action android:name="com.android.nfc.action.INTERNAL_TARGET_DESELECTED" /&gt;<br /> &lt;/intent-filter&gt;<br />&lt;/receiver&gt;<br /></pre><br /></div><div><h3>Summary</h3>We showed that Google Wallet installs a few applets on the SE when first initialized. Besides the expected EMV payment applets, it makes use of a controller applet for securely storing Wallet state and a MIFARE manager applet for reading/writing emulated card sectors from the app. While we can get some information about the EMV environment by sending commands to the SE from an app, payment applets cannot be selected via the wired SE interface, but only via the contactless NFC interface. Controller applet access is, however, available to third party apps, as long as they know the relevant APDU commands, which can easily be traced by logging. This might be one of the reasons why third party SE apps are not supported on Android yet. 
To make third party SE apps possible (besides&nbsp;offering a TSM solution), Android needs to implement more fine-grained access control to the SE, for example by restricting what applets can be selected or limiting the range of allowed APDUs for whitelisted apps.<br /><br /></div>Nikolay Elenkovhttps://plus.google.com/117221066931981967754noreply@blogger.com42tag:blogger.com,1999:blog-2873091912851440312.post-50385329761596698992012-08-25T01:58:00.000+09:002012-09-28T14:36:32.552+09:00Android secure element execution environmentIn the <a href="http://nelenkov.blogspot.jp/2012/08/accessing-embedded-secure-element-in.html" target="_blank">previous post</a>&nbsp;we gave a brief introduction of secure element (SE) support in mobile devices and showed how to communicate with the embedded SE in Android 4.x. We'll now proceed to sending some actual commands to the SE in order to find out more information about its OS and installed applications. Finally, we will discuss options for installing custom applets on the SE.<br /><h3>SE execution environments</h3><div>The Android SE is essentially a smart card in a different package, so most standards and protocols originally developed for smart cards apply. Let's briefly review the relevant ones.</div><br /><div>Smart cards have traditionally been file system-oriented and the main role of the OS was to handle file access and enforce access permissions. Newer cards support a VM running on top of the native OS that allows for the execution of 'platform independent' applications called applets, which make use of a well defined runtime library to implement their functionality. While different implementations of this paradigm exist, by far the most popular one is the <a href="http://www.oracle.com/technetwork/java/javacard/overview/index.html" target="_blank">Java Card</a> runtime environment (JCRE). 
Applets are implemented in a restricted version of the Java language and use a <a href="http://javacard.kenai.com/javadocs/classic/" target="_blank">subset</a> of the runtime library, which offers basic classes for I/O, message parsing and cryptographic operations. While the JCRE specification fully defines the applet runtime environment, it does not specify how to load, initialize and delete applets on actual physical cards (tools are only provided for the JCRE emulator). Since payment services are among the main applications of smart cards, the application loading and initialization (often referred to as 'card personalization') process needs to be controlled, and only authorized entities should be able to alter the card's and installed applications' state. A specification for securely managing applets was originally developed by Visa under the name Open Platform, and is now being maintained and developed by the <a href="http://globalplatform.org/" target="_blank">GlobalPlatform</a>&nbsp;(GP) organization under the name <a href="http://www.globalplatform.org/specificationscard.asp" target="_blank">'GlobalPlatform Card Specification</a>' (GPCS).&nbsp;</div><div><br /></div><div>The Card Specification, like anything developed by a committee, is quite extensive and spans multiple documents. Those are quite abstract at times and make for a fun read, but the gist is that the card has a mandatory Card Manager component (also referred to as the 'Issuer Security Domain') that offers a well defined interface for card and individual application life cycle management. Executing Card Manager operations requires authentication using cryptographic keys saved on the card, and thus only an entity that knows those keys can change the state of the card (one of OP_READY, INITIALIZED, SECURED, CARD_LOCKED or TERMINATED) or manage applets. 
Additionally the GPCS defines secure communication protocols (called Secure Channel, SC) that besides authentication offer confidentiality and message integrity when communicating with the card.<br /><h3>SE&nbsp;communication protocols</h3>As we showed in the previous post, Android's interface for communicating with the SE is the <code>byte[] transceive(byte[] command)</code> method of the <code>NfcExecutionEnvironment</code> class. The structure of the exchanged messages, called APDUs &nbsp;(Application Protocol Data Unit)&nbsp;is defined in the ISO/IEC 7816-4: <i>Organization, security and commands for interchange</i> standard. The reader (also known as a Card Acceptance Device, CAD) sends command APDUs (sometimes referred to as C-APDUs) to the card, comprised of a mandatory 4-byte header with a command class (CLA), instruction (INS) and two parameters (P1 and P2). This is followed by the optional command data length (Lc), the actual data and finally the maximum number of response bytes expected, if any (Le). The card returns a response APDU (R-APDU) with a mandatory status word (SW1 and SW2) and optional response data. Historically, command APDU data has been limited to 255 bytes and response APDU data to 256 bytes. Recent cards and readers support extended APDUs with data length up to 65536 bytes, but those are not always usable, mostly for various compatibility reasons. The lower level &nbsp;communication between the reader and the card is carried out by one of several transmission protocols, the most widely used ones being T=0 (byte-oriented) and T=1 (block-oriented). Both are defined in ISO 7816-3:<i> Cards with contacts — Electrical interface and transmission protocols</i>. The APDU exchange is not completely protocol-agnostic, because T=0 cannot directly send response data, but only notify the reader of the number of available bytes. 
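The short command APDU layout just described can be captured in a small encoder (a sketch with our own helper names; it handles only short, non-extended APDUs):<br /><br />

```java
import java.io.ByteArrayOutputStream;

public class Apdu {

    // Encode a short command APDU: CLA INS P1 P2 [Lc data] [Le].
    // Pass le = -1 to omit the Le byte; le = 0x00 means 'up to 256 bytes'.
    public static byte[] encode(int cla, int ins, int p1, int p2,
                                byte[] data, int le) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(cla);
        out.write(ins);
        out.write(p1);
        out.write(p2);
        if (data != null && data.length > 0) {
            out.write(data.length);             // Lc: one byte for short APDUs
            out.write(data, 0, data.length);
        }
        if (le >= 0) {
            out.write(le);                      // Le
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // An empty SELECT: header only, no command data, Le present.
        byte[] emptySelect = encode(0x00, 0xA4, 0x04, 0x00, null, 0x00);
        StringBuilder sb = new StringBuilder();
        for (byte b : emptySelect) sb.append(String.format("%02X", b));
        System.out.println(sb);  // prints 00A4040000
    }
}
```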
Additional command APDUs (<code>GET RESPONSE</code>) need to be sent in order to retrieve the response data.</div><br />The original ISO 7816 standards were developed for contact cards, but the same APDU-based communication model is used for contactless cards as well. It is layered on top of the wireless transmission protocol defined by <a href="http://en.wikipedia.org/wiki/ISO_14443" target="_blank">ISO/IEC 14443-4</a>&nbsp;which behaves much like T=1 for contact cards.<br /><h3>Exploring the Galaxy Nexus SE execution environment</h3><div>With most of the theory out of the way, it is time to get our hands dirty and finally try to &nbsp;communicate with the SE. As mentioned in the previous post, the SE in the Galaxy Nexus is a chip from NXP's <a href="http://mifare.net/files/3013/0079/2103/SmartMX%20Leaflet_Oct10.pdf" target="_blank">SmartMX</a> series. It runs a Java Card-compatible operating system and comes with a GlobalPlatform-compliant Card Manager. Additionally, it offers <a href="http://www.nxp.com/products/identification_and_security/smart_card_ics/mifare_smart_card_ics/mifare_classic/" target="_blank">MIFARE Classic</a> 4K emulation and a <a href="http://mifare4mobile.org/" target="_blank">MIFARE4Mobile</a>&nbsp;<a href="http://mifare.net/products/mifare4mobile1/mifare4mobile-applet/" target="_blank">manager applet</a> that allows for personalization of the emulated MIFARE tag. The MIFARE4Mobile specification is available for free, but comes with a non-disclosure, no-open-source, keep-it-shut <a href="http://mifare4mobile.org/downloads/specifications/" target="_blank">agreement</a>, so we will skip that and focus on the GlobalPlatform implementation.&nbsp;</div><div><br /></div><div>As we already pointed out, authentication is required for most of the Card Manager operations. The required keys are, naturally, not available and controlled by Google and their partners. 
Additionally, a number of consecutive failed authentication attempts (usually 10) will lock the Card Manager and make it impossible to install or remove applets, so trying out different keys is not an option either (and this is a good thing). However, the Card Manager does provide some information about itself and the runtime environment on the card in order to make it possible for clients to adjust their behaviour dynamically and be compatible with different cards.&nbsp;</div><div><br /></div><div>Since Java Card/GP is a multi-application environment, each application is identified by an AID&nbsp;(Application Identifier), consisting of a 5-byte RID (Registered Application Provider Identifier or Resource Identifier) and an up to 11-byte PIX (Proprietary Identifier eXtension). Thus an AID can be from 5 to 16 bytes long. Before we can send commands to a particular applet, it needs to be made active by issuing the <code>SELECT</code> (CLA='00', INS='A4') command with its AID. Like all applications, the Card Manager is also identified by an AID, so our first step is to find that out. This can be achieved by issuing an empty <code>SELECT</code>, which both selects the Card Manager and returns information about the card and the Issuer Security Domain. An empty select is simply a select without an AID specified, so the command becomes: <code>00 A4 04 00 00</code>. Let's see what this produces:<br /><br /><pre>--&gt; 00A4040000<br />&lt;-- 6F658408A000000003000000A5599F6501FF9F6E06479100783300734A06072A86488<br />6FC6B01600C060A2A864886FC6B02020101630906072A864886FC6B03640B06092A86488<br />6FC6B040215650B06092B8510864864020103660C060A2B060104012A026E0102 9000<br /></pre><br />A successful status (<code>0x9000</code>) and a long string of bytes. The format of this data is defined in Chapter 9, <i>APDU Command Reference</i>, of the GPCS, and, as with most things in the smart card world, it is in <a href="http://en.wikipedia.org/wiki/Type-length-value">TLV</a> (Tag-Length-Value) format. 
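Building command APDUs and checking the status word of a response is easy to do in code. Here is a minimal, self-contained sketch in plain Java (the <code>Apdu</code> helper class is our own invention for illustration, not part of any Android or smart card API) that produces the empty <code>SELECT</code> shown above and extracts the status word from a response:

```java
// Minimal helpers for building ISO 7816-4 command APDUs and checking
// the status word of a response. Hypothetical helper for illustration only.
public class Apdu {

    // SELECT by AID: CLA=00 INS=A4 P1=04 P2=00, followed by Lc and the AID.
    // A zero-length AID produces the 'empty SELECT': 00 A4 04 00 00.
    public static byte[] selectByAid(byte[] aid) {
        byte[] apdu = new byte[5 + aid.length];
        apdu[0] = (byte) 0x00;        // CLA
        apdu[1] = (byte) 0xA4;        // INS: SELECT
        apdu[2] = (byte) 0x04;        // P1: select by name (AID)
        apdu[3] = (byte) 0x00;        // P2: first or only occurrence
        apdu[4] = (byte) aid.length;  // Lc
        System.arraycopy(aid, 0, apdu, 5, aid.length);
        return apdu;
    }

    // The last two bytes of a response APDU are the status word (SW1 SW2).
    public static int statusWord(byte[] response) {
        int sw1 = response[response.length - 2] & 0xff;
        int sw2 = response[response.length - 1] & 0xff;
        return (sw1 << 8) | sw2;
    }

    public static boolean isSuccess(byte[] response) {
        return statusWord(response) == 0x9000;
    }

    public static String toHex(byte[] data) {
        StringBuilder sb = new StringBuilder(data.length * 2);
        for (byte b : data) sb.append(String.format("%02X", b));
        return sb.toString();
    }

    public static void main(String[] args) {
        // The empty SELECT used to select the Card Manager: prints 00A4040000
        System.out.println(toHex(selectByAid(new byte[0])));
    }
}
```

The same helpers can be reused for any of the commands discussed below by plugging in different header bytes.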
In TLV each unit of data is described by a unique tag, followed by its length in bytes, and finally the actual data. Most structures are recursive, so the data can host another TLV structure, which in turn wraps another, and so on. Parsing this is not terribly hard, but it is not fun either, so we'll borrow some classes from the <a href="http://code.google.com/p/javaemvreader/">Java EMV Reader</a> project to make our job a bit easier. You can see the full code in the <a href="https://github.com/nelenkov/android-se-access" target="_blank">sample project</a>, but parsing the response produces something like this on a Galaxy Nexus: <br /><br /><pre>SD FCI: Security Domain FCI<br /> AID: AID: a0 00 00 00 03 00 00 00<br /> RID: a0 00 00 00 03 (Visa International [US])<br /> PIX: 00 00 00<br /><br /> Data field max length: 255<br /> Application prod. life cycle data: 479100783300<br /> Tag allocation authority (OID): globalPlatform 01<br /> Card management type and version (OID): globalPlatform 02020101<br /> Card identification scheme (OID): globalPlatform 03<br /> Global Platform version: 2.1.1<br /> Secure channel version: SC02 (options: 15)<br /> Card config details: 06092B8510864864020103<br /> Card/chip details: 060A2B060104012A026E0102<br /></pre></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-Rg1T6EbzSms/UDeQOyEvj4I/AAAAAAAAIH8/0yw8miz0X3Q/s1600/gp-info.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-Rg1T6EbzSms/UDeQOyEvj4I/AAAAAAAAIH8/0yw8miz0X3Q/s400/gp-info.png" width="225" /></a></div><br />This shows us the AID of the Card Manager (<code>A0 00 00 00 03 00 00 00</code>), the version of the GP implementation (2.1.1) and the supported Secure Channel protocol (SC02, implementation option '15', which translates to: 'Initiation mode explicit, C-MAC on modified APDU, ICV set to zero, ICV encryption for CMAC session, 3 Secure 
Channel Keys') along with some proprietary data about the card configuration. Using the other GP command that doesn't require authentication, <code>GET DATA</code>, we can also get some information about the number and type of keys the Card Manager uses. The Key Information Template is marked by tag 'E0', so the command becomes <code>80 CA 00 E0 00</code>. Executing it produces another TLV structure which, when parsed, spells this out: <br /><br /><pre>Key: ID: 1, version: 1, type: DES (ECB/CBC), length: 128 bits<br />Key: ID: 2, version: 1, type: DES (ECB/CBC), length: 128 bits<br />Key: ID: 3, version: 1, type: DES (ECB/CBC), length: 128 bits<br />Key: ID: 1, version: 2, type: DES (ECB/CBC), length: 128 bits<br />Key: ID: 2, version: 2, type: DES (ECB/CBC), length: 128 bits<br />Key: ID: 3, version: 2, type: DES (ECB/CBC), length: 128 bits<br /></pre><br />This means that the Card Manager is configured with two versions of one key set, consisting of 3 double-length DES keys (3DES where K3 = K1, aka DESede). The keys are used for authentication/encryption (S-ENC), data integrity (S-MAC) and data encryption (DEK), respectively. It is those keys we need to know in order to be able to install our own applets on the SE. <br /><br />There is other information we can get from the Card Manager, such as the card issuer ID and the card image number, but it is of less interest. It is also possible to obtain information about the card manufacturer, card operating system version and release date by getting the Card Production Life Cycle Data (CPLC). This is done by issuing the <code>GET DATA</code> command with the '9F7F' tag: <code>80 CA 9F 7F 00</code>. However, most of the CPLC data is encoded using proprietary tags and IDs, so it is not very easy to read anything but the card serial number. 
Here's the output from a Galaxy Nexus:<br /><br /><pre>CPLC<br /> IC Fabricator: 4790<br /> IC Type: 5044<br /> Operating System Provider Identifier: 4791<br /> Operating System Release Date: 0078<br /> Operating System Release Level: 3300<br /> IC Fabrication Date: 1017<br /> IC Serial Number: 082445XX<br /> IC Batch Identifier: 4645<br /> IC ModuleFabricator: 0000<br /> IC ModulePackaging Date: 0000<br /> ICC Manufacturer: 0000<br /> IC Embedding Date: 0000<br /> Prepersonalizer Identifier: 1726<br /> Prepersonalization Date: 3638<br /> Prepersonalization Equipment: 32343435<br /> Personalizer Identifier: 0000<br /> Personalization Date: 0000<br /> Personalization Equipment: 00000000<br /></pre><br /><h3>Getting an applet installed on the SE</h3>No, this section doesn't tell you how to recover the Card Manager keys, so if that's what you are looking for, you can skip it. This is mostly speculation about different applet distribution models Google or carriers may (or may not) choose to use to allow third-party applets on their phones.<br /><br />It should be clear by now that the only way to install an applet on the SE is to have access to the Card Manager keys. Since Google will obviously not give up the keys to production devices (unless they decide to scrap Google Wallet), there are two main alternatives for third parties that want to use the SE: 'development' devices with known keys, or some sort of an agreement with Google to have their applets approved and installed via Google's infrastructure. With Nexus-branded devices with an unlockable bootloader available on multiple carriers, as well as directly from Google (at least in the US), it is unlikely that dedicated development devices will be sold again. That leaves delegated installation by Google or authorized partners. 
Let's see how this can be achieved.<br /><br />The need to support multiple applications and load SE applets on mobile devices dynamically has been recognized by GlobalPlatform, and they have come up with, you guessed it, a standard that defines how this can be implemented. It is called <i>Secure Element Remote Application Management</i> and&nbsp;specifies an administration protocol for performing remote management of SE applets on a mobile device. Essentially, it involves securely downloading an applet and the necessary provisioning scripts&nbsp;(created by a Service Provider)&nbsp;from an Admin Server, which are then forwarded by an Admin Agent running on the mobile device to the SE. The standard doesn't mandate a particular implementation, but in practice the process is carried out by downloading APDU scripts over HTTPS, which are then sent to the SE using one of the compatible GP secure channel protocols, such as SC02. As we shall see in the next article, a similar, though non-general and proprietary, scheme is already implemented in Google Wallet. If it were generalized to allow the installation of any (approved) applet, it could be used by applications that want to take advantage of the secure element: on first run they could check if the applet is installed, and if not, send an SE provisioning request to the Admin Server. It would then determine the proper Card Manager keys for the target device and prepare the necessary installation scripts. The role of the Admin Agent can be taken by the Google Play app, which already has the necessary system permissions to install applications, and would only need to be extended to support SE access and Card Manager communication. As demonstrated by Google Wallet, this is already technologically possible. 
The difficulties in making it generally available are mostly contractual and/or political.<br /><br />Since not all NFC-enabled phones with an embedded SE are produced or sold by Google, different vendors will control their respective Card Manager keys, and thus the Admin Server will need to know all of those in order to allow applet installation on all compatible devices. If UICCs are supported as SEs, this would be further complicated by the addition of new players: MNOs. Furthermore, service providers that deal with personal and/or financial information (pretty much all of the ones that matter do) require compliance with their own security standards, and that makes the job of the entity providing the Admin Server that much harder. The proposed solution to this is a neutral broker entity, called a Trusted Service Manager (<a href="http://en.wikipedia.org/wiki/Trusted_service_manager" target="_blank">TSM</a>), that both sets up the required contractual agreements with all parties involved and takes care of securely distributing SE applications to supported mobile devices. The idea was originally introduced by the GSM Association a few years ago, and companies that offer TSM services exist today (most of those were already in the credit card provisioning business). <a href="http://www.rim.com/" target="_blank">RIM</a> also provides a TSM service for their BlackBerries, but they have the benefit of being the manufacturer of all supported devices.<br /><br />To sum this up: the only viable way of installing applets on the SE on commercial devices is by having them submitted to and delivered by a distribution service controlled by the device vendor or provided by a third-party TSM. Such a (general purpose) service is not yet available for Android, but is entirely technologically possible. 
If NFC payments and ticketing using Android do take off, more companies will want to jump on the bandwagon and contactless application distribution services will naturally follow, but this is sort of a chicken-and-egg problem. Even after they do become available, they will most likely deal only with major service providers such as credit card or transportation companies. <i>Update</i>: It seems Google's plan is to let third parties install their transport cards, loyalty cards, etc. on the SE, but all under the <a href="http://www.theverge.com/2012/8/28/3273784/google-to-open-wallet-app-to-third-party-passes-loyalty-cards-and-ids">Google Wallet umbrella</a>, so a general purpose TSM might not be an option, at least for a while. <br /><br />A more practical alternative for third-party developers is software card emulation. In this mode, the emulated card is not on a SE, but is actually implemented as a regular Android app. Once the NFC chip senses an external reader, it forwards communication to a registered app, which processes it and returns a response which the NFC chip simply relays. This obviously doesn't offer the same security as an SE, but comes with the advantage of not having to deal with MNOs, vendors or TSMs. This mode is not available in stock Android (and is unlikely to make it into the mainstream), but has been <a href="http://r.cyanogenmod.com/#/q/status:merged+owner:doug,n,z" target="_blank">integrated</a> into CyanogenMod and there are already <a href="http://www.cyanogenmod.com/blog/cyanogenmod9-1-and-simplytapp" target="_blank">commercial services</a> that use it. 
For more info on the security implications of software card emulation, see this excellent <a href="http://www.medien.ifi.lmu.de/iwssi2012/papers/iwssi-spmu2012-roland.pdf" target="_blank">paper</a>.<br /><h3>Summary</h3>We showed that the SE in recent Android phones offers a Java Card-compatible execution environment and implements GlobalPlatform specifications for card and applet management. Those require authentication using secret keys for all operations that change the card state. Because the keys for Android's SE are only available to Google and their partners, it is currently impossible for third parties to install applets on the SE, but that could change if general purpose TSM services targeting Android devices become available.<br /><br />The <a href="http://nelenkov.blogspot.jp/2012/08/exploring-google-wallet-using-secure.html" target="_blank">final part</a> of the series will look into the current <a href="https://play.google.com/store/apps/details?id=com.google.android.apps.walletnfcrel" target="_blank">Google Wallet</a>&nbsp;implementation and explore how it makes use of the SE.<br /><div></div><div></div><div></div>Nikolay Elenkovhttps://plus.google.com/117221066931981967754noreply@blogger.com9tag:blogger.com,1999:blog-2873091912851440312.post-31646608116303219102012-08-22T22:47:00.002+09:002012-09-11T15:16:41.298+09:00Accessing the embedded secure element in Android 4.x<a href="http://nelenkov.blogspot.jp/2012/07/jelly-bean-hardware-backed-credential.html" target="_blank">After discussing the improved credential storage</a> and Android's&nbsp;<a href="http://nelenkov.blogspot.jp/2012/08/changing-androids-disk-encryption.html" target="_blank">disk encryption</a>, we'll now look at another way to protect your secrets: the embedded secure element (SE) found in recent devices. 
In the first post of this three-part series we'll give some background info about the SE and show how to use the SE communication interfaces Android 4.x offers. In the <a href="http://nelenkov.blogspot.jp/2012/08/android-secure-element-execution.html" target="_blank">second part</a> we'll try sending some actual commands in order to find out more about the SE execution environment. <a href="http://nelenkov.blogspot.jp/2012/08/exploring-google-wallet-using-secure.html" target="_blank">Finally</a> we will discuss Google Wallet and how it makes use of the SE.<br /><h3>What is a Secure Element and why do you want one?&nbsp;</h3>A Secure Element (SE) is a tamper resistant&nbsp;<a href="http://en.wikipedia.org/wiki/Smart_card" target="_blank">smart card</a> chip capable of running smart card applications (called applets or cardlets) with a certain level of security and features. A smart card is essentially a minimalistic computing environment on a single chip, complete with a CPU, ROM, EEPROM, RAM and an I/O port. Recent cards also come equipped with cryptographic co-processors implementing common algorithms such as DES, AES and RSA. Smart cards use various techniques to implement <a href="http://en.wikipedia.org/wiki/Tamper_resistant" target="_blank">tamper resistance</a>, making it quite hard to extract data by disassembling or analyzing the chip. They come pre-programmed with a multi-application OS that takes advantage of the hardware's memory protection features to ensure that each application's data is only available to itself. 
Application installation and (optionally) access are controlled by requiring the use of cryptographic keys for each operation.<br /><div><br /></div><div>The SE can be integrated in mobile devices in various form factors: a <a href="http://en.wikipedia.org/wiki/UICC" target="_blank">UICC</a> (commonly known as a SIM card), embedded in the handset or connected to a SD card slot.&nbsp;If the device supports <a href="http://en.wikipedia.org/wiki/Near_field_communication" target="_blank">NFC</a>, the SE is usually connected to the NFC chip, making it possible to communicate with the SE wirelessly.&nbsp;</div><div><br /></div><div>Smart cards have been around for a while and are now used in applications ranging from pre-paid phone calls and transit ticketing to credit cards and VPN credential storage. Since an SE installed in a mobile device has equivalent or superior capabilities to those of a smart card, it can theoretically be used for any application physical smart cards are currently used for. Additionally, since an SE can host multiple applications, it has the potential to replace the bunch of cards people use daily with a single device. Furthermore,&nbsp;because&nbsp;the SE can be controlled by the device's OS, access to it can be restricted by requiring additional authentication (PIN or passphrase) to enable it.&nbsp;</div><div><br /></div><div>So an SE is obviously a very useful thing to have, with a lot of potential, but why would you want to access one from your apps? Aside from the obvious payment applications, which you couldn't realistically build unless you own a bank and have a contract with <a href="http://corporate.visa.com/" target="_blank">Visa</a> and friends, there is the possibility of storing other cards you already have (access cards, loyalty cards, etc.) on your phone, but that too is somewhat of a gray area and may require contracting the relevant issuing entities. 
The main application for third party apps would be implementing and running a critical part of the app, such as credential storage or license verification, inside the SE to guarantee that it is impervious to reversing and cracking. Other apps that can benefit from being implemented in the SE are One Time Password (OTP) generators and, of course, PKI credential (i.e., private key) storage. While implementing those apps is possible today with standard tools and technologies, using them in practice on current commercial Android devices is not that straightforward. We'll discuss this in detail in the second part of the series, but let's first explore the types of SEs available on mobile devices, and the level of support they have in Android.&nbsp;</div><div><h3>Secure Element form factors in mobile devices</h3></div><div>As mentioned in the previous section, SEs come integrated in different flavours: as a UICC, embedded or as plug-in cards for an SD card slot. This post is obviously about the embedded SE, but let's briefly review the rest as well.&nbsp;</div><div><br /></div><div>Pretty much any mobile device nowadays has a UICC (aka SIM card, although it is technically a SIM only when used on GSM networks) of some form or another. UICCs are actually smart cards that can host applications, and as such are one form of a SE. However, since the UICC is only connected to the baseband processor, which is separate from the application processor that runs the main device OS, it cannot be accessed directly from Android. All communication needs to go through the Radio Interface Layer (RIL) which is essentially a proprietary IPC interface to the baseband. Communication with the UICC SE is carried out using special extended AT commands (<code>AT+CCHO</code>, <code>AT+CCHC</code>, <code>AT+CGLA</code> as defined by <a href="http://www.3gpp.org/ftp/Specs/html-info/27007.htm" target="_blank">3GPP TS 27.007</a>), which the current Android telephony manager does not support. 
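For illustration, here is roughly what those commands look like as the text strings a RIL extension would have to hand to the baseband. This is a hedged sketch based on the TS 27.007 command definitions; the session-ID handling (normally returned by the <code>+CCHO</code> response) is simplified and the <code>UiccAtCommands</code> helper is hypothetical:

```java
// Sketch of the 3GPP TS 27.007 'generic UICC logical channel access'
// AT command strings. Illustration only; session-ID handling is simplified.
public class UiccAtCommands {

    // AT+CCHO opens a logical channel to the applet with the given AID
    // (hex-encoded); the modem replies with a session ID for later commands.
    public static String openChannel(String aidHex) {
        return "AT+CCHO=\"" + aidHex + "\"";
    }

    // AT+CGLA sends a hex-encoded APDU on an open channel. The length
    // parameter is the number of characters in the command string.
    public static String sendApdu(int sessionId, String apduHex) {
        return "AT+CGLA=" + sessionId + "," + apduHex.length()
                + ",\"" + apduHex + "\"";
    }

    // AT+CCHC closes the logical channel.
    public static String closeChannel(int sessionId) {
        return "AT+CCHC=" + sessionId;
    }
}
```

Implementing support for these in Android would mean plumbing them through both `rild` and the vendor RIL library, which is exactly what the SEEK patches discussed next provide.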
The <a href="http://code.google.com/p/seek-for-android/" target="_blank">SEEK for Android</a> project provides patches that do implement the needed commands, allowing for communication with the UICC via their standard <a href="http://seek-for-android.googlecode.com/svn/trunk/doc/index.html">SmartCard API</a>, which is a&nbsp;reference implementation of the <a href="http://www.simalliance.org/" target="_blank">SIMalliance</a> <a href="http://www.simalliance.org/en/about/workgroups/open_mobile_api_working_group/" target="_blank">Open Mobile API</a> specification. However, as with most components that talk directly to the hardware in Android, the RIL consists of an open source part (<code>rild</code>), and a proprietary library (<code>libXXX-ril.so</code>). In order to support communication with the UICC secure element, support needs to be added both to <code>rild</code> and to the underlying proprietary library, which is of course up to hardware vendors. The SEEK project does provide a patch that lets the emulator talk directly to a UICC in an external PC/SC reader, but that is only usable for experiments. While there is some talk of integrating this functionality into stock Android (there is even an empty <code>packages/apps/SmartCardService</code> directory in the AOSP tree), there is currently no standard way to communicate with the UICC SE through the RIL (some commercial devices with custom firmware are <a href="http://code.google.com/p/seek-for-android/wiki/DeviceDetails">reported</a> to support it though).<br /><br />An alternative way to use the UICC as a SE is using the Single Wire Protocol (<a href="http://en.wikipedia.org/wiki/Single_Wire_Protocol" target="_blank">SWP</a>) when the UICC is connected to a NFC controller that supports it. This is the case in the Nexus S, as well as the Galaxy Nexus, and while this functionality is supported by the NFC controller drivers, it is disabled by default. 
This is however a software limitation, and people have managed to <a href="http://forum.xda-developers.com/showthread.php?t=1281946" target="_blank">patch</a> the AOSP source to get around it and successfully communicate with the UICC. This has the greatest potential to become part of stock Android, however, as of the current release (4.1.1), it is still not available.&nbsp;</div><div><br /></div><div>Another form factor for an SE is an Advanced Security SD card (<a href="https://www.sdcard.org/developers/overview/ASSD/" target="_blank">ASSD</a>), which is basically an SD card with an embedded SE chip. When connected to an Android device with an SD card slot, running a SEEK-patched Android version, the SE can be accessed via the SmartCard API. However, Android devices with an SD card slot are becoming the exception rather than the norm, so it is unlikely that&nbsp;ASSD&nbsp;Android support will make it to the mainstream.<br /><br />And finally, there is the embedded SE. As the name implies, an embedded SE is part of the device's mainboard, either as a dedicated chip or integrated with the NFC one, and is not removable. The first Android device to feature an embedded SE was the Nexus S, which also introduced NFC support to Android. Subsequent Nexus-branded devices, as well as other popular handsets, have continued this trend. 
The device we'll use in our experiments, the Galaxy Nexus, is&nbsp;<a href="http://www.ifixit.com/Teardown/Samsung-Galaxy-Nexus-Teardown/7182/2" target="_blank">built</a>&nbsp;with NXP's&nbsp;<a href="http://www.nxp.com/news/press-releases/2011/11/nxp-nfc-solution-implemented-in-galaxy-nexus-from-google.html" target="_blank">PN65N</a>&nbsp;chip, which bundles an NFC radio controller and an SE (<a href="http://www.classic.nxp.com/acrobat_download2/other/identification/SFS107710.pdf" target="_blank">P5CN072</a>, part of NXP's&nbsp;<a href="http://mifare.net/files/3013/0079/2103/SmartMX%20Leaflet_Oct10.pdf" target="_blank">SmartMX</a>&nbsp;series) in a single package (a diagram can be found <a href="http://www.nfc.cc/technology/nxp-nfc-chips/" target="_blank">here</a>). <br /><h3>NFC and the Secure Element</h3></div><div>NFC and the SE are tightly integrated in Android, and not only because they share the same silicon, so let's say a few words about NFC. NFC has three standard modes of operation:&nbsp;</div><div><ul><li>reader/writer (R/W) mode, allowing for accessing external NFC tags&nbsp;</li><li>peer-to-peer (P2P) mode, allowing for data exchange between two NFC devices&nbsp;</li><li>card emulation (CE) mode, which allows the device to emulate a traditional contactless smart card&nbsp;</li></ul><div>What can Android do in each of these modes? The R/W mode allows you to read NDEF tags and contactless cards, such as some transport cards. While this is, of course, useful, it essentially turns your phone into a glorified card reader. P2P mode&nbsp;has been the most demoed and marketed one, in the form of <a href="http://developer.android.com/guide/topics/connectivity/nfc/nfc.html#p2p" target="_blank">Android Beam</a>. This is only cool the first couple of times though, and since the API only gives you higher-level access to the underlying P2P communication protocol, its applications are currently limited. 
CE was not available in the initial Gingerbread release, and was introduced later in order to support <a href="http://www.google.com/wallet/" target="_blank">Google Wallet</a>. This is the NFC mode with the greatest potential for real-life applications. It allows your phone to be programmed to emulate pretty much any physical contactless card, considerably slimming down your physical wallet in the process.<br /><br />The embedded SE is connected to the NFC controller through a SignalIn/SignalOut Connection (S2C, standardized as <a href="http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-373.pdf" target="_blank">NFC-WI</a>) and has&nbsp;three modes of operation: off, wired and virtual mode. In off mode there is no&nbsp;communication with the SE. In wired mode the SE&nbsp;is visible to the Android OS as if it were a&nbsp;contactless smartcard connected to the RF reader. In virtual mode&nbsp;the SE is visible to external readers as if the phone&nbsp;were a contactless smartcard. These modes are naturally mutually exclusive, so we can communicate with the SE either via the contactless interface (e.g., from an external reader), or through the wired interface (e.g., from an Android app). This post will focus on using the wired mode to communicate with the SE from an app. Communicating via NFC is no different than reading a physical contactless card and we'll touch on it briefly in the last post of the series.<br /><h3>Accessing the embedded Secure Element</h3><div>This is a lot of (useful?) information, but we still haven't answered the main question of this entry: how can we access the embedded SE? The bad news is that there is no public Android SDK&nbsp;API for this (yet). 
The good news is that accessing it in a standard and (somewhat) officially supported way is possible in current Android versions.</div><div><br /></div></div></div><div>Card emulation, and consequently the internal APIs for accessing the embedded SE, were introduced in Android 2.3.4, and that is the version Google Wallet launched on. Those APIs were, and remain, hidden from SDK applications. Additionally, using them required system-level permissions (<code>WRITE_SECURE_SETTINGS</code> or <code>NFCEE_ADMIN</code>) in 2.3.4 and subsequent Gingerbread releases, as well as in the initial Ice Cream Sandwich release (4.0, API Level 14). What this means is that only Google (for Nexus devices) and mobile vendors (for everything else) could distribute apps that use the SE, because such apps need to either be part of the core OS, or be signed with the platform keys, controlled by the respective vendor. Since the only app that made use of the SE was Google Wallet, which ran only on the Nexus S (and initially on a single carrier), this was good enough. However, it made it impossible to develop and distribute an SE app without having it signed by the platform vendor. Android 4.0.4 (API Level 15) changed that by replacing the system-level permission requirement with signing certificate (aka 'signature' in Android framework terms) whitelisting at the OS level. While this still requires modifying core OS files, and thus vendor cooperation, there is no need to sign SE applications with the vendor key, which greatly simplifies distribution. Additionally, since the whitelist is maintained in a file, it can easily be updated using an OTA to add support for more SE applications.<br /><br />In practice this is implemented by the <code>NfceeAccessControl</code> class and enforced by the system <code>NfcService</code>. 
<code>NfceeAccessControl</code> reads the whitelist from <code>/etc/nfcee_access.xml</code>, which is an XML file that stores a list of signing certificates and package names that are allowed to access the SE. Access can be granted both to all apps signed by a particular certificate's private key (if no package is specified), or to a single package (app) only. Here's what the file looks like:</div><div><br /></div><pre>&lt;?xml version="1.0" encoding="utf-8"?&gt;<br />&lt;resources xmlns:xliff="urn:oasis:names:tc:xliff:document:1.2"&gt;<br /> &lt;signer android:signature="30820...90"&gt;<br /> &lt;package android:name="org.foo.nfc.app"&gt;<br /> &lt;/package&gt;&lt;/signer&gt;<br />&lt;/resources&gt;<br /></pre><br /><div>This would allow SE access to the 'org.foo.nfc.app' package, if it is signed by the specified signer. So the first step to getting our app to access the SE is adding its signing certificate and package name to the <code>nfcee_access.xml</code> file. This file resides on the system partition (<code>/etc</code> is symlinked to <code>/system/etc</code>), so we need root access in order to remount it read-write and modify the file. The stock file already has the Google Wallet certificate in it, so it is a good idea to start with that and add our own package; otherwise Google Wallet SE access would be disabled. The 'signature' attribute is a hex encoding of the signing certificate in DER format, which is a pity since that results in an excessively long string (a hash of the certificate would have sufficed). We can either add a &lt;debug/&gt; element to the file, install it, try to access the SE and get the string we need to add from the access denied exception, or simplify the process a bit by preparing the string in advance. 
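If the certificate bytes are already at hand (for example, obtained at runtime from <code>PackageInfo.signatures</code> via <code>Signature.toByteArray()</code>), producing the hex string takes only a few lines of Java. A minimal sketch (the <code>CertHex</code> class is our own; lowercase output, matching the stock file, is an assumption since the parser's case handling is not documented):

```java
// Hex-encode DER certificate bytes in the form nfcee_access.xml expects:
// a continuous lowercase hex string with no separators (case is an assumption).
public class CertHex {

    public static String encode(byte[] der) {
        StringBuilder sb = new StringBuilder(der.length * 2);
        for (byte b : der) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) {
        // Example bytes only; on a device they would come from
        // PackageInfo.signatures, not be hardcoded. Prints: 30820190
        System.out.println(encode(new byte[] { 0x30, (byte) 0x82, 0x01, (byte) 0x90 }));
    }
}
```

The keytool pipeline shown next achieves the same result from the keystore file, without writing any code.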
We can get the certificate bytes in hex format with a command like this:<br /><br /><pre>$ keytool -exportcert -v -keystore my.keystore -alias my_signing_key \<br />-storepass password|xxd -p -|tr -d '\n'<br /></pre><br />This will print the hex string on a single line, so you might want to redirect it to a file for easier copying. Add a new <code>&lt;signer&gt;</code> element to the stock file, add your app's package name and the certificate hex string, and replace the original file in <code>/etc/</code> (backups are always a good idea). You will also need to reboot the device for the changes to take effect, since the file is only read when <code>NfcService</code> starts. <br /><br />As we said, there are no special permissions required to access the SE in ICS (4.0.3 and above) and Jelly Bean (4.1), so we only need to add the standard <code>NFC</code> permission to our app's manifest. However, the library that implements SE access is marked as optional, and to get it loaded for our app, we need to mark it as required in the manifest with the <code>&lt;uses-library&gt;</code> tag. 
The <code>AndroidManifest.xml</code> for the app should look something like this:<br /><br /><pre>&lt;manifest xmlns:android="http://schemas.android.com/apk/res/android"<br /> package="org.foo.nfc.app"<br /> android:versionCode="1"<br /> android:versionName="1.0" &gt;<br /> &lt;uses-sdk<br /> android:minSdkVersion="15"<br /> android:targetSdkVersion="16" /&gt;<br /><br /> &lt;uses-permission android:name="android.permission.NFC" /&gt;<br /><br /> &lt;application<br /> android:icon="@drawable/ic_launcher"<br /> android:label="@string/app_name"<br /> android:theme="@style/AppTheme" &gt;<br /> &lt;activity<br /> android:name=".MainActivity"<br /> android:label="@string/title_activity_main" &gt;<br /> &lt;intent-filter&gt;<br /> &lt;action android:name="android.intent.action.MAIN" /&gt;<br /> &lt;category android:name="android.intent.category.LAUNCHER" /&gt;<br /> &lt;/intent-filter&gt;<br /> &lt;/activity&gt;<br /><br /> &lt;uses-library<br /> android:name="com.android.nfc_extras"<br /> android:required="true" /&gt;<br /> &lt;/application&gt;<br />&lt;/manifest&gt;<br /></pre><br />With the boilerplate out of the way, it is finally time to actually access the SE API. Android doesn't currently implement a standard smart card communication API such as <a href="http://docs.oracle.com/javame/config/cldc/opt-pkgs/api/security/satsa-api/jsr177/index.html" target="_blank">JSR 177</a>&nbsp;or the <a href="http://www.simalliance.org/en/about/workgroups/open_mobile_api_working_group/" target="_blank">Open Mobile API</a>, but instead offers a very basic communication interface in the <code>NfcExecutionEnvironment</code> (NFC-EE) class. 
It has only three public methods:<br /><br /><pre>public class NfcExecutionEnvironment {<br /> public void open() throws IOException {...}<br /><br /> public void close() throws IOException {...}<br /><br /> public byte[] transceive(byte[] in) throws IOException {...}<br />}<br /></pre><br />This simple interface is sufficient to communicate with the SE, so now we just need to get access to an instance. This is available via a static method of the <code>NfcAdapterExtras</code> class, which controls both the card emulation route (currently only to the SE, since UICC support is not available) and NFC-EE management. So the full code to send a command to the SE becomes:<br /><br /><pre>NfcAdapterExtras adapterExtras = NfcAdapterExtras.get(NfcAdapter.getDefaultAdapter(context));<br />NfcExecutionEnvironment nfcEe = adapterExtras.getEmbeddedExecutionEnvironment();<br />nfcEe.open();<br />byte[] response = nfcEe.transceive(command);<br />nfcEe.close();<br /></pre><br />As we mentioned earlier however, <code>com.android.nfc_extras</code> is an optional package and thus not part of the SDK. We can't import it directly, so we have to either build our app as part of the full Android source (by placing it in <code>/packages/apps/</code>), or resort to reflection. Since the SE interface is quite small, we opt for ease of building and testing, and will use reflection. 
The code to get, open and use an NFC-EE instance now degenerates to something like this:<br /><br /><pre>Class nfcExtrasClazz = Class.forName("com.android.nfc_extras.NfcAdapterExtras");<br />Method getMethod = nfcExtrasClazz.getMethod("get", Class.forName("android.nfc.NfcAdapter"));<br />NfcAdapter adapter = NfcAdapter.getDefaultAdapter(context);<br />// static method, so the receiver argument is null<br />Object nfcExtras = getMethod.invoke(null, adapter);<br /><br />Method getEEMethod = nfcExtras.getClass().getMethod("getEmbeddedExecutionEnvironment", <br /> (Class[]) null);<br />Object ee = getEEMethod.invoke(nfcExtras, (Object[]) null);<br />Class eeClazz = ee.getClass();<br />Method openMethod = eeClazz.getMethod("open", (Class[]) null);<br />Method transceiveMethod = eeClazz.getMethod("transceive",<br /> new Class[] { byte[].class });<br />Method closeMethod = eeClazz.getMethod("close", (Class[]) null);<br /><br />openMethod.invoke(ee, (Object[]) null);<br />byte[] response = (byte[]) transceiveMethod.invoke(ee, new Object[] { command });<br />closeMethod.invoke(ee, (Object[]) null);<br /></pre><br />We can of course wrap this up in a prettier package, and we will in the second part of the series. What is important to remember is to call <code>close()</code> when done, because wired access to the SE blocks contactless access while the NFC-EE is open. We should now have a working connection to the embedded SE, and sending some bytes should produce an (error) response. Here's a first try:<br /><br /><pre>D/SEConnection(27318): --&gt; 00000000<br />D/SEConnection(27318): &lt;-- 6E00<br /></pre><div><br /></div><br />We'll explain what the response means and show how to send some actually meaningful commands in the second part of the article. <br /><h3>Summary</h3>A secure element is a tamper resistant execution environment on a chip that can execute applications and store data in a secure manner. An SE is found on the UICC of every Android phone, but the platform currently doesn't allow access to it. 
Recent devices come with NFC support, which is often combined with an embedded secure element chip, usually in the same package. The embedded secure element can be accessed both externally via an NFC reader/writer (virtual mode) and internally via the <code>NfcExecutionEnvironment</code> API (wired mode). Access to the API is currently controlled by a system-level whitelist of signing certificates and package names. Once an application is whitelisted, it can communicate with the SE without any other special permissions or restrictions.</div><br />Nikolay Elenkov<br /><h2>Changing Android's disk encryption password (2012-08-03)</h2>We've been discussing some of Jelly Bean's <a href="http://nelenkov.blogspot.jp/2012/07/using-app-encryption-in-jelly-bean.html" target="_blank">new</a> <a href="http://nelenkov.blogspot.jp/2012/07/jelly-bean-hardware-backed-credential.html" target="_blank">security</a> <a href="http://nelenkov.blogspot.jp/2012/07/certificate-blacklisting-in-jelly-bean.html" target="_blank">features</a>, but this post will take a few steps back and focus on an older one that has been available since Honeycomb (3.0), announced in the beginning of the now distant 2011: disk encryption. We'll glance over the implementation, discuss how passwords are managed and introduce a simple tool that lets you change the password from the comfort of Android's UI.<br /><br /><h3>Android disk encryption implementation</h3>Android 3.0 <a href="http://developer.android.com/about/versions/android-3.0-highlights.html" target="_blank">introduced</a> disk encryption along with device administrator policies that can enforce it, and advertised it as one of several 'enhancements for the enterprise'. Of course Honeycomb tablets never really took off, let alone in the enterprise. 
Disk encryption, however, persevered and has been available in all subsequent versions. Now that ICS is on <a href="http://developer.android.com/about/dashboards/index.html" target="_blank">about 16%</a> of all Android devices and Jelly Bean's share will start to increase as well in the coming months, disk encryption might finally see wider adoption.<br /><br />Unlike most internal Android features, disk encryption has actually been publicly documented quite extensively, so if you are interested in the details, do read the <a href="http://source.android.com/tech/encryption/android_crypto_implementation.html" target="_blank">implementation notes</a>. We'll only give a short overview here, focusing on key and password management.<br /><br />Android's disk encryption makes use of <a href="http://www.saout.de/misc/dm-crypt/" target="_blank">dm-crypt</a>, which is now the standard disk encryption subsystem in the Linux kernel. <code>dm-crypt</code> maps an encrypted physical block device to a logical plain text one, and all reads and writes to it are decrypted/encrypted transparently. The encryption mechanism used for the filesystem in Android is 128-bit AES in CBC mode with <a href="http://en.wikipedia.org/wiki/Disk_encryption_theory#Encrypted_salt-sector_initialization_vector_.28ESSIV.29" target="_blank">ESSIV:SHA256</a>. The master key is encrypted with another 128-bit AES key, derived from a user-supplied password using 2000 rounds of <a href="http://en.wikipedia.org/wiki/PBKDF2" target="_blank">PBKDF2</a> with a 128-bit random salt. The resulting encrypted master key and the salt used in the derivation process are stored, along with other metadata, in a footer structure at the end of the encrypted partition (last 16 Kbytes). 
This allows for changing the decryption password quickly, since the only thing that needs to be re-encrypted with the newly derived key is the master key (16 bytes).<br /><br />The user-mode part of disk encryption is implemented in the <code>cryptfs</code> module of Android's volume daemon (<code>vold</code>). <code>cryptfs</code> has commands for both creating and mounting an encrypted partition, as well as for verifying and changing the master key encryption password. Android system services communicate with <code>cryptfs</code> by sending commands to <code>vold</code> through a local socket, and it in turn sets system properties that describe the current state of the encryption or mount process. This results in a fairly complex boot procedure, described in detail in the <a href="http://source.android.com/tech/encryption/android_crypto_implementation.html" target="_blank">implementation notes</a>. We are, however, more interested in how the encryption password is set and managed.<br /><br /><h3>Disk encryption password</h3><div>When you first encrypt the device, you are asked to either confirm your device unlock PIN/password, or to set one if you haven't already or are using the pattern screen lock. This password or PIN is then used to derive the master key encryption key, and you are required to enter it each time you boot the device, then once more to unlock the screen after it starts. 
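The key derivation described earlier (2000 rounds of PBKDF2 over the user-supplied password, with a 128-bit random salt, yielding a 128-bit AES key) can be sketched in plain Java. This is an illustration of the scheme from the implementation notes, not Android's actual <code>vold</code> code, and the HMAC-SHA1 PRF is an assumption; class and method names are mine:

```java
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class CryptfsKdf {
    // Parameters as described in the disk encryption implementation notes
    private static final int ITERATIONS = 2000;
    private static final int KEY_BITS = 128;
    private static final int SALT_BYTES = 16; // 128-bit random salt

    /** Derives the AES key that encrypts the (16-byte) master key. */
    public static SecretKeySpec deriveKek(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        return new SecretKeySpec(factory.generateSecret(spec).getEncoded(), "AES");
    }

    /** Fresh random salt, stored alongside the encrypted master key in the footer. */
    public static byte[] randomSalt() {
        byte[] salt = new byte[SALT_BYTES];
        new SecureRandom().nextBytes(salt);
        return salt;
    }
}
```

Note that the derivation is deterministic: the same password and salt always yield the same key-encryption key, which is what makes offline brute force of a short PIN feasible.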
As you can see from the screenshot below, Android doesn't have a dedicated setting to manage the encryption password once the device is encrypted: changing the screen lock password/PIN will also silently change the device encryption password.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-bh0szMSEmjY/UBonk__BelI/AAAAAAAAH1k/gHLCfXgYqO0/s1600/security-settings.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-bh0szMSEmjY/UBonk__BelI/AAAAAAAAH1k/gHLCfXgYqO0/s400/security-settings.png" width="225" /></a></div><div><br /></div><div>This is most probably a usability-driven decision: most users would be confused by having to remember and enter two different passwords, at different times, and would probably quickly forget the less often used one (for disk encryption). While this design is good for usability, it effectively forces you to use a simple disk encryption password, since you have to enter it each time you unlock the device, usually dozens of times a day. No one would enter a complex password that many times, and thus most users opt for a simple numeric PIN. Additionally, passwords are limited to 16 characters, so using a passphrase is not an option.</div><div><br /></div><div>So what's the problem with this? After all, to get to the data on the phone you need to guess the screen unlock password anyway, so why bother with a separate one for disk encryption? Because the two passwords protect your phone against two different types of attack. Most screen lock attacks would be online, brute force ones: essentially someone trying out different passwords on a running device after they get brief access to it. 
After a few unsuccessful attempts, Android will lock the screen for a few minutes (rate-limiting), then if more failed unlock attempts ensue, completely lock (requiring Google account authentication to unlock) or even wipe the device. Thus even a relatively short screen lock PIN offers adequate protection in most cases. Of course, if someone has physical access to the device or a disk image of it, they can extract password hashes and crack them offline without worrying about rate-limiting or device wiping. This, in fact, is the scenario that full disk encryption is designed to protect against: once a device is stolen or confiscated for some reason, the attacker can either brute force the actual device, or copy its data and analyze it even after the device is returned or disposed of. As we mentioned in the previous section, the encrypted master key is stored on disk, and if the password used to derive its encryption key is based on a short numeric PIN, it can be brute forced in seconds, or at worst, minutes. This <a href="https://viaforensics.com/mobile-security/droid-gaining-access-android-user-data.html" target="_blank">presentation</a> by <a href="https://viaforensics.com/" target="_blank">viaForensics</a> details one such attack (slides 25-27) and shows that this is far from theoretical and can be achieved with readily <a href="https://viaforensics.com/viaextract/viaextract-includes-android-encryption-cracking.html">available tools</a>. 
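To see why a PIN-derived key falls so quickly, consider the keyspace: a 4-digit PIN gives only 10,000 candidates, each costing one 2000-round PBKDF2 derivation plus a trial decryption of the 16-byte master key. A back-of-the-envelope sketch (the derivation rate used below is a hypothetical parameter, not a measurement):

```java
public class BruteForceEstimate {
    /** Number of candidates for an all-numeric PIN of the given length. */
    public static long keyspace(int pinDigits) {
        long n = 1;
        for (int i = 0; i < pinDigits; i++) {
            n *= 10;
        }
        return n;
    }

    /** Seconds to try every candidate at the given key-derivation rate. */
    public static double secondsToExhaust(int pinDigits, double derivationsPerSecond) {
        return keyspace(pinDigits) / derivationsPerSecond;
    }
}
```

At a hypothetical 10,000 derivations per second, a 4-digit PIN is exhausted in about a second and a 6-digit PIN in under two minutes, consistent with the 'seconds, or at worst, minutes' figure above; an alphanumeric passphrase pushes the keyspace out of practical reach.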
A remote wipe solution could prevent this attack by deleting the master key, which only takes a second and renders the device useless, but this is often not an option, since the device might be offline or turned off.</div><div><br /></div><div>Hopefully we've established that having a strong disk encryption password is a good idea, but how can we set one without making screen unlocking unusable?</div><div><br /></div><h3>Changing the disk encryption password</h3><div>As we mentioned in the first section, Android services communicate with the <code>cryptfs</code> module by sending it commands through a local socket. This is of course limited to system applications, but Android comes with a small utility command that can directly communicate with <code>vold</code> and can be used from a root shell. So as long as your phone is rooted, i.e., you have a SUID <code>su</code> binary installed, you can send the following <code>cryptfs</code> command to change the disk encryption password:</div><div><br /></div><div><pre>$ su -c vdc cryptfs changepw newpass<br />su -c vdc cryptfs changepw newpass<br />200 0 0<br /></pre><br />This doesn't affect the screen unlock password/PIN in any way, and doesn't impose any limits on password length, so you are free to set a complex password or passphrase. The downside is that if you change the screen unlock password, the device encryption one will be automatically changed as well and you will need to repeat the procedure. This is not terribly difficult, but can be cumbersome, especially if you are on the go. 
You should definitely star this Android <a href="http://code.google.com/p/android/issues/detail?id=29468" target="_blank">issue</a>&nbsp;to have it integrated into Android's system UI (which will probably require extending the <a href="http://developer.android.com/reference/android/app/admin/DevicePolicyManager.html" target="_blank">device policy</a> as well), but in the meantime you can use my <a href="https://play.google.com/store/apps/details?id=org.nick.cryptfs.passwdmanager" target="_blank">Cryptfs Password</a> tool to easily change the device encryption password.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-sQu1E-_0v1Q/UBo6Jq9urlI/AAAAAAAAH14/Ybt8D3l8tj0/s1600/main.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-sQu1E-_0v1Q/UBo6Jq9urlI/AAAAAAAAH14/Ybt8D3l8tj0/s400/main.png" width="225" /></a></div><br />The app tries to make the process relatively foolproof by first checking your current password and then displaying the new one in a dialog if the change succeeds. However, you will only be required to use the new password at the next boot, so it is important not to forget it until then, and take a full backup just in case. Short of brute-forcing, the only way to recover from a forgotten encryption password is to factory reset the device, deleting all user data in the process, so proceed with caution. The app will verify that you have root access by checking if you have one of the more popular 'superuser' apps (<a href="https://play.google.com/store/apps/details?id=com.noshufou.android.su" target="_blank">Superuser</a> or <a href="https://play.google.com/store/apps/details?id=eu.chainfire.supersu" target="_blank">SuperSU</a>) installed, and trying to execute a dummy command with <code>su</code> at startup. 
If your device is not encrypted, it will refuse to start.<br /><br />The implementation is quite straightforward: it simply invokes the <code>verifypw</code> and <code>changepw</code> <code>cryptfs</code> commands using the passwords you provided. If you are interested in the details, or simply won't let a random app mess with your device encryption password, <a href="https://github.com/nelenkov/cryptfs-password-manager" target="_blank">clone the code</a> and build it yourself. If you are the more trusting kind, you can install it via <a href="https://play.google.com/store/apps/details?id=org.nick.cryptfs.passwdmanager" target="_blank">Google Play</a>.<br /><h3>Summary</h3></div><div>While Android's disk encryption is a useful security feature without any (currently) known flaws, its biggest weakness is that it requires you to use the device unlock PIN or password to protect the disk encryption key. Since those are usually rather short, this opens the door to practical brute force attacks against encrypted volumes. Setting a separate, more complex disk encryption password using the provided tool (or directly with the <code>vdc</code> command) makes those attacks far less effective. 
This does currently require root access, however, so you also need to make sure that your device is otherwise secured, mainly by relocking the bootloader, as described in <a href="http://pof.eslack.org/2012/07/30/fortifying-a-galaxy-nexus-with-stock-ish-image-and-root-access/" target="_blank">this article</a>.</div><br />Nikolay Elenkov<br /><h2>Certificate blacklisting in Jelly Bean (2012-07-27)</h2>The last two posts introduced <a href="http://nelenkov.blogspot.jp/2012/07/using-app-encryption-in-jelly-bean.html" target="_blank">app encryption</a>, the new <a href="http://nelenkov.blogspot.jp/2012/07/jelly-bean-hardware-backed-credential.html" target="_blank">system key store</a> and a few other security-related features introduced in Jelly Bean. Browsing the AOSP code reveals another new feature which sits higher in the security stack than the previously discussed ones -- certificate blacklisting. In this article we will present some details about its implementation and introduce a sample app that allows us to test how blacklisting works in practice. <br /><h3><span style="background-color: white;">Why blacklist certificates?</span></h3><div>In a perfect world, a working <a href="http://en.wikipedia.org/wiki/Public_key_infrastructure" target="_blank">Public Key Infrastructure</a> (PKI) takes care of issuing, distributing and revoking certificates as necessary. All that a system needs to verify the identities of previously unknown machines and users are a few trust anchor certificates. In practice, though, there are a number of <a href="http://en.wikipedia.org/wiki/X.509#Security" target="_blank">issues</a>. 
Those have been known for some time, but the <a href="http://en.wikipedia.org/wiki/Comodo_Group#Breach_of_security" target="_blank">recent</a> <a href="http://en.wikipedia.org/wiki/Diginotar#Issuance_of_fraudulent_certificates" target="_blank">breaches</a> in top-level CAs have shown that the problems and their consequences are far from theoretical. Probably the biggest PKI issue is that revocation of root certificates is not really supported. Most OSes and browsers come with a pre-configured set of trusted CA certificates (dozens of them!) and when a CA certificate is compromised there are two main ways to handle it: 1. tell users to remove it from the trust store; or, 2. issue an emergency update that removes the affected certificate. Expecting users to handle this is obviously unrealistic, so that leaves the second option. Windows modifies OS trust anchors by distributing patches via Windows Update, and browser vendors simply release a new patch version. However, even if an update removes a CA certificate from the system trust store, a user can still install it again, especially when presented with a 'do this, or you can't access this site' ultimatum. To make sure removed trust anchors are not brought back, the hashes of their public keys <a href="http://src.chromium.org/viewvc/chrome/trunk/src/net/base/cert_verify_proc.cc?view=diff&amp;r1=145876&amp;r2=145877" style="background-color: white;" target="_blank">are added</a> to a blacklist and the OS/browser rejects them even if they are in the user trust store. This approach effectively revokes CA certificates (within the scope of the OS/browser, of course) and takes care of PKI's inability to handle compromised trust anchors. However, it's not exactly ideal: even an emergency update takes some time to prepare, and even after it is out some users won't update right away, no matter how often they are being nagged about it. 
CA compromises are relatively rare and widely publicized though, so it seems to work OK in practice (for now, at least).</div><div><span style="background-color: white;"><br /></span></div><div>While CA breaches are fairly uncommon, end entity (EE) key compromise occurs much more often. Whether due to a server breach, stolen laptop or a lost smart card, it happens daily. Fortunately, modern PKI systems have been designed with this in mind -- CAs can revoke certificates and publish revocation information in the form of <a href="http://en.wikipedia.org/wiki/Certificate_Revocation_List" target="_blank">CRLs</a>, or provide online revocation status using <a href="http://en.wikipedia.org/wiki/OCSP" target="_blank">OCSP</a>. Unfortunately, this <a href="http://www.imperialviolet.org/2011/03/18/revocation.html" target="_blank">doesn't really work</a> in the real world. Revocation checking generally requires network access to a machine different from the one we are trying to connect to, and as such has a fairly high failure rate. To mitigate this most browsers do their best to fetch fresh revocation information, but if this fails for some reason, they simply ignore the error (soft-fail), or at best show some visual indication that revocation information is not available. To solve this Google Chrome has opted to <a href="http://www.imperialviolet.org/2012/02/05/crlsets.html" target="_blank">disable online revocation checks</a> altogether, and now uses its online update mechanism to proactively push revocation information to browsers, without requiring an application update or restart. Thus Chrome can have an up-to-date local cache of revocation information which makes certificate validation both faster and more reliable. This is yet another blacklist (Chrome calls it a '<a href="https://github.com/agl/crlset-tools" target="_blank">CRL set</a>'), this time based on information published by each CA. 
The browser vendor effectively managing revocation data on the user's behalf is quite novel, and not everyone thinks it's a good idea, but it has worked well so far.</div><h3>Android certificate blacklisting</h3><div>In Android versions prior to 4.0 (Ice Cream Sandwich, ICS), the system trust store was a single Bouncy Castle key store file. Modifying it without root permissions was impossible and the OS didn't have a supported way to amend it. That meant that adding new trust anchors or removing compromised ones required an OS update. Since, unlike regular desktop OSes, updates are generally handled by carriers and not the OS vendor, they are usually few and far between. What's more, if a device doesn't sell well, it may never get an official update. In practice this means that there are thousands of devices that still trust compromised CAs, or don't trust newer CAs that have issued hundreds of web site certificates. ICS <a href="http://nelenkov.blogspot.jp/2011/11/using-ics-keychain-api.html" target="_blank">changed this</a> by making the system trust store mutable and adding a UI, as well as an SDK API, that allows for adding and removing trust anchors. This didn't quite solve PKI's number one problem though -- aside from the user manually disabling a compromised trust anchor, an OS update was still required to blacklist a CA certificate. Additionally, Android does <i>not</i> perform online revocation checks when validating certificate chains, so there was no way to detect compromised end entity certificates, even if they had been revoked.<br /><br /></div><div>This finally leads us to the topic of the article -- Android 4.1 (Jelly Bean, JB) has taken steps to allow for online update of system trust anchors and revocation information by introducing certificate blacklists. 
There are now two system blacklists:<br /><ul><li>a public key hash blacklist (to handle compromised CAs)</li><li>a serial number blacklist (to handle compromised EE certificates)</li></ul></div><div>The certificate chain validator component takes those two lists into consideration when verifying web site or user certificates. Let's look at how this is implemented in a bit more detail.<br /><br /></div><div>Android uses a <a href="http://developer.android.com/guide/topics/providers/content-providers.html" target="_blank">content provider</a> to store OS settings in a system database. Some of those settings can be modified by third party apps holding the necessary permissions, while some are reserved for the system and can only be changed by going through the system settings UI, or by another system application. The latter are known as 'secure settings'. Jelly Bean adds two new secure settings under the following URIs:<br /><ul><li><code>content://settings/secure/pubkey_blacklist</code></li><li><code>content://settings/secure/serial_blacklist</code></li></ul></div><div>As the names imply, the first one stores public key hashes of compromised CAs and the second one a list of EE certificate serial numbers. Additionally, the system server now starts a <code style="background-color: white;">CertBlacklister</code> component which registers itself as a <code style="background-color: white;">ContentObserver</code> for the two blacklist URIs. Whenever a new value is written to those, the <code style="background-color: white;">CertBlacklister</code> gets notified and writes the value to a file on disk. The format of the files is simple: a comma-delimited list of hex-encoded public key hashes or certificate serial numbers. 
The actual files are:<br /><ul><li>certificate blacklist: <code>/data/misc/keychain/pubkey_blacklist.txt</code></li><li>serial number blacklist: <code>/data/misc/keychain/serial_blacklist.txt</code></li></ul><div>Why write them to disk when they are already available in the settings database? Because the component that actually uses the blacklists is a standard Java <a href="http://docs.oracle.com/javase/6/docs/technotes/guides/security/certpath/CertPathProgGuide.html" target="_blank">CertPath API</a> class that doesn't know anything about Android and its system databases. The actual class, <code>PKIXCertPathValidatorSpi</code>, is part of the <a href="http://www.bouncycastle.org/" target="_blank">Bouncy Castle</a> JCE provider, modified to handle certificate blacklists, which is an Android-specific feature and not defined in the standard CertPath API. The <a href="http://en.wikipedia.org/wiki/PKIX" target="_blank">PKIX</a> certificate validation algorithm the class implements is <a href="http://tools.ietf.org/html/rfc5280#page-71" target="_blank">rather complex</a>, but what Jelly Bean adds is fairly straightforward:</div><div><ul><li>when verifying an EE (leaf) certificate, check if its serial number is in the serial number blacklist. If it is, return the same error (exception) as if the certificate had been revoked.</li><li>when verifying a CA certificate, check if the hash of its public key is in the public key blacklist. If it is, return the same error as if the certificate had been revoked.</li></ul><div>The certificate path validator component is used throughout the whole system, so blacklists affect both applications that use HTTP client classes and the native Android browser and WebView. As mentioned above, modifying the blacklists requires system permissions, so only core system apps can use them. 
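To make the on-disk format concrete, here is a small sketch of how the serial number list could be parsed and consulted. The class and method names are mine; the real checks live inside the modified Bouncy Castle validator:

```java
import java.math.BigInteger;
import java.util.HashSet;
import java.util.Set;

public class SerialBlacklist {
    private final Set<BigInteger> serials = new HashSet<>();

    /** Parses the comma-delimited, hex-encoded format of serial_blacklist.txt. */
    public SerialBlacklist(String fileContents) {
        for (String entry : fileContents.split(",")) {
            String s = entry.trim();
            if (!s.isEmpty()) {
                serials.add(new BigInteger(s, 16));
            }
        }
    }

    /** Mirrors the check performed against each EE (leaf) certificate's serial number. */
    public boolean isBlacklisted(BigInteger certSerial) {
        return serials.contains(certSerial);
    }
}
```

In the real validator a hit here produces the same exception as a revoked certificate; the public key hash list is handled analogously, keyed on a digest of the CA's public key rather than a serial number.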
There are no apps in the AOSP source that actually call those APIs, but good candidates to manage blacklists are the Google services components, available on 'Google experience' devices (i.e., devices with the Play Store client pre-installed). Those manage Google accounts, access to Google services and provide push-style notifications (aka <a href="http://developer.android.com/guide/google/gcm/index.html" target="_blank">Google Cloud Messaging</a>, GCM). Since GCM allows for real-time server-initiated push notifications, it's a safe bet that those will be used to trigger certificate blacklist updates (in fact, some source code comments hint at that). This all sounds good on paper (well, screen actually), but let's see how well it works on a real device. Enough theory, on to</div><div><h3><span style="background-color: white;">Using Android certificate blacklisting</span></h3></div><div>As explained above, the API to update blacklists is rather simple: essentially two secure settings keys, the values being the actual blacklists in hex-encoded form. Using them requires system permissions though, so our test application needs to either live in <code>/system/app</code> or be signed with the platform certificate. As usual, we choose the former for our tests. 
A screenshot of the app is shown below.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-0uR0TmV5pLo/UBJcnxOw4wI/AAAAAAAAHt4/PAaU2A29UJ4/s1600/cert-blacklister.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-0uR0TmV5pLo/UBJcnxOw4wI/AAAAAAAAHt4/PAaU2A29UJ4/s400/cert-blacklister.png" width="225" /></a></div><div><br /></div><div>The app allows us to install a CA certificate to the system trust store (using the <code><a href="http://developer.android.com/reference/android/security/KeyChain.html">KeyChain</a></code> API), verify a certificate chain (consisting of the CA certificate and a single EE certificate), add either of the certificates to the system blacklist, and finally clear it so we can start over. The code is quite straightforward, see the <a href="https://github.com/nelenkov/cert-blacklist" target="_blank">github repository</a> for details. One thing to note is that it instantiates the low-level <code>org.bouncycastle.jce.provider.CertBlacklist</code> class in order to check directly whether modifying the blacklist succeeded. Since this class is not part of the public API, it is accessed using reflection.</div><div><br /></div><div>Some experimentation reveals that while the <code>CertBlacklister</code> observer works as expected and changes to the blacklists are immediately written to the corresponding files in <code>/data/misc/keychain</code>, verifying the chain succeeds even after the certificates have been blacklisted. The reason for this is that, like all system classes, the certificate path validator class is pre-loaded and shared across all apps. Therefore it reads the blacklist files only at startup, and a system restart is needed to have it re-read the files. After a restart, validation fails with the expected error: 'Certificate revocation of serial XXXX'. 
Another issue is that while blacklisting by serial number works as expected, public key blacklisting doesn't appear to work in the current public build (JRO03C on Galaxy Nexus as of July 2012). This is a result of improper handling of the key hash format and will hopefully be fixed in a subsequent JB maintenance release. Update: it is now fixed in AOSP master.</div><h3>Summary</h3><div>In Jelly Bean, Android takes steps to get on par with the Chrome browser with respect to managing certificate trust. It introduces features that allow for modifying blacklists dynamically: based on push notifications, and without requiring a system update. While the current implementation has some rough edges and does require a reboot to apply updates, once those are smoothed out, certificate blacklisting will definitely contribute to making Android more resilient to PKI-related attacks and vulnerabilities.</div><br />Nikolay Elenkov<br /><h2>Jelly Bean hardware-backed credential storage (2012-07-12)</h2>Along with all the user-facing new <a href="http://developer.android.com/about/versions/jelly-bean.html" target="_blank">features</a> everyone is talking about, the latest Android release has quite a few security improvements under the hood. Of those, only <a href="http://nelenkov.blogspot.jp/2012/07/using-app-encryption-in-jelly-bean.html" target="_blank">app encryption</a> has been properly announced, while the rest remain mostly covered up by upper-level APIs. 
This, of course, is not fair, so let's call them out (the list is probably not exhaustive):<br /><ul><li>RSA and DSA key generation and signatures are now implemented in native code for better performance</li><li>TLS v1.2 support</li><li>improved system key store</li><li>new OpenSSL interface (engine) to the system key store</li><li>new key management HAL component -- <code>keymaster</code></li><li>hardware-backed <code>keymaster</code> implementation on Galaxy Nexus and Nexus 7</li></ul><div>The first two features are mostly self-explanatory, but the rest merit some exploration. Let's look into each one in turn.<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-9CL5R4sWIXM/T_hIHUPYRuI/AAAAAAAAHI0/47Cs1YluFew/s1600/IMG_20120707_112601.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-9CL5R4sWIXM/T_hIHUPYRuI/AAAAAAAAHI0/47Cs1YluFew/s400/IMG_20120707_112601.jpg" width="300" /></a></div><br /></div><div><h3>System key store improvements</h3></div><div>As we have <a href="http://nelenkov.blogspot.jp/2011/11/ics-credential-storage-implementation.html" target="_blank">already discussed</a>, the system key store in Android is provided by a native daemon that encrypts secrets using a key derived from the device unlock password, stores them on disk and regulates key access based on UID. In ICS and previous versions, the <code>keystore</code> daemon simply stores opaque encrypted blobs and the only metadata available (UID of owner and key name) was encoded in the file name under which blobs are stored. In Jelly Bean (JB), blobs also have a version field and a type field.
The following key types are newly defined:</div><div><div><ul><li><code>TYPE_GENERIC</code></li><li><code>TYPE_MASTER_KEY</code></li><li><code>TYPE_KEY_PAIR</code></li></ul></div></div><div><code>TYPE_GENERIC</code> is used for key blobs saved using the previous get/put interface, and <code>TYPE_MASTER_KEY</code> is, of course, only used for the key store master key. The newly added <code>TYPE_KEY_PAIR</code> is used for key blobs created using the new <code>GENERATE</code> and <code>IMPORT</code> commands. Before we go into more details, here are the <code>keystore</code> commands added in Jelly Bean:</div><div><ul><li><code>GENERATE</code></li><li><code>IMPORT</code></li><li><code>SIGN</code></li><li><code>VERIFY</code></li><li><code>GET_PUBKEY</code></li><li><code>DEL_KEY</code></li><li><code>GRANT</code></li><li><code>UNGRANT</code></li></ul><div>In order to use a key stored using the pre-JB implementation, we needed to first export the raw key bytes, and then use them to initialize an actual key object. Thus even though the key blob is encrypted on disk, the plain text key eventually needed to be exposed (in memory). The new commands let us generate an RSA key pair and sign or verify data without the key ever leaving the key store. There is, however, no way to specify the key size for generated keys; it is fixed at 2048 bits. There is no restriction for importing keys though, so shorter (or longer) keys can be used as well (confirmed for 512-4096 bit keys). Importing requires that keys are encoded using the PKCS#8 format. The sign operation doesn't do any automatic padding and therefore requires the input data to be the same size as the RSA key (it essentially performs raw RSA encryption using the private key). <code>VERIFY</code> takes the key name, signed data and signature value as input, and outputs the verification result. <code>GET_PUBKEY</code> works as expected -- it returns the public key in X.509 format.
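The raw, unpadded RSA behavior of <code>SIGN</code> and <code>VERIFY</code> can be approximated on a desktop JVM with plain JCE. This is only an illustration of the primitive, not the keystore code itself: 'signing' is modular exponentiation with the private key, and 'verification' applies the public key and compares the result with the original input:

```java
import java.math.BigInteger;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;
import javax.crypto.Cipher;

// Raw ('textbook') RSA with no padding, approximating what the new
// keystore SIGN command does: the input must be exactly as long as the
// modulus, and no padding or hashing is applied.
public class RawRsaDemo {
    public static boolean roundTrip() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        byte[] input = new byte[256];   // 2048 bits = 256 bytes
        Arrays.fill(input, (byte) 0x42);
        input[0] = 0;                   // keep the value below the modulus

        Cipher rsa = Cipher.getInstance("RSA/ECB/NoPadding");
        rsa.init(Cipher.ENCRYPT_MODE, kp.getPrivate());   // 'sign'
        byte[] sig = rsa.doFinal(input);

        rsa.init(Cipher.DECRYPT_MODE, kp.getPublic());    // 'verify'
        byte[] recovered = rsa.doFinal(sig);

        // Compare as numbers to sidestep leading-zero differences.
        return new BigInteger(1, input).equals(new BigInteger(1, recovered));
    }
}
```

Note that with no padding the caller is responsible for any message formatting; this is exactly why the keystore command requires the input size to match the key size.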
As mentioned above, the <code>keystore</code> daemon does access control based on UID, and pre-JB a process could use only a key it had created itself. The new <code>GRANT</code> / <code>UNGRANT</code> commands let the OS temporarily grant other processes access to system keys. The grants are not persisted, so they are lost on restart.</div><h3><span style="background-color: white;">Key store OpenSSL engine</span></h3><div>The next addition to Android's security system is the keystore-backed&nbsp;<a href="http://www.openssl.org/docs/crypto/engine.html" target="_blank">OpenSSL engine</a> (pluggable <span style="text-align: -webkit-left;">cryptographic module</span>). It only supports loading of and signing with RSA private keys, but that is usually enough to implement key-based authentication (such as SSL client authentication). This small engine makes it possible for native code that uses OpenSSL APIs to use private keys saved in the system key store without any code modifications. It also has a Java wrapper (<code>OpenSSLEngine</code>), which is used to implement the <code>KeyChain.getPrivateKey()</code> API. Thus all apps that acquire a private key reference via the <code>KeyChain</code> API get the benefit of using the new native implementation.<br /><h3><code>keymaster</code> module overview</h3></div>And finally, time for our feature presentation -- the <code style="background-color: white;">keymaster</code> module and its hardware-based implementation on Galaxy Nexus (and Nexus 7, but that currently has no relevant source code in AOSP, so we will focus on the GN). Jelly Bean introduces a new <code style="background-color: white;">libhardware</code> (aka <a href="http://en.wikipedia.org/wiki/Hardware_abstraction_layer" style="background-color: white;" target="_blank">HAL</a>) module, called <code>keymaster</code>. It defines structures and methods for generating keys and signing/verifying data.
The <code>keymaster</code> module is meant to decouple Android from the actual device security hardware, and a typical implementation would use a vendor-provided library to communicate with the crypto-enabled hardware. Jelly Bean comes with a default <code>softkeymaster</code> module that does all key operations in software only (using the ubiquitous OpenSSL). It is used on the emulator and probably will be included in devices that lack dedicated cryptographic hardware. The currently defined operations are listed below. Only RSA is supported at present.<br /><div><ul><li><code>generate_keypair</code></li><li><code>import_keypair</code></li><li><code>sign_data</code></li><li><code>verify_data</code></li><li><code>get_keypair_public</code></li><li><code>delete_keypair</code></li><li><code>delete_all</code></li></ul><div>If those look familiar, this is because they are pretty much the same as the newly added <code>keystore</code> commands listed in the previous section. All of the asymmetric key operations exposed by the <code>keystore</code> daemon are implemented by calling the system <code>keymaster</code> module. Thus if the <code>keymaster</code> HAL module is backed by a hardware cryptographic device, all upper level commands and APIs that use the <code>keystore</code> daemon interface automatically get to use hardware crypto.</div></div><h3>Galaxy Nexus <code>keymaster</code> implementation</h3><div>Let's look at how this is implemented on Galaxy Nexus, starting from the lowest level, the actual hardware. 
Galaxy Nexus is built using the Texas Instruments <a href="http://www.ti.com/general/docs/wtbu/wtbuproductcontent.tsp?templateId=6123&amp;navigationId=12843&amp;contentId=53243" target="_blank">OMAP4460</a> SoC, which integrates TI's <a href="http://www.ti.com/general/docs/wtbu/wtbugencontent.tsp?templateId=6123&amp;navigationId=12316&amp;contentId=4629&amp;DCMP=WTBU&amp;HQS=Other+EM+m-shield" target="_blank">M-Shield</a> (not to be confused with <a href="http://www.thales-esecurity.com/Products/Hardware%20Security%20Modules/nShield%20Solo.aspx" target="_blank">nShield</a>) mobile security technology. Among other things, M-Shield provides cryptographic acceleration, a secure random number generator and secure on-chip key storage. On top of that sits TI's Security Middleware Component (SMC), which is essentially a Trusted Execution Environment (TEE, Global Platform <a href="http://www.globalplatform.org/specificationsdevice.asp" target="_blank">specs</a> and <a href="http://www.globalplatform.org/documents/GlobalPlatform_TEE_White_Paper_Feb2011.pdf" target="_blank">white paper</a>) implementation. The actual software is by <a href="http://www.tl-mobility.com/">Trusted Logic Mobility</a>, marketed under the name <a href="http://www.tl-mobility.com/spip.php?rubrique6" target="_blank">Trusted Foundations</a>. Looking at this TI <a href="http://www.ti.com/lit/wp/swpy027/swpy027.pdf" target="_blank">white paper</a>, it looks like secure key storage was planned for ICS (Android 4.0), but apparently, it got pushed back to Jelly Bean (4.1). Cf. this statement from the white paper: 'Android 4.0 also introduces a new keychain API and underlying encrypted storage that are protected by M-Shield hardware security on the OMAP 4 platform.'. &nbsp;</div><br /><div>With all the buzzwords and abbreviations out of the way, let's say a few words about TEE.
As the name implies, TEE is defined as a logical execution environment, separate from the device's main OS, referred to as the REE (Rich Execution Environment). Its purpose is both to protect assets and execute trusted code. It is also required to be protected against certain physical attacks, although the level of protection is typically lower than that of a tamper-resistant module such as a Secure Element (SE). The TEE can host trusted applications (TAs) which utilize the TEE's services via the standardized internal APIs. Those fall under four categories:</div><div><ul><li>trusted storage</li><li>cryptographic operations</li><li>time-related</li><li>arithmetical (for dealing with big numbers)</li></ul><div>Applications running in the REE (the Android OS and apps) can only communicate with TAs via a low level Client API (essentially sending commands and receiving responses synchronously, where the protocol is defined by each TA). The Client API also lets the REE and TA applications share memory in a controlled manner for efficient data transfer.</div></div><br /><div>Finally, let's see how all this is tied together in the GN build of Jelly Bean. A generic PKCS#11 module (<code>libtf_crypto_sst.so</code>) uses the TEE Client API to communicate with a TA that implements hashing, key generation, encryption/decryption, signing/verification and random number generation. Since there doesn't seem to be an 'official' name for the TA on the Galaxy Nexus, and its commands map pretty much one-to-one to PKCS#11 interfaces, we will be calling it the 'token TA' from now on. The GN <code>keymaster</code> HAL module calls the PKCS#11 module to implement RSA key pair generation and import, as well as signing and verification.
This in turn is used by the <code>keystore</code> daemon to implement the corresponding commands.</div><br /><div>However, it turns out that the hardware-backed <code>keymaster</code> module is not in the latest GN build (<code>JRO03C</code> at the time of this writing. <i>Update</i>: according to this <a href="https://android.googlesource.com/device/samsung/tuna/+/b74801dc22bb4945ddf79b2e12e6328a862d68c3">commit message</a>, the reason for its being removed is that it has a power usage bug). Fortunately it is quite easy to build it and install it on the device (notice that the <i><code>keymaster</code></i> module, for whatever reason, is actually called <i><code>keystore.so</code></i>):</div><br /><div><pre>$ make -j8 keystore.tuna<br />$ adb push out/product/maguro/system/lib/hw/keystore.tuna.so /mnt/sdcard<br />$ adb shell<br />$ su<br /># mount -o remount,rw /system<br /># cp /mnt/sdcard/keystore.tuna.so /system/lib/hw<br /></pre></div><br /><div>Then all we need to do is reboot the device to have it load the new module (otherwise it will continue to use the software-only <code>keystore.default.so</code>). 
If we send a few <code>keystore</code> commands, we see the following output (maybe a bit too verbose for a production device), confirming that cryptographic operations are actually executed by the TEE:<br /><br /><pre>V/TEEKeyMaster( &nbsp;299): Opening subsession 0x414f2a88<br />V/TEEKeyMaster( &nbsp;299): public handle = 0x60011, private handle = 0x60021<br />V/TEEKeyMaster( &nbsp;299): Closing object handle 0x60021<br />V/TEEKeyMaster( &nbsp;299): Closing object handle 0x60011<br />V/TEEKeyMaster( &nbsp;299): Closing subsession 0x414f2a88: 0x0<br />I/keystore( &nbsp;299): uid: 10164 action: a -&gt; 1 state: 1 -&gt; 1 retry: 4<br />V/TEEKeyMaster( &nbsp;299): tee_sign_data(0x414ea008, 0xbea018fc, 36, 0xbea1195c, 256, 0xbea018c4, 0xbea018c8)<br />V/TEEKeyMaster( &nbsp;299): Opening subsession 0x414f2ab8<br />V/TEEKeyMaster( &nbsp;299): Found 1 object 0x60011 : class 0x2<br />V/TEEKeyMaster( &nbsp;299): Found 1 object 0x60021 : class 0x3<br />V/TEEKeyMaster( &nbsp;299): public handle = 0x60011, private handle = 0x60021<br />V/TEEKeyMaster( &nbsp;299): tee_sign_data(0x414ea008, 0xbea018fc, 36, 0xbea1195c, 256, 0xbea018c4, 0xbea018c8) <br />=&gt; 0x414f2838 size 256<br />V/TEEKeyMaster( &nbsp;299): Closing object handle 0x60021<br />V/TEEKeyMaster( &nbsp;299): Closing object handle 0x60011<br />V/TEEKeyMaster( &nbsp;299): Closing subsession 0x414f2ab8: 0x0<br />I/keystore( &nbsp;299): uid: 10164 action: n -&gt; 1 state: 1 -&gt; 1 retry: 4<br /></pre></div></div><br />This produces key files in the <code>keystore</code> daemon data directory, but as you can see in the listing below, they are not large enough to store 2048 bit RSA keys. They only store a key identifier, as returned by the underlying PKCS#11 module. Keys are loaded based on this ID, and signing and verification are performed within the token TA, without the keys being exported to the REE.
<br /><br /><pre># ls -l /data/misc/keystore/10164*<br />-rw------- keystore keystore &nbsp; &nbsp; &nbsp; 84 2012-07-12 14:15 10164_foobar<br />-rw------- keystore keystore &nbsp; &nbsp; &nbsp; 84 2012-07-12 14:15 10164_imported<br /></pre><br /><div>So where are the actual keys? It turns out they are in the <code>/data/smc/user.bin</code> file. The format is, of course, proprietary, but it would be a safe bet that it is encrypted with a key stored on the SoC (or at least somehow protected by a hardware key). This makes it possible to store a practically unlimited number of keys inside the TEE, without being limited by the storage space on the physical chip.<br /><h3><code>keymaster</code> usage and performance</h3></div><span style="background-color: white;">Currently installing a PKCS#12 packaged key and certificate via the public </span><code style="background-color: white;"><a href="http://developer.android.com/reference/android/security/KeyChain.html#createInstallIntent()">KeyChain</a></code><span style="background-color: white;"> API (or importing via Settings-&gt;Security-&gt;Install from storage) will import the private key into the token TA, and getting a private key object using </span><code style="background-color: white;">KeyChain.getPrivateKey()</code><span style="background-color: white;"> will return a reference to the stored key. Subsequent signature operations using this key object will be performed by the token TA and take advantage of the OMAP4 chip's cryptographic hardware. There are currently no public APIs or stock applications that use the generate key functionality, but if you want to generate a key protected by the token TA, you can call </span><code style="background-color: white;">android.security.KeyStore.generate()</code><span style="background-color: white;"> directly (via reflection or by duplicating the class in your project). 
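The reflection part is standard Java; the helper below (written for this post, not part of any platform API) shows the general pattern. On a device you would pass the hidden class and method names -- e.g. first invoking the static <code>KeyStore.getInstance()</code> and then calling <code>generate()</code> on the returned instance -- while here it is only a sketch of the mechanism:

```java
import java.lang.reflect.Method;

// Generic helper illustrating the reflection pattern used to call
// hidden Android APIs such as android.security.KeyStore methods.
public class HiddenApi {
    public static Object callStatic(String className, String methodName,
            Class<?>[] paramTypes, Object... args) throws Exception {
        Class<?> cls = Class.forName(className);
        Method m = cls.getMethod(methodName, paramTypes);
        // null receiver: invoking a static method
        return m.invoke(null, args);
    }
}
```

Instance methods are invoked the same way, passing the object returned by the static call as the receiver instead of null.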
This API can potentially be used for things like generating a CSR from a browser and other types of PKI enrollment.</span><br /><br /><div>The OMAP4 chip is advertised as having hardware accelerated cryptographic operations, so let's see how RSA key generation, signing and verification measure up against the default Android software implementations:</div><div><br /><table class="table"><caption>Average 2048-bit RSA operation speed on Galaxy Nexus</caption> <thead><tr> <th>Crypto Provider/Operation</th><th>Key generation</th> <th>Signing</th> <th>Verification</th> </tr></thead> <tbody><tr> <th>Bouncy Castle</th><td>2176.20 [ms]</td> <td>34.60 [ms]</td><td>1.90 [ms]</td> </tr><tr> <th>OpenSSL</th><td>2467.40 [ms]</td> <td>29.80 [ms]</td> <td>1.00 [ms]</td> </tr><tr> <th>TEE</th><td>3487.00 [ms]</td> <td>10.90 [ms]</td> <td>10.60 [ms]</td> </tr></tbody> </table><br />As you can see from the table above, Bouncy Castle and OpenSSL perform about the same, while the TEE takes more time to generate keys (most probably because it's using a hardware RNG, not a PRNG), but signing is about 3 times faster compared to the software implementations. Verification takes about the same time as signing, and is slower than software. It should be noted that this test is not exactly precise: calling the token TA via the <code>keystore</code> daemon causes a lot of TEE client API sessions to be opened and closed, which has its overhead. Getting more accurate times will require benchmarking using the Client API directly, but the order of the results should be the same. <br /><h3>Summary</h3>To sum things up: Jelly Bean finally has a standard hardware key storage and cryptographic operations API in the <code>keymaster</code> HAL module definition. The implementation for each device is hardware-dependent, and the currently available implementations use the TEE Client API on the Galaxy Nexus and Nexus 7 to take advantage of the TEE capabilities of the respective SoC (OMAP4 and Tegra 3).
The current interface and implementation only support generating/importing of RSA keys and signing/verification, but will probably be extended in the future with more key types and operations. It is integrated with the system credential storage (managed by the <code>keystore</code> daemon) and allows us to generate, import and use RSA keys protected by the device's TEE from Android applications.</div>Nikolay Elenkovhttps://plus.google.com/117221066931981967754noreply@blogger.com15tag:blogger.com,1999:blog-2873091912851440312.post-52970139080738771112012-07-06T21:52:00.000+09:002012-09-28T14:20:48.951+09:00Using app encryption in Jelly BeanThe latest Android version, 4.1 (Jelly Bean) was <a href="https://developers.google.com/events/io/" target="_blank">announced</a> last week at <a href="https://developers.google.com/events/io/" target="_blank">Google I/O</a>&nbsp;with a bunch of new <a href="http://developer.android.com/about/versions/jelly-bean.html" target="_blank">features and improvements</a>. One of the more interesting features is app encryption, but there haven't been any details besides the short announcement: 'From Jelly Bean and forward, paid apps in Google Play are encrypted with a device-specific key before they are delivered and stored on the device.' The lack of details is of course giving rise to guesses and speculations; some people even fear that they will have to repurchase their paid apps when they get a new device. In this article we will look at how app encryption is implemented in the OS, show how you can install encrypted apps without going through Google Play, and take a peek at how Google Play delivers encrypted apps.<br /><div class="separator" style="clear: both; text-align: center;"></div><br /><h3>OS support for encrypted apps</h3><div>The previous version of this article was based on Eclipse framework source packages and binary system images, and was missing a few pieces.
As Jelly Bean source has now been open sourced, the discussion below has been revised and is now based on the AOSP code (4.1.1_r1). If you are coming back you might want to re-read this post, focusing on the second part.<br /><br />Apps on Android can be installed in a few different ways:<br /><ul><li>via an application store (e.g., the <a href="https://play.google.com/store" target="_blank">Google Play Store</a>, aka Android Market)</li><li>directly on the phone by opening app files or email attachments (if the 'Unknown sources' option is enabled)</li><li>from a computer connected through USB by using the <code>adb install</code> SDK command</li></ul></div><div>The first two don't provide any options or particular insight into the underlying implementation, so let's explore the third one. Looking at the <code>adb</code> usage output, we see that the <code>install</code> command has gained a few new options in the latest SDK release:</div><br /><pre>adb install [-l] [-r] [-s] [--algo &lt;algorithm name&gt; --key &lt;hex-encoded key&gt; <br />--iv &lt;hex-encoded iv&gt;] &lt;file&gt;<br /></pre><br /><div>The <code>--algo</code>, <code>--key</code> and <code>--iv</code> parameters obviously have to do with encrypted apps, so before going into details let's first try to install an encrypted APK. Encrypting a file is quite easy to do using the <code>enc</code> <a href="http://openssl.org/" target="_blank">OpenSSL</a> commands, usually already installed on most Linux systems.
We'll use AES in CBC mode with a 128 bit key (a not very secure one, as you can see below), and specify an initialization vector (IV) which is the same as the key to make things simpler:<br /><br /><pre>$ openssl enc -aes-128-cbc -K 000102030405060708090A0B0C0D0E0F <br />-iv 000102030405060708090A0B0C0D0E0F -in my-app.apk -out my-app-enc.apk<br /></pre><br />Let's check if Android likes our newly encrypted app by trying to install it: <br /><br /><pre>$ adb install --algo 'AES/CBC/PKCS5Padding' --key 000102030405060708090A0B0C0D0E0F <br />--iv 000102030405060708090A0B0C0D0E0F my-app-enc.apk<br /> pkg: /data/local/tmp/my-app-enc.apk<br />Success<br /></pre><br />The 'Success' output seems promising, and sure enough the app's icon is in the system tray and it starts without errors. The actual apk file is copied in <code>/data/app</code> as usual, and comparing its hash value with our encrypted APK reveals that it's in fact a different file. The hash value is exactly the same as that of the original (unencrypted) APK though, so we can conclude that the APK is being decrypted at install time using the encryption parameters (algorithm, key and IV) we have provided. Let's look into how this is actually implemented.</div><br /><div>The <code>adb install</code> command simply calls the <code>pm</code> Android command line utility which lets us list, install and delete packages (apps). The component responsible for installing apps on Android has traditionally been the <code>PackageManagerService</code> and the <code>pm</code> is just a convenient frontend for it. Apps usually access the package service through the facade class <a href="https://developer.android.com/reference/android/content/pm/PackageManager.html" target="_blank"><code>PackageManager</code></a>.
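As an aside, the <code>openssl enc</code> step above can be reproduced with JCE, using the same 'AES/CBC/PKCS5Padding' transformation we pass to <code>adb install</code>. A minimal sketch (written for this post) using the deliberately weak key and IV from the example:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// JCE equivalent of the openssl enc command: AES-128 in CBC mode with
// PKCS#5/#7 padding, key and IV supplied as hex strings.
public class ApkCrypt {
    static byte[] fromHex(String s) {
        byte[] b = new byte[s.length() / 2];
        for (int i = 0; i < b.length; i++) {
            b[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        }
        return b;
    }

    static byte[] crypt(int mode, byte[] data, String hexKey, String hexIv)
            throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(mode, new SecretKeySpec(fromHex(hexKey), "AES"),
                new IvParameterSpec(fromHex(hexIv)));
        return c.doFinal(data);
    }
}
```

Since <code>openssl enc -aes-128-cbc</code> applies the same PKCS#7 block padding, both tools produce identical ciphertext for the same key and IV.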
Browsing through the <code>PackageManager</code> code and checking for encryption related methods we find this: <br /><br /><pre>public abstract void installPackageWithVerification(Uri packageURI,<br /> IPackageInstallObserver observer, int flags, String installerPackageName,<br /> Uri verificationURI, ManifestDigest manifestDigest,<br /> ContainerEncryptionParams encryptionParams);<br /><br /></pre><br />The <code>ContainerEncryptionParams</code> class looks especially promising, so let's peek inside: <br /><br /><pre>public class ContainerEncryptionParams implements Parcelable {<br /> private final String mEncryptionAlgorithm;<br /> private final IvParameterSpec mEncryptionSpec;<br /> private final SecretKey mEncryptionKey;<br /> private final String mMacAlgorithm;<br /> private final AlgorithmParameterSpec mMacSpec;<br /> private final SecretKey mMacKey;<br /> private final byte[] mMacTag;<br /> private final long mAuthenticatedDataStart;<br /> private final long mEncryptedDataStart;<br />}<br /></pre><br />The <code>adb install</code> parameters we used above neatly correspond to the first three fields of the class. In addition to that, the class also stores <a href="http://en.wikipedia.org/wiki/Message_authentication_code">MAC</a> related parameters, so it's safe to assume that Android can now check the integrity of application binaries. Unfortunately, the <code>pm</code> command doesn't have any MAC-related parameters (it does actually, but for some reason those are disabled in the current build), so to try out the MAC support we need to call the <code>installPackageWithVerification</code> method directly. <br /><br />The method is hidden from SDK applications, so the only way to call it from an app is to use reflection. It turns out that most of its parameter classes (<code>IPackageInstallObserver</code>, <code>ManifestDigest</code> and <code>ContainerEncryptionParams</code>) are also hidden, but that's only a minor snag.
Android pre-loads framework classes, so even if your app bundles a framework class, the system copy will always be used at runtime. This means that all we have to do to get a handle for the <code>installPackageWithVerification</code> method is add the required classes to the <code>android.content.pm</code> package in our app. Once we have a method handle, we just need to instantiate the <code>ContainerEncryptionParams</code> class, providing all the encryption and MAC related parameters. One thing to note is that since our entire file is encrypted, and the MAC is calculated over all of its contents (see below), we specify 0 for both the encrypted and authenticated data start, and the file size as the data end (see <a href="https://github.com/nelenkov/jb-app-encryption" target="_blank">sample code</a>). To calculate the MAC value (tag) we once again use OpenSSL:<br /><br /><pre>$ openssl dgst -hmac 'hmac_key_1' -sha1 -hex my-app-enc.apk<br />HMAC-SHA1(my-app-enc.apk)= 0dc53c04d33658ce554ade37de8013b2cff0a6a5<br /></pre><br />Note that the <code>dgst</code> command doesn't support specifying the HMAC key using hexadecimal or Base64, so you are limited to ASCII characters. This may not be a good idea for production use, so consider using a real key and calculating the MAC in some other way (using JCE, etc.). <br /><br />Our app is mostly ready now, but installing apps requires the <code>INSTALL_PACKAGES</code> permission, which is defined with protection level <code>signatureOrSystem</code>. Thus it is granted only to apps signed with the system (ROM) key, or apps installed in the <code>/system</code> partition.
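Computing the tag with JCE, as suggested above, removes the ASCII-only key limitation of <code>openssl dgst</code>; a minimal sketch:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Compute the HMAC-SHA1 tag over the encrypted APK bytes with JCE,
// which accepts arbitrary (e.g., random binary) key material.
public class MacTag {
    public static byte[] hmacSha1(byte[] key, byte[] data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        return mac.doFinal(data);   // 20-byte tag
    }
}
```

Feeding it the same ASCII key and file bytes produces the same 20-byte tag as the <code>openssl dgst -hmac</code> command, so either tool can be used to fill in the <code>mMacTag</code> field.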
Building a Jelly Bean ROM is an interesting exercise, but for now, we'll simply copy our app to <code>/system/app</code> in order to get the necessary permission to install packages (on the emulator or a rooted device). Once this is done, we can install an encrypted app via the <code>PackageManager</code> and Android will both decrypt the APK and verify that the package hasn't been tampered with by comparing the specified MAC tag with the value calculated from the actual file contents. You can test that using the sample application by slightly changing the encryption and MAC parameters. This should result in an install error.<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-EAv_Gti0Iiw/UAUbdRp06pI/AAAAAAAAHpM/TeowMz0eBrs/s1600/jb-app-encryption.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://4.bp.blogspot.com/-EAv_Gti0Iiw/UAUbdRp06pI/AAAAAAAAHpM/TeowMz0eBrs/s400/jb-app-encryption.png" width="225" /></a></div><br /><br />The <code>android.content.pm</code> package has some more classes of interest, such as <code>MacAuthenticatedInputStream</code> and <code>ManifestDigest</code>, but the actual APK encryption and MAC verification is done by the <code>DefaultContainerService$ApkContainer</code>, part of the <code>DefaultContainerService</code> (aka, 'Package Access Helper'). </div><br /><h3>Forward locking</h3><div>'Forward locking' popped up around the time ringtones, wallpapers and other digital 'goods' started selling on mobile (feature) phones. The name comes from the intention -- stop users from forwarding files they have bought to their friends and family. The main digital content on Android was originally apps, and as paid apps gained popularity, sharing (and later re-selling) them was becoming a problem.
Application packages (APKs) have been traditionally world readable on Android, which made extracting apps from even a production device relatively easy. While world-readable app files might sound like a bad idea, it's rooted in Android's open and extensible nature -- third party launchers, widget containers and utility apps can easily inspect APKs to extract icons, widget definitions, available intents, etc. In an attempt to lock down paid apps without losing any of the OS flexibility, Android introduced forward locking (aka, 'copy protection'). The idea was to split app packages into two parts -- a world-readable part, containing resources and the manifest (in <code>/data/app</code>), and a package readable only by the system user, containing executable code (in <code>/data/app-private</code>). The code package was protected by file system permissions, and while this made it inaccessible to users on most consumer devices, one only needed to gain root access to be able to extract it. This approach was quickly deprecated, and online <a href="http://developer.android.com/guide/google/play/licensing/index.html" target="_blank">Android Licensing</a> (LVL) was introduced as a replacement. This, however, shifted app protection implementation from the OS to app developers, and has had mixed results.</div><br /><div>In Jelly Bean, the forward locking implementation has been re-designed and now offers the ability to store APKs in an encrypted container that requires a device-specific key to be mounted at runtime. Let's look into the implementation in a bit more detail. <br /><h3>Jelly Bean implementation</h3>While encrypted app containers as a forward locking mechanism are new to JB, the encrypted container idea has been around since Froyo. At the time (May 2010) most Android devices came with limited internal storage and a fairly large (a few GB) external storage, usually in the form of a micro SD card.
To make file sharing easier, external storage was formatted using the FAT filesystem, which lacks file permissions. As a result, files on the SD card could be read and written by anyone (any app). To prevent users from simply copying paid apps off the SD card, Froyo created an encrypted filesystem image file and stored the APK in it when you opted to move the app to external storage. The image was then mounted at runtime using Linux's <code>device-mapper</code> and the system would load the app files from the newly created mount point (one per app). Building on this, JB makes the container EXT4, which allows for permissions. A typical forward locked app's mount point now looks like this:<br /><br /><pre>shell@android:/mnt/asec/org.mypackage-1 # ls -l<br />ls -l<br />drwxr-xr-x system system 2012-07-16 15:07 lib<br />drwx------ root root 1970-01-01 09:00 lost+found<br />-rw-r----- system u0_a96 1319057 2012-07-16 15:07 pkg.apk<br />-rw-r--r-- system system 526091 2012-07-16 15:07 res.zip<br /></pre><br />Here the <code>res.zip</code> holds app resources and is world-readable, while the <code>pkg.apk</code> file which holds the full APK is only readable by the system and the app's dedicated user (<code>u0_a96</code>). The actual app containers are stored in <code>/data/app-asec</code> with filenames in the form <code>package.name-1.asec</code>. ASEC container management (creating/deleting and mounting/unmounting) is implemented in the system volume daemon (<code>vold</code>) and framework services talk to it by sending commands via a local socket.
We can use the <code>vdc</code> utility to manage forward locked apps from the shell: <br /><br /><pre># vdc asec list<br />vdc asec list<br />111 0 com.mypackage-1<br />111 0 org.foopackage-1<br />200 0 asec operation succeeded<br /><br /># vdc asec unmount org.foopackage-1<br />200 0 asec operation succeeded<br /><br /># vdc asec mount org.foopackage-1 000102030405060708090a0b0c0d0e0f 1000<br />org.foopackage-1 000102030405060708090a0b0c0d0e0f 1000 <br />200 0 asec operation succeeded<br /><br /># vdc asec path org.foopackage-1<br />vdc asec path org.foopackage-1<br />211 0 /mnt/asec/org.foopackage-1<br /></pre><br />All commands take a namespace ID (based on the package name in practice) as a parameter, and for the <code>mount</code> command you need to specify the encryption key and the mount point's owner UID (<code>1000</code> is <code>system</code>) as well. That about covers how apps are stored and used; what's left is to find out the actual encryption algorithm and the key. Both are unchanged from the original Froyo apps-to-SD implementation: Twofish with a 128-bit key stored in <code>/data/misc/systemkeys</code>:<br /><br /><pre>shell@android:/data/misc/systemkeys # ls<br />ls<br />AppsOnSD.sks<br />shell@android:/data/misc/systemkeys # od -t x1 AppsOnSD.sks<br />od -t x1 AppsOnSD.sks<br />0000000 00 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f<br />0000020<br /></pre><br />Forward locking an application is triggered by specifying the <code>-l</code> option of the <code>pm install</code> command or specifying the <code>INSTALL_FORWARD_LOCK</code> flag to <code>PackageManager</code>'s <code>installPackage*</code> methods (see <a href="https://github.com/nelenkov/jb-app-encryption" target="_blank">sample app</a>).<br /><br /></div><h3>Encrypted apps and Google Play</h3><div>All of this is quite interesting, but as we have seen, installing apps, encrypted or otherwise, requires system permissions, so it can only be used by custom carrier Android firmware and
probably the next version of your friendly CyanogenMod ROM. Currently, the only app that takes advantage of the new encrypted apps and forward-locking infrastructure is the Play Store (who comes up with those names, really?) Android client. Describing exactly how the Google Play client works would require detailed knowledge of the underlying protocol (which is always a moving target), but a casual look at the newest Android client does reveal a few useful pieces of information. Google Play servers send quite a bit of metadata about the app you are about to download and install, such as the download URL, APK file size, version code, and refund window. New among those are the <code>EncryptionParams</code>, which look very similar to the <code>ContainerEncryptionParams</code> shown above:</div><br /><pre>class AndroidAppDelivery$EncryptionParams {<br /> private int cachedSize;<br /> private String encryptionKey;<br /> private String hmacKey;<br /> private int version;<br />}<br /></pre><br />The encryption algorithm and the HMAC algorithm are always set to 'AES/CBC/PKCS5Padding' and 'HMACSHA1', respectively. The IV and the MAC tag are bundled with the encrypted APK in a single blob. Once all parameters are read and verified, they are essentially converted to a <code>ContainerEncryptionParams</code> instance, and the app is installed using the familiar <code>PackageManager.installPackageWithVerification()</code> method. As might be expected, the <code>INSTALL_FORWARD_LOCK</code> flag is set when installing a paid app. The OS takes it from here, and the process is the same as described in the previous section: free apps are decrypted and the APKs end up in <code>/data/app</code>, while for paid apps an encrypted container in <code>/data/app-asec</code> is created and mounted under <code>/mnt/asec/package.name</code>.<br /><br />So what does all this mean in practice? 
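In practice, the client-side step described above boils down to an encrypt-then-MAC check: the HMAC tag is verified over the encrypted blob before any decryption or installation is attempted. The following is a rough, stand-alone Java sketch of that check, not the actual Play client code: the <code>ApkVerifier</code> name and the ciphertext-followed-by-tag blob layout are assumptions made for illustration; only the <code>HmacSHA1</code> and <code>AES/CBC/PKCS5Padding</code> algorithm names come from the observed parameters.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class ApkVerifier {

    // HMAC-SHA1 produces a 20-byte tag
    static final int TAG_LEN = 20;

    // Encrypt-then-MAC check: verify the trailing tag over the ciphertext
    // before touching the payload. The ciphertext||tag layout is an
    // assumption; the real blob also carries the IV.
    static byte[] verify(byte[] blob, byte[] hmacKey) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(hmacKey, "HmacSHA1"));
        mac.update(blob, 0, blob.length - TAG_LEN);
        byte[] expected = mac.doFinal();
        byte[] tag = Arrays.copyOfRange(blob, blob.length - TAG_LEN, blob.length);
        if (!MessageDigest.isEqual(expected, tag)) {
            throw new SecurityException("HMAC check failed, refusing to install");
        }
        // On success, the ciphertext would next be decrypted with
        // "AES/CBC/PKCS5Padding" using the bundled encryption key.
        return Arrays.copyOf(blob, blob.length - TAG_LEN);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "0123456789abcdef".getBytes(StandardCharsets.UTF_8);
        byte[] payload = "fake-encrypted-apk".getBytes(StandardCharsets.UTF_8);
        // Build a well-formed blob: ciphertext followed by its tag
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        byte[] tag = mac.doFinal(payload);
        byte[] blob = new byte[payload.length + TAG_LEN];
        System.arraycopy(payload, 0, blob, 0, payload.length);
        System.arraycopy(tag, 0, blob, payload.length, TAG_LEN);
        System.out.println(Arrays.equals(verify(blob, key), payload)); // prints "true"
    }
}
```

Note that the tag comparison uses <code>MessageDigest.isEqual()</code> rather than <code>Arrays.equals()</code>, so a flipped bit anywhere in the blob fails the check without leaking timing information.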
Google Play can now claim that paid apps are always transferred and stored in encrypted form, and so can your own app distribution channel if you decide to implement it using the app encryption facilities Jelly Bean provides. The apps have to be made available to the OS at some point, though, so if you have root access to a running Android device, extracting a forward-locked APK or the container encryption key is still possible, but that is true for all software-based solutions.<br /><br /><b>Update</b>: While forward locking makes it harder to copy paid apps, it seems its integration with other services still has some issues. As reported by multiple developers and users <a href="http://code.google.com/p/android/issues/detail?id=34880" target="_blank">here</a>, it currently breaks apps that register their own account manager implementation, as well as most paid widgets. This is due to some services being initialized before <code>/mnt/asec</code> is mounted, and thus being unable to access it. A fix is said to be available (no Gerrit link though) and should be released in a Jelly Bean maintenance release.<br /><br /><b>Update 2</b>: It seems that the latest version of the Google Play client, 3.7.15, installs paid apps with widgets (and possibly ones that manage accounts) in <code>/data/app</code> as a (temporary?) workaround. The downloaded APK is still encrypted for transfer. For example: <br /><br /><pre>shell@android:/data/app # ls -l|grep -i beautiful<br />ls -l|grep -i beautiful<br />-rw-r--r-- system system 6046274 2012-08-06 10:45 com.levelup.beautifulwidgets-1.apk<br /></pre><br />That's about it for now. Hopefully, more detailed information about both the OS app encryption implementation and its usage by Google's Play Store will be available from official sources soon. 
Until then, get the <a href="https://github.com/nelenkov/jb-app-encryption" target="_blank">sample project</a>, fire up OpenSSL and give it a try.