On Friday, 21 October 2016, multiple waves of distributed denial of service (DDoS) attacks shut down major internet services across the United States and Europe. The attacking botnet army consisted mainly of printers, IP cameras, residential gateways, and baby monitors infected with Mirai malware. Mirai targets IoT devices, and though each individual IoT device was not very powerful, taken together these devices did significant damage. For many mainstream internet users, the need for strong IoT security became painfully obvious.

Many IoT devices have limited capabilities. They might run on batteries. They might have limited storage or computational capabilities. They may not support a full HTTP stack or may not always be connected to a communications channel.

With constrained devices and networks, it is a challenge to implement RESTful-style APIs within IoT devices. Add to this the urgent need for effective IoT security standards, and it becomes imperative to build on battle-hardened, interoperable standards such as OAuth2 that can operate successfully across a broad range of IoT architectures and scenarios.

The Basic OAuth2 Authorization Grant Flow

The authorization grant flow, described in Mobile API Security, is the most common starting point for OAuth implementations on the web:

Authorization is a two-step process. In the first step, the client delegates to a user agent. For web and mobile apps, the user agent is usually a browser on the client device; since many IoT devices lack a proper user interface, OAuth may delegate to a user agent off the device. Through the user agent, the resource owner approves the client and presents credentials to the authorization server, which returns an authorization code that is redirected back to the client. In the second step, the client authenticates itself and presents the authorization code to the authorization server which, if satisfied, returns an access token, commonly a JSON Web Token (JWT).

Whoever possesses the access token can use it to access the authorized resources, so it is important that this token be kept secure. For that reason, OAuth2 mandates TLS to keep the communications channel secure.
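The two steps above can be sketched with a toy, in-memory authorization server. All the names, secrets, and endpoints here are illustrative stand-ins, not a real OAuth2 implementation; the point is the shape of the exchange: an approval yields a single-use code, and the code plus client authentication yields a signed access token (a minimal HS256-style JWT).

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

AUTH_CODES = {}                          # issued code -> approving resource owner
CLIENT_SECRET = b"demo-client-secret"    # hypothetical registered client secret
SIGNING_KEY = b"demo-server-signing-key" # authorization server's token-signing key

def b64url(data: bytes) -> str:
    """Base64url without padding, as used in JWTs."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def authorize(resource_owner: str) -> str:
    """Step 1: after the resource owner approves via the user agent,
    the server issues a short-lived, single-use authorization code."""
    code = secrets.token_urlsafe(16)
    AUTH_CODES[code] = resource_owner
    return code

def token(code: str, client_secret: bytes) -> str:
    """Step 2: the client authenticates itself and trades the code
    for a signed access token."""
    if client_secret != CLIENT_SECRET or code not in AUTH_CODES:
        raise PermissionError("client or code rejected")
    owner = AUTH_CODES.pop(code)  # pop: the code cannot be replayed
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    claims = b64url(json.dumps({"sub": owner,
                                "exp": int(time.time()) + 600}).encode())
    sig = b64url(hmac.new(SIGNING_KEY, f"{header}.{claims}".encode(),
                          hashlib.sha256).digest())
    return f"{header}.{claims}.{sig}"

code = authorize("alice")
access_token = token(code, CLIENT_SECRET)
```

Note that the code is popped on use: replaying a stolen authorization code fails, which is one reason the flow splits approval and token issuance into two steps.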

Remotely Controlled Door Lock

Alice and Bob have invited Alice's parents over for dinner, but are stuck in traffic and cannot arrive on time, while Alice's parents are taking the subway and will arrive punctually. Alice calls her parents and offers to let them in remotely, so they can make themselves comfortable while waiting. She sets temporary permissions that allow them to open the door and shut down the alarm, valid only for the evening, since she would rather her parents not be able to enter the house whenever they please.

We will add a few assumptions. First, we will assume that Alice's mother will use her phone as a client to unlock the door. We will also assume that the door lock, as resource server, is not capable of using TLS to secure its communications. Since there is a well-known relationship between the authorization server and the door lock, we will however assume that the door lock was pre-provisioned to secure its communications with the authorization server only. What cannot be presumed is that communications between Alice's mother's phone and the door lock are secure.

In this case, the path between client and resource server is not secure, so any access tokens sent across this channel could be stolen. A modification to the normal OAuth2 flow must be made to dynamically secure the channel by alternate means.

The flow begins as usual: Alice's mother, using the door lock app on her phone, presents her credentials to the authorization server. The authorization server, applying Alice's temporary permissions, returns an authorization code to her mother's phone.

To secure the phone to door lock channel, the client app generates a random key value which will be used later to demonstrate proof of possession of the access token. The app sends its client id, secret, authorization code, and possession key back to the authorization server.

The authorization server returns an access token to the client app and also sends the possession key to the door lock using its pre-provisioned secure channel. As a result, Alice's mother's client app and the door lock share a secret, the possession key, which they can use to sign and/or encrypt all subsequent communication between app and door lock. In this way, the access token cannot be used without knowing the shared key. Depending on policy, after the door is unlocked, the possession key could be discarded to prevent repeated use of the access token.
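Once both sides hold the possession key, proof of possession can be as simple as an HMAC over each request. This is a minimal sketch assuming the key has already been delivered to the door lock over its pre-provisioned channel; the token string and "unlock" command are illustrative placeholders.

```python
import hashlib
import hmac
import secrets

# Generated by the client app; shared with the door lock via the
# authorization server's pre-provisioned channel.
possession_key = secrets.token_bytes(32)

def sign_request(key: bytes, access_token: str, command: str) -> str:
    """Client side: bind the request (token + command) to the shared
    possession key with an HMAC tag."""
    msg = f"{access_token}|{command}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_request(key: bytes, access_token: str, command: str,
                   tag: str) -> bool:
    """Door lock side: accept only requests carrying a valid tag, so a
    stolen access token alone is useless without the possession key."""
    expected = sign_request(key, access_token, command)
    return hmac.compare_digest(expected, tag)

tag = sign_request(possession_key, "demo-access-token", "unlock")
```

An eavesdropper on the phone-to-lock channel sees the token and the tag, but without the possession key cannot forge a tag for a different command, which is exactly the property the modified flow is after.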

Not All Secrets Considered Equal

In requesting the access token, the door lock app sends the auth code with two secrets, the client secret and the possession key. The possession key is dynamically generated, can be held in memory, and is unique and short-lived, so it can be considered quite secure. The client secret is a static value within the app, and cannot be considered very secure. If an attacker reverse engineers this value, they can construct a fake app which looks genuine and, holding a valid client secret, appears perfectly authentic to the OAuth2 authorization grant flow. There are plenty of these apps causing mischief in the app stores, and if Alice's mother has been tricked into using the fake app, it can generate a possession key, grab the access token, and secure the door lock channel for its own malicious use.

A more secure solution would remove the client secret from the app and move it to a remote attestation service. The service can dynamically verify the app's authenticity, using a random set of high-coverage challenges to detect any app replacement, tampering, or signature replay. If the responses satisfy the challenges, the attestation service returns a time-limited client integrity token signed by the client secret that was removed from the app. The integrity token can be verified by the resource server, which also knows the client secret. The attestation service and the resource server share the client secret, but it is no longer stored in the client. An example of this type of service is Approov (full disclosure, from my company, CriticalBlue).

In this flow, an authorization mediator checks the attestation service's client integrity token on behalf of the authorization service. We will expand on this pattern later.

A dynamic attestation service, such as Approov, provides extremely reliable positive authentication of untampered apps. Being dynamic, this service authenticates the app during both phases of the authorization grant flow as well as frequently during authorized operation. As a result, every authorized API call is made by an authentic app for an authorized user.
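The integrity token pattern can be sketched as follows. This is a hedged illustration of the general idea, not Approov's actual protocol: the attestation service signs a short-lived token with the client secret it shares with the resource server, and the resource server verifies signature and expiry with that same secret. All names and the challenge check are stand-ins.

```python
import base64
import hashlib
import hmac
import json
import time

# Shared by the attestation service and the resource server;
# crucially, never shipped inside the app itself.
CLIENT_SECRET = b"shared-client-secret"

def issue_integrity_token(app_passed_challenges: bool, ttl: int = 300) -> str:
    """Attestation service: sign a short-lived integrity token only for
    apps that answered the integrity challenges correctly."""
    if not app_passed_challenges:
        raise PermissionError("attestation failed")
    claims = json.dumps({"attested": True,
                         "exp": int(time.time()) + ttl}).encode()
    body = base64.urlsafe_b64encode(claims).rstrip(b"=").decode()
    sig = hmac.new(CLIENT_SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def verify_integrity_token(tok: str) -> bool:
    """Resource server: check signature and expiry with the same secret."""
    body, sig = tok.rsplit(".", 1)
    expect = hmac.new(CLIENT_SECRET, body.encode(),
                      hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expect, sig):
        return False
    pad = "=" * (-len(body) % 4)
    claims = json.loads(base64.urlsafe_b64decode(body + pad))
    return claims["attested"] and claims["exp"] > time.time()

tok = issue_integrity_token(True)
```

A fake app that has extracted the client ID can still not mint this token, because the secret that signs it no longer exists on the client at all.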

Personal Medical Device

In the second scenario, Alice has a personal medical device to monitor certain health parameters. The device is low power and communicates with Alice's phone using older Bluetooth with very limited security.

The main difference between this scenario and the previous one is that the medical device, the resource server in this case, cannot communicate with the authorization server. Though they cannot communicate, there is still a well-known relationship between the authorization server and the medical device, so we can pre-provision one or more public-private key pairs between them. We still cannot presume secure communications between Alice's phone and her medical device.

The flow starts the same as in the door lock scenario. The client app authenticates with the authorization server, and the authorization server returns an authorization code. The client app generates a possession key and sends the client id, secret, authorization code, and possession key back to the authorization server.

This time, however, the authorization server uses one pre-provisioned key pair to encrypt the possession key and adds it to the access token claims as a confirmation claim. The access token is signed by the same or a second key pair. When the medical device receives the access token, it verifies the signature, checks the claims, and decrypts the key. If everything checks out, the device uses the now shared possession key to secure communication with the phone app.
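Carrying the possession key inside the token as a confirmation ("cnf") claim might look like the sketch below. The text assumes pre-provisioned public-private key pairs; the Python standard library has no public-key primitives, so this toy substitutes a single pre-shared symmetric key for both the signing and the key wrapping, and uses a hash-derived keystream as a stand-in cipher. Claim names and key sizes are illustrative only.

```python
import base64
import hashlib
import hmac
import json
import secrets

# Stand-in for the pre-provisioned key pair shared between the
# authorization server and the medical device.
PROVISIONED_KEY = b"pre-provisioned-device-key"

def wrap(key: bytes, data: bytes, nonce: bytes) -> bytes:
    """Toy key wrap: XOR against a hash-derived keystream. XOR is its
    own inverse, so the same function unwraps."""
    stream = hashlib.sha256(key + nonce).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def issue_token(possession_key: bytes) -> str:
    """Authorization server: wrap the possession key into the cnf
    claim, then sign the whole token."""
    nonce = secrets.token_bytes(16)
    claims = {"scope": "read-vitals",
              "cnf": {"nonce": nonce.hex(),
                      "wrapped_key": wrap(PROVISIONED_KEY,
                                          possession_key, nonce).hex()}}
    body = base64.urlsafe_b64encode(
        json.dumps(claims).encode()).rstrip(b"=").decode()
    sig = hmac.new(PROVISIONED_KEY, body.encode(),
                   hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def device_accept(tok: str) -> bytes:
    """Medical device: verify the signature offline, then unwrap and
    return the shared possession key."""
    body, sig = tok.rsplit(".", 1)
    expect = hmac.new(PROVISIONED_KEY, body.encode(),
                      hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expect, sig):
        raise PermissionError("bad token signature")
    pad = "=" * (-len(body) % 4)
    cnf = json.loads(base64.urlsafe_b64decode(body + pad))["cnf"]
    return wrap(PROVISIONED_KEY,
                bytes.fromhex(cnf["wrapped_key"]),
                bytes.fromhex(cnf["nonce"]))

possession_key = secrets.token_bytes(32)
tok = issue_token(possession_key)
shared = device_accept(tok)
```

The key point the sketch preserves is that the device never talks to the authorization server: everything it needs, signature and wrapped key, arrives inside the token via the phone.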

As with the previous scenario, the authorization code grant still relies on a less than secure static secret within the phone app, and just as before, this should be hardened using the same attestation service upgrade.

Conclusion

With such a wide range of usage scenarios with a broad variation in device and network capabilities, it is important to build a consistent set of security protocols which can be mixed and matched as needed. OAuth2 authorization flows are a well established place to start. Many vendors are extending from OAuth2 to reach their security objectives, and the IETF ACE working group is developing open standards covering a broad range of IoT use cases.

Improving security by hardening the weakest links is always important, so strengthening client authenticity by replacing static secrets with attestation services is useful in almost all IoT use cases.

The Mirai attack from October 2016 sent us a stern warning. Establishing consensus on a standard set of security profiles which cover the broad range of IoT use cases is a key next step towards strengthening interoperable security between IoT resources.