TL;DR: If a public client respects the RFC 7009 spec and does not authenticate the revocation request, then Doorkeeper does not actually revoke the access token. Upgrade to versions 4.4.0, 5.0.0.rc2 or later

The Problem

Any OAuth application that uses public/non-confidential authentication when interacting with Doorkeeper is unable to revoke its tokens when calling the revocation endpoint.

A bug in the token revocation API causes it to attempt to authenticate the public OAuth client as if it were a confidential app. Because that authentication fails, the token is never revoked.

The impact is that the access or refresh token is not revoked, leaking access to protected resources for the remainder of that token's lifetime.

If Doorkeeper is used to facilitate public OAuth apps that leverage the token revocation functionality, upgrade to the patched versions immediately.

Vulnerable Versions

4.2.0 – 4.3.2
5.0.0.rc1

Impact

All public, non-confidential clients respecting the RFC will not have their access or refresh tokens revoked when sending a valid, well-formed & unauthenticated revocation request to Doorkeeper.

Any such clients relying on Doorkeeper's revocation functionality are susceptible to a session replay attack, even after the victim terminates their session via a revocation/log out.

Now obviously the attacker must already be able to obtain the access token, but without revocation the victim has a false sense of security & cannot limit damage to their account. Additionally, clients relying on the Doorkeeper revocation endpoint to revoke all other issued tokens on password change, password reset, etc. would also be impacted by this problem.

The Fix

Doorkeeper needed a structural update so it can record whether each OAuth client application is intended to be public or confidential.

With that now available, the token revocation API knows to either enforce authentication (as required for confidential clients) or accept just the client ID (as is the case for a public client).
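In plain Ruby, that decision can be sketched like this (Application here is an illustrative stand-in struct, not Doorkeeper's actual model; the patched gem records the public/confidential distinction on each application):

```ruby
# Illustrative stand-in for an OAuth client application record.
# `confidential` mirrors the public/confidential distinction the patch adds.
Application = Struct.new(:uid, :secret, :confidential) do
  # Confidential clients must authenticate with their client secret;
  # public clients only need to identify themselves via their client_id.
  def authorized_to_revoke?(params)
    if confidential
      params["client_id"] == uid && params["client_secret"] == secret
    else
      params["client_id"] == uid
    end
  end
end

public_app       = Application.new("pub-123", nil, false)
confidential_app = Application.new("conf-456", "s3cr3t", true)

public_app.authorized_to_revoke?("client_id" => "pub-123")        # => true
confidential_app.authorized_to_revoke?("client_id" => "conf-456") # => false (no secret)
```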

Attack Details

Stored XSS in the OAuth client's name causes users who are prompted for consent via the "implicit" grant type to execute the XSS payload.

The XSS attack could gain access to the user's active session, resulting in account compromise.

Any user is susceptible if they click the authorization link for the malicious OAuth client. Because of how the links work, a user cannot tell if a link is malicious or not without first visiting the page with the XSS payload.

For this attack to be dangerous in the wild, the software using Doorkeeper must allow regular users to create or edit OAuth client applications.

If 3rd parties are allowed to create OAuth clients in the app using Doorkeeper, upgrade to the patched versions immediately.

Additionally, there is a stored XSS in the native_redirect_uri form element.

There comes a time when, regardless of your unit testing, you have errors in production. But alas, the exception isn't in your controller itself but in the view!

There has been a quick and dirty way of getting the full stack trace in console for a while but it fails to hit the mark in Rails 4 and Rails 5.

Below is the full copy/paste snippet that will let you try to render a view in the console. It's very helpful with jbuilder JSON APIs, which can have complex logic and multiple levels of partial inheritance. Any error or exception will generate a full stack trace for you to debug with.
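A minimal sketch of the idea for Rails 5, run inside `rails console` (the template path, model, and assigns are examples; adapt them to your app):

```ruby
# Rails 5 ships ActionController::Renderer, which can render views
# outside of a request/response cycle. Any exception raised by the view
# (or a partial it renders) surfaces in the console with a full stack trace.
output = ApplicationController.renderer.render(
  template: "api/invoices/show",
  formats:  [:json],
  assigns:  { invoice: Invoice.first }  # becomes @invoice inside the view
)
puts output
```

On Rails 4 there is no renderer object, so you have to instantiate a controller and use render_to_string instead.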

The Problem

Devise-Two-Factor implements RFC 6238 which defines a way to provide what is commonly known as Two-Factor Authentication (2FA).

RFC 6238 states that once a valid OTP is successfully proven to the server, the server must reject all subsequent validation attempts of that OTP for a given timestep. In other words, as the name implies, the OTP can only be used once.

The Impact

The attacker has a window of opportunity of one timestep (30 seconds by default).

The attacker must shoulder-surf or MiTM the OTP code.

The attacker must already know the victim's password.

Satisfying these conditions will defeat two-factor authentication for the victim, in that one authentication scenario.

However, the man-in-the-middle (MiTM) scenario is moot since, if an attacker can MiTM the connection, they can just obtain the granted session secret from the response instead.

Solution

Because the server (aka the verifier) must now "remember" which tokens are valid and which are not, a storage mechanism must be used. Caches are a palatable solution, but permanent storage is preferred.

In the case of Devise-Two-Factor, the verifier simply writes down the last successful OTP code. When a prover supplies an OTP, the verifier does two things:

Checks if the given code is valid for the given timestep

Checks if the given code is not the same as the previous successful OTP code

If both conditions pass then the user (prover) is valid and should be issued an authentication token.

The supplied code will never* match the previously stored value unless it's two attempts in the same timestep (aka our replay attack).

*There is a tiny chance that a future timestep will produce the same code, because of the pigeonhole principle, in which case the prover (user) will be falsely rejected. Trying again in the T+1 timestep will resolve the issue. Because this probability is so slim (I think 1 in a million), it's considered "never to occur".
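The verifier's two checks can be sketched in plain Ruby (the class and the toy code generator are illustrations, not Devise-Two-Factor's actual API; a real implementation would generate RFC 6238 TOTP codes and persist the last consumed code per user):

```ruby
require "openssl"

class OtpVerifier
  TIMESTEP = 30  # seconds, the RFC 6238 default

  def initialize(secret)
    @secret = secret
    @last_consumed = nil  # in practice, a column on the user record
  end

  # True only if the code is correct for the current timestep AND has not
  # already been consumed; the second check is the replay guard.
  def verify(code, at: Time.now.to_i)
    return false unless expected(at / TIMESTEP) == code
    return false if @last_consumed == code
    @last_consumed = code
    true
  end

  # Toy stand-in for TOTP generation: HMAC the timestep counter.
  def expected(counter)
    OpenSSL::HMAC.hexdigest("SHA1", @secret, counter.to_s)[0, 6]
  end
end
```

Replaying a code within the same timestep now fails the second check, which is exactly the window the attack relies on.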

Patch/Workaround

Applicable to Other Libraries

It is extremely likely Devise-Two-Factor is not the only library implementing TOTP incorrectly, failing to guard against replayed OTP codes. Checking other libraries in various languages would likely expose the same exploit.

TL;DR: If a public client respects the RFC 7009 spec and does not authenticate the revocation request, then Doorkeeper does not actually revoke the access token. Upgrade to version 4.1.1, 3.1.1, or 2.2.3

Furthermore, RFC 7009 makes no mention of supporting anything but the token parameter, so all of the alternative methods for finding the access token via the helper fail to save the day (e.g. the Authorization header's Basic or Bearer values).
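For reference, a well-formed RFC 7009 revocation request from a public client carries the token in the POST body (the host and values below are examples):

```http
POST /oauth/revoke HTTP/1.1
Host: provider.example.com
Content-Type: application/x-www-form-urlencoded

token=SECRET_ACCESS_TOKEN&client_id=PUBLIC_CLIENT_ID
```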

Finally, by spec definition, despite not finding a token to revoke, Doorkeeper will respond with a 200 and an empty body. On its own this is fairly innocuous, but it likely "hid" this lack of revocation from developers, leaving them none the wiser. At least that was the case with me.

Why Authenticating is Pointless

As I stated before, simply knowing the access token implicitly verifies the request.

The access token is considered a secret which, if stolen via session hijacking, lets an attacker impersonate the user. In other words, the access token represents the trusted user.

Now if we're posting to the revocation endpoint, authenticating the request in the usual places, or verifying that the token parameter matches the authenticated client, is pointless.

If it doesn't match, an attacker can simply make their authentication match that of the token's value.

The very possession of the token is sufficient to represent a trusted user, and anything more is just fluff; it doesn't add any additional security.

Impact

All public, non-confidential clients respecting the RFC will not have their access or refresh tokens revoked when sending a valid, well-formed & unauthenticated revocation request to Doorkeeper.

Any such clients relying on Doorkeeper's revocation functionality are susceptible to a session replay attack, even after the victim terminates their session via a revocation/log out.

Now obviously the attacker must already be able to obtain the access token, but without revocation the victim has a false sense of security & cannot limit damage to their account. Additionally, clients relying on the Doorkeeper revocation endpoint to revoke all other issued tokens on password change, password reset, etc. would also be impacted by this problem.

Solution

The true solution is to fully comply with RFC 7009 and not authenticate/authorize the request. This means removing the reliance on #doorkeeper_token, asserting the presence of the token parameter directly, and revoking that token if it's present.
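Sketched in plain Ruby, the spec-compliant behaviour looks something like this (RevocationEndpoint and its in-memory token set are illustrative stand-ins, not Doorkeeper's actual classes):

```ruby
require "set"

class RevocationEndpoint
  def initialize
    @revoked = Set.new
  end

  # RFC 7009: revoke whatever token the client supplies; no client
  # authentication is required, and the response is a 200 with an empty
  # body even when the token is unknown.
  def revoke(params)
    token = params["token"]
    @revoked.add(token) if token
    { status: 200, body: "" }
  end

  def revoked?(token)
    @revoked.include?(token)
  end
end

endpoint = RevocationEndpoint.new
endpoint.revoke("token" => "abc123")  # => { status: 200, body: "" }
endpoint.revoked?("abc123")           # => true
```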

Update (May 2017): AOSP has written a FORMAT.md document that is more up-to-date than this article. It will likely be the living document detailing how animation works in Android. This article is kept for posterity.

So you're an Android whiz and want to further customize your wicked Android experience. You've perused the plethora of custom boot screen animations and nothing tickles your fancy, or you've installed a popular ROM like CyanogenMod and you're just not happy with it.

You've got your animation ready, you've exported it as a series of sequential PNGs, and you're ready to go!

Alas, what the #@$!&@ is the desc.txt used for? And why the many parts folders? This article seeks to answer that question.

Straight to the source

For the expertly technical, you can read the BootAnimation.cpp source code to see exactly how Android boot animations work. Because things change in major releases of Android, not all desc.txt files are created equal.

Basic Premise

Boot animations are a series of PNG images, loaded & displayed one after the other, to create the illusion of video. This has a smaller memory & CPU footprint than decoding an actual video file with codecs.

Some boot animations have an intro, a main loop, and then an outro. This is what the part* folders & desc.txt allow. You don't have to have intros & outros, but they make for a much more polished effect. The above video example of CopperheadOS's boot animation (made by yours truly) is comprised of an intro, main loop, and outro.

You group your PNGs into folders and specify, for each part:

How many times the PNGs should loop before the next sequence plays

How long the last frame should pause before continuing

Whether Android is allowed to abort the animation early if the OS is fully loaded

Your .zip file (which must not be compressed; it's meant to just be a blob!) should be laid out in the following fashion:
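A typical layout looks like this (the PNG filenames are examples; the part folder names must match the paths referenced in desc.txt):

```
bootanimation.zip
├── desc.txt
├── part0/
│   ├── 0000.png
│   ├── 0001.png
│   └── ...
├── part1/
└── part2/
```

On most systems you can build such a store-only archive with `zip -0r bootanimation.zip desc.txt part0 part1 part2`, where -0 disables compression.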

You need a minimum of one part0 folder containing your PNGs. There appears to be no programmed limit to the number of PNGs per part, or to the total number of parts. Note that because they're PNG images, you can easily end up with a very large boot animation. I recommend keeping it under 20MB.

Anatomy of a desc.txt file

Note that what you can define here is limited by the version of Android. Not all versions support the c part type; that was introduced in Android Jelly Bean (4.1–4.3.1).

Our example desc.txt

700 420 30
c 1 15 part0 000000
c 0 0 part1 FF0000
c 1 30 part2 0000FF

Top line

[width] [height] [frames per second]

So in our above example:

700 is the width of the PNGs

420 is the height of the PNGs

30 is the frames per second. I've seen 60, 30, and 10 used here.

Subsequent lines

You want to define each part on its own line:

[type] [loop count] [pause] [path] [bg colour (optional)]

You can have transparent PNGs that show through to a background colour, but almost all animations I've seen have a matte, full-black background in the PNGs and the optional colour value absent from the desc.txt.

Note: [type] with value c is only supported in Android Jelly Bean or later. It must be p for older Android versions.

Second Line (The Intro)

c is the "type". If it is c and the OS has finished loading halfway through the sequence loop, then it will finish the loop & exit gracefully. If it is p then the animation will abort mid-sequence if the OS has loaded. Unless you have an older Android OS, you typically want c for a refined animation.

1 is the loop count. This means play once then proceed to the next sequence.

15 is the "pause", in frames, of the last frame in the sequence before going to the next sequence. In this example it pauses 0.5 seconds, because 15 frames at 30 frames per second is half a second.

part0 is the path to the collection of PNGs to use in this part

000000 is the background colour in hex. This value is full black and the default colour. You can define a desc.txt with this entry absent (and most do).

Third Line (The Main Loop)

c is still used, because this is where the OS will eventually finish loading. We want a graceful exit to the outro.

0 is a special loop count. It means "loop forever". This part is where the OS will load and exit. Because we chose c it will gracefully exit to the end of the animation. If p was chosen, this is where the animation would abort & exit.

0 means no delay. Duh.

part1 is the path to the collection of PNGs to use in this part

FF0000 means a red background for this part

Fourth Line (The Outro)

c is still used, because otherwise the outro would abort.

1 is used because, thanks to c, the outro sequence must play even if the OS has loaded; so we play it once. If you chose p for the outro but c for prior parts, it'd exit immediately without playing. If you chose 0 here, the boot animation would loop forever and never stop!

30 means pause a final second before showing the OS. Maybe your outro is fading to black so your final frame is a black screen.

part2 is the path to the collection of PNGs to use in this part

0000FF means a blue background for this part

And there you have it!

I hope you found this useful. There's a lot of misinformation and incorrect forum posts out there detailing how to properly write a desc.txt file.

Feel free to leave a comment if you have any questions and I'll try to answer them to the best of my ability.

RL Grime's music video

For some reason my brain absolutely loves looping over this same song over and over again. It's called Core by RL Grime and the visuals are stunning. According to the YouTube description, the music video was directed by David Rudnick & Daniel Swan. Props to those two.

I extracted the above loop from the music video and, using HandBrake, converted the file to an H.264 video in an MP4 container.

Ok, so, I have this mesmerizing loop but how do I get it on my computer as a screensaver? Enter Quartz Composer.

XCode's Quartz Composer

Quartz Composer is a crazy simple way to process and render graphics on Mac and iOS. In the above screenshot, that's literally all I had to build for my screensaver.

Earlier this month, I had the opportunity to speak at the Toronto EmberJS Meetup, a monthly meeting at Pharmacy (pictured above), about Ember CLI and Content Security Policy. I assembled a demo application with a variety of CSP errors, and walked through how to fix each one in the application.

The app explained some elements of CSP, why things are the way they are, and how to configure your Ember CLI app. You can download the app here to interactively learn about CSP.

Tada! You've locked down what types of resources are allowed to be loaded and from where. When a user on a browser that supports CSP (and a lot of them do) visits your site, it will only load resources that have been whitelisted by your policy. If an attacker injects a persistent XSS into your app loading, say, http://evildomain.com/keylogger.js, it will fail because it's not coming from trustedscripts.example.com or your domain.
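As a concrete illustration, a policy like the one described is delivered as an HTTP response header along these lines (the directives and hosts are examples, not Ember CLI's exact defaults):

```
Content-Security-Policy-Report-Only: default-src 'none'; script-src 'self' trustedscripts.example.com; style-src 'self'; img-src 'self'; connect-src 'self'; font-src 'self'
```

The Report-Only variant logs violations without blocking anything; swapping it for Content-Security-Policy enforces the policy.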

Why should I care?

Well, the Ember CLI maintainers believe that security should be consciously in the mind of web developers. I agree. We should be responsible for building secure-by-default apps and not patch after the fact.

Having CSP in Ember increases your understanding of how an attacker could harm your users or compromise your app.

Won't this break my app?

Nope! For now they have it as "Report Only", which means that in a dev environment your console will fill up with all violations of the policy so you can patch your backend or Handlebars templates.

I had the opportunity to help my employer, FreshBooks, implement a responsible disclosure policy. As it turns out, it's very difficult to offer a PGP key while maintaining trust, security, and convenience.

In this post I hope to outline the struggles, the roadblocks, and practical strategy surrounding PGP key management for a ~100 person company.

The Tweet that Started it All

It all started with the above tweet. One of my favourite security-conscious people I follow was complaining about the piss-poor state of the secure responsible disclosure pages that vendors offer. It got me thinking, "Gee whiz, does FreshBooks even have a responsible disclosure policy?" Not surprisingly, the answer was no.

I decided that with FreshBooks growing bigger and more important every quarter, we owe it to our users to explicitly provide a mechanism of disclosure that researchers will be comfortable with. The more acceptable we are, the more likely they are to disclose vulnerabilities or exploits to us.

If we were going to do this, we were going to do it right. That means no plaintext pages, no insecure email communications, and no lack of common mechanisms that security researchers expect.

Actual response from tech company's security team: "PGP email doesn't play well with Zendesk so I don't want to encourage it unless needed."

The initial idea was to offer a general security inbox, security@freshbooks.com, with a PGP key whose fingerprint is offered on the website and available on PGP key servers. Doing this right is quite non-trivial when the number of recipients for security@ is greater than one.

The PGP Encryption & Trust Strategy

Since we deal in customer financial data, I'd hope that any critical vulnerability gets to us securely. That means creating a strategy that people use… Both internally and externally. There's no use in designing a Fort Knox of a system if your co-workers won't bother to use it!

Above: Sketching out some of the PGP key ideas for the managers briefing.

There are a lot of ways to skin this cat, and I spent some time musing over the best options. Our systems administrator wanted to enforce that when a security team member leaves the company that we have a way to revoke their access. With a shared PGP keypair for a security@ mailbox, that's pretty hard to do.

The question is: when a team member departs the company, what prevents them from being malicious with the PGP key? They carry the trust and brand of FreshBooks Security Team <security@freshbooks.com>. Even if you had a revocation certificate handy, the departed could give away the PGP keypair to your competitor in a way that you don't know about for months or years!

✗ Option 1: Shared key only

This is the most naïve implementation.

There is a single PGP key for the "FreshBooks Security Team" <security@freshbooks.com> that a researcher encrypts emails to & receives signed correspondence from.

All team members have a copy of the keypair on their workstations and all members have access to the security@ mailbox.

Advantages:

Single point of contact for security researcher.

Anyone on the team can decrypt and respond; potentially faster turnaround time.

Disadvantages:

Risk of a fired, rogue employee secretly copying the private key before their departure and maliciously signing and/or decrypting messages.

✗ Option 2: Individual keys only

So it's the same physical setup as before, but instead of a single PGP key available for security@freshbooks.com, we publicly list a team of security officers, all with their individual keys (all of which are inter-signed with one another), and ask the researcher to multi-recipient encrypt their message. Further communication happens on a per-individual basis.

Advantages:

Web of Trust is (somewhat) maintained with the use of key servers and inter-personal key signing

✓ Option 3: Shared key plus individual keys

A hybrid: the security@freshbooks.com key is owned solely by the Team Lead, while each team member also has an individual key.

Step 1:

Security researcher obtains a copy of our public key and sends an encrypted email to security@freshbooks.com

Step 2:

The encrypted email is received in the Team Lead's, Alice's, and Bob's inboxes. Since it's encrypted for security@freshbooks.com, Alice and Bob cannot read the email.

Step 3:

Team Lead, being the only owner of the 0x00DEADBEEF <security@freshbooks.com> PGP key, decrypts the message and forwards it, decrypted (or re-encrypted?), to the rest of the team. If the team lead is on vacation or away, have one of the team members respond to the email (signing their response with their personal key) saying that they'll get back to them ASAP once the employee with access to the security@freshbooks.com private key returns.

Step 4:

After the team meets, someone is assigned as the liaison or point of further contact between the team and the researcher. That team member continues to communicate with the researcher signing with their PGP key. That team member is responsible for keeping the rest of the team up-to-date.

When A Team Member Departs:

When a team member leaves the company they revoke their PGP key. In addition, the security@freshbooks.com key and other team members' keys revoke their signature against the departed's key. The keys are then exported and re-sent to the key servers, updating the trust model to exclude the departed team member.

This conveys to a researcher that the departed team member is to no longer be trusted. Any attempt by a departed team member to perform rogue actions (such as impersonation of the security team, selling their key to a competitor) will be thwarted by the trust model.

When The Team Lead Departs:

Similar to the above model but, in addition, the security@freshbooks.com key will also be revoked and re-generated, with the new Team Lead owning the new key. Prior to revocation, the old key will sign the new key and vice-versa, showing that the newly generated key carries the trust of the old one. All team members will sign the new key and revoke their signatures on the old.
The public website's Responsible Disclosure policy page should be updated to point to the newly generated security@freshbooks.com key.

Advantages:

Web of Trust is maintained with the use of key servers and inter-personal key signing

Risk of a fired, rogue employee secretly copying the private key before their departure and maliciously signing and/or decrypting messages is strongly mitigated, unless the Team Lead becomes evil.

All correspondence can be encrypted and signed.

Disadvantages:

Potentially slower turnaround time for decrypting messages

Team Lead is a single point of failure

Main key revocation is tied with the departure of the Team Lead.

Each email message has to be relayed to the rest of the team.

The Practical Pushback

This is obviously a fairly complicated structure, and many companies don't bother going through with the effort of implementing proper PGP key management. It's usually the big guns who have dedicated security departments who implement this mess.

Our head of Operations, upon being presented with this strategy, asked about encrypted web forms.

It's not completely crazy to forgo PGP email and just opt for a web form. GitHub does it, after all, and it appears there's a move to adopt forms:

If you have a dedicated path for security researchers to report vulns in your products, please make it a HTTPS form, not an email address.

Using a web form could have unintended side-effects such as logging and caching somewhere along the line.

For now we're offering an encrypted web form and a single PGP security@freshbooks.com key that the head of Operations has access to. The web form, under the hood, sends an email to the security team, which is secured with STARTTLS.

In the next year, we'll move to the full PGP management structure outlined in Option 3.

The PGP Key Generation

So far, I've only described our management strategy for when people join and leave the company. There are many ways to go about key generation and storage. Below is, more or less, our plan:

All keys should be 4096-bit RSA (sign) & RSA (encrypt) that expire every 5 years

Generate a key revocation file for the main key and store it on hard media, such as archival CD-ROM

Back up the main keypair's private key on archival media and securely store medium.

Randomly generate the passphrases using Diceware or a similarly entropic, but human-memorable, strategy

Encourage use of signing keys internally within the company, publishing the signed keys to a key server.

The private keys should only live on the workstations of the owners of said keys (which are protected behind Active Directory)
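Sketched as GnuPG commands (the filenames, keyserver, and key ID are examples; gpg will prompt interactively for the RSA & RSA type, 4096-bit length, and 5-year expiry during generation):

```shell
# Generate the 4096-bit RSA (sign) & RSA (encrypt) keypair interactively
gpg --full-generate-key

# Create a revocation certificate to store on archival media
gpg --output revoke-security.asc --gen-revoke security@freshbooks.com

# Back up the private key for secure offline storage
gpg --export-secret-keys --armor security@freshbooks.com > security-backup.asc

# Publish the signed public key to a key server
gpg --keyserver hkps://keys.example.com --send-keys 0xDEADBEEF
```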

The State of The Union

At the time of this writing, most levels of management have approved my proposal and Responsible Disclosure policy draft. We're in the process of finalizing the team, defining an explicit response protocol (such as promising an initial response within 24 hours!), and defining internal changes that need to happen. The internal policy strives to conform to the RFPolicy v2.

It's exciting to see that, with my persistence, FreshBooks realizes the importance of such a project.

I expect to see the policy and protocol live and ready to go in the near future.

Update!

The responsible disclosure policy is live and so far we've had two disclosures!