Security researchers in the USA have just disclosed a flaw in PayPal's two-factor authentication (2FA) system.

As you probably know by now, 2FA is a way of boosting login security so that just knowing, or guessing, someone's username and password is not enough.

Most online 2FA systems work by asking for your username and password, which may stay the same for weeks, months or even years, and then asking you for a passcode that changes every time you log in.

Your passcode might come from a dedicated security token that displays an unguessable sequence of numbers that changes every minute, or you might receive a text message on your mobile phone with the passcode in it.

Either way, the idea is simple, and powerful:

The passcode isn't sent to you via the same device you usually use to enter your username and password, so even if a crook has infected your computer with malware and can snoop on everything you do, he's still only halfway there.

The passcode is only valid once, so even if a crook does manage to intercept it when you finally type it in, it isn't much use.
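By way of illustration (and not a claim about PayPal's particular token), one-time passcodes of this kind are commonly generated along the lines of RFC 4226 (HOTP) and RFC 6238 (TOTP): an HMAC over a shared secret and a counter, usually derived from the current time, truncated down to a few digits. A minimal Python sketch, using the well-known demo secret from the RFC test vectors:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # low nibble picks the start byte
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP keyed on the number of time steps since the epoch."""
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 test vector: counter 1 with the demo secret gives 287082.
print(hotp(b"12345678901234567890", 1))  # → 287082
```

Because both sides can compute the same code from the shared secret and the clock, no code ever travels over the wire until you type it in, and it expires after one time step.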

That's why many financial organisations and payment processors, including PayPal, have made 2FA available to their customers.

It isn't a silver bullet against cybercrime, but it does make things much trickier for the crooks.


The PayPal flaw

Here's the quick version of what went wrong in PayPal's system.

Before we start, it's probably worth pointing out that the researchers who disclosed the flaw work for a company that produces a 2FA product, though not the one that PayPal uses. Their product is marketed as a bit of a technology disruptor, boldly claiming to "democratize the use and deployment of strong authentication so that all users can benefit from them, not just the Fortune 500."

And when PayPal announced that it would be rolling out a complete fix by 28 July 2014 and asked the researchers if they were willing to delay their disclosure for another month, they said, "No."

But they did wait until PayPal had implemented a mitigation that prevents the flaw from being abused to bypass 2FA.

So, given that the flaw is no longer exploitable, and that there are some important lessons to be learned, here we go.

It all started with a chap called Dan Saltman, who noticed that when he tried to log in to his PayPal account from his iPhone, it wouldn't let him in, because PayPal's mobile apps don't yet support 2FA.

He could put in his username and password, but because the iPhone app had no way of dealing with his 2FA passcode, it bailed out at that point.

But Dan also noticed that if he put the iPhone into flight mode somewhere in the middle of trying to log in, thus abruptly killing all data flow in and out, he'd sometimes end up logged in when he later reactivated his data connection.

You don't need to be an expert in protocols or cryptography to realise that there is something very wrong with that.

It implies that there is something about the login process that puts the detail of whether to require 2FA or not into the hands of the client.

This sort of "client chooses" problem is typically associated with backward compatibility. Many protocols live with the past by getting the server to ask the client to use the latest and greatest level of security if it can, but allowing the server to fall back on a less secure method if the client cannot. An example: many chip-and-PIN payment systems will fall back to using the magstripe on cards that don't have a chip.
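The danger is in who controls the fallback. In this toy sketch (all method names are invented for illustration, and this is not PayPal's actual protocol), the unsafe version lets the client dictate the downgrade, while the safe version makes the server enforce the account's own policy:

```python
# Toy authentication negotiation; all method names are invented for illustration.

def negotiate_unsafely(client_choice: str) -> str:
    """The flawed pattern: the server accepts whatever the client claims."""
    return client_choice

def negotiate_safely(account_has_2fa: bool, client_methods: list) -> str:
    """The server, not the client, decides. If the account has 2FA enabled,
    a client that can't complete 2FA is refused rather than downgraded."""
    if account_has_2fa:
        if "2fa" not in client_methods:
            raise PermissionError("client cannot complete 2FA; login refused")
        return "2fa"
    return "password-only"

# A malicious client simply claims it can't do 2FA:
print(negotiate_unsafely("password-only"))               # → password-only (downgraded!)
print(negotiate_safely(True, ["2fa", "password-only"]))  # → 2fa
```

The safe version deliberately fails closed: a 2FA-enabled account with a 2FA-incapable client gets no login at all, rather than a quietly weaker one.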

The researchers wondered, "Was this apparent protocol glitch an obscure piece of blind luck in timing, or could it be exploited systematically?"

Unfortunately for PayPal, the researchers were able to write Python code that reliably automated the 2FA bypass.

Greatly oversimplified, the bypass went something like this:

1. Start logging in via the general-purpose PayPal login URL, with username and password.

2. Get back from PayPal a session_token (i.e. authorisation to proceed) plus a notification saying 2fa_enabled=true.

From this, you might reasonably assume that the session_token would be useless, because it wouldn't work until after the 2FA validation stage.

Indeed, PayPal's mobile apps simply bail out here, knowing that they can, at least in theory, go no further.
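Modelled as a toy server (greatly simplified, like the steps above, and with all names invented rather than taken from PayPal's real API), the flaw amounts to handing out a token that is already live before 2FA has happened:

```python
import secrets

class FlawedServer:
    """Toy model of the reported flaw: the session token issued at the
    password stage already works against the 'mobile payment' endpoint."""

    def __init__(self):
        self.valid_tokens = set()

    def password_login(self, username: str, password: str) -> dict:
        token = secrets.token_hex(16)
        self.valid_tokens.add(token)   # flaw: token is usable right away
        return {"session_token": token, "2fa_enabled": True}

    def mobile_payment(self, token: str) -> bool:
        # Never checks whether 2FA was actually completed.
        return token in self.valid_tokens

server = FlawedServer()
reply = server.password_login("victim", "stolen-password")
# A well-behaved client would stop here because 2fa_enabled is true.
# A malicious client simply ignores the flag and carries on:
print(server.mobile_payment(reply["session_token"]))  # → True (2FA bypassed)
```

The 2fa_enabled=true flag turns out to be purely advisory: nothing on the server side ever checks that the second factor was supplied.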

What went wrong?

What could PayPal have done differently?

Firstly, the 2fa_enabled=true flag should be a statement by the server, not merely a suggestion to the client.

In fact, if you tell a client that 2fa_enabled=true and the client later tries to claim that 2fa_enabled=false, you should treat that as a protocol fault (or a hacking attempt) and invalidate the login automatically.

Secondly, since the server knows that 2fa_enabled=true and thus that authentication is not yet complete, it shouldn't hand out a session_token until after 2FA validation has succeeded.

It doesn't make technical or intellectual sense to tell someone that 2FA verification is still needed yet hand them an authentication token at the same time.

That's a bit like answering a knock at your front door with a, "Who's there?" and then throwing the door open anyway so you can hear the reply.

Thirdly, because PayPal knows that its mobile apps don't support 2FA, the URLs specific to processing payments from mobile devices shouldn't work for accounts where 2fa_enabled=true.

That's like answering a knock at your front door with a, "Who's there?", getting the reply, "Someone who isn't allowed into your house," and then throwing open the door anyway.
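Put together, the three fixes boil down to keeping the authentication state on the server: no usable token before 2FA succeeds, and no 2FA-free endpoints for 2FA-enabled accounts. A hedged sketch (again with invented names, and with the passcode check reduced to a stand-in):

```python
import secrets

class FixedServer:
    """Toy model of the corrective measures: the session token only becomes
    valid after the 2FA passcode is verified server-side."""

    def __init__(self):
        self.pending = {}         # token -> username, awaiting 2FA
        self.valid_tokens = set()

    def password_login(self, username: str, password: str) -> dict:
        token = secrets.token_hex(16)
        self.pending[token] = username   # fix: issued, but not yet usable
        return {"2fa_required": True, "token": token}

    def verify_2fa(self, token: str, passcode: str) -> bool:
        # In real life the passcode would be checked against the user's token
        # device; here any six-digit string stands in for a correct code.
        if token in self.pending and len(passcode) == 6 and passcode.isdigit():
            del self.pending[token]
            self.valid_tokens.add(token)
            return True
        return False

    def mobile_payment(self, token: str) -> bool:
        # Pending (2FA-incomplete) tokens are rejected outright.
        return token in self.valid_tokens

server = FixedServer()
reply = server.password_login("victim", "stolen-password")
print(server.mobile_payment(reply["token"]))   # → False (bypass blocked)
server.verify_2fa(reply["token"], "123456")
print(server.mobile_payment(reply["token"]))   # → True (only after 2FA)
```

With this structure, a client that ignores 2fa_required gains nothing: the token it holds simply doesn't work anywhere until the server has seen a valid passcode.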

What next?

It's important to notice that this flaw applies to your account even if you don't use PayPal's mobile app yourself.

A crook who has your username and password, stolen by a keylogger on your laptop, for example, could have used PayPal's mobile payment system to login and make payments without needing to know your 2FA passcode.

That cancels out the benefits of 2FA that we listed at the start of the article.

The good news is that although PayPal hasn't yet put in place a complete fix for this flaw (e.g. by making all of the changes above), the researchers report that PayPal has implemented the second change above.

Crooks who try to bypass your 2FA by abusing this flaw will no longer be able to get hold of the session_token they need to trick the mobile payment URL into thinking they have logged in properly.

In short, if you are using PayPal 2FA, you may as well continue doing so, because it provides no less security than it did before this disclosure; in fact, it is now more secure than it was.

Also, of course, remember that you shouldn't be letting crooks get hold of your username and password anyway.

Don't let your guard down just because you have enabled 2FA: it's part of defence in depth, not defence instead!

For further information

If you'd like to know more about 2FA, you might like to listen to our Techknow podcast.

Comments

Interesting. As a user of PayPal with the 2FA token gizmo I'm pleased the flaw was found and will be fixed.

I don't know specifically about the mobile app but it is possible to log in with the token by simply concatenating it with your password in the password field, so the discovery of the vulnerability looks to be the very definition of serendipitous :)

The lexicographers at the Oxford dictionaries famously prefer -z- to -s-, but my Oxford Dictionary of English does admit of both spellings. (My New Oxford American Dictionary insists on -z-, however.)

I'm used to -s- in organisation, not least from corresponding with official government organisations that call themselves Organisations. I did write "democratize" above, but I was quoting someone else. I felt that to use [sic] might look petty.

As always, a great explanation Paul, but I was hoping you'd answer a question that's always bugging me. Who creates these protocols? Is the result actually approved by someone else?

It's like the Adobe password disaster from last year that totally disregarded everything in basic password management and cryptography. But this one is even worse in a sense. The Adobe password management "protocol" was so straightforward that it could have been created by someone totally clueless. Here I can feel that someone was trying very hard to create something clever...

If I had to guess, I'd suggest that they may have tried to graft the 2FA stuff "onto the end" of the existing protocol, for backward compatibility reasons.

Then, I'd guess that in testing no-one bothered to check what happened if a malicious client deliberately *changed* the 2fa_enabled setting and ploughed on regardless. You can well imagine how the tests might end up testing only for situations like "2FA-unaware client *ignores* the 2fa_enabled setting", which is the most likely non-malicious real world sort of error.

Yes, I am afraid something like that happened. But this is the kind of design error that should be obvious from any high level protocol description, before any testing. I don't think it's an implementation problem like Heartbleed was.

And what makes it really frightening is that it's coming from PayPal, which should be one of the most secure public sites. And if you tell me they just went ahead and grafted 2FA onto their existing login protocol without thinking about it, then I'm even more scared. :)

About the author

Paul Ducklin is a passionate security proselytiser. (That's like an evangelist, but more so!) He lives and breathes computer security, and would be happy for you to do so, too.
Paul won the inaugural AusCERT Director's Award for Individual Excellence in Computer Security in 2009.
Follow him on Twitter: @duckblog