Staying positive about false negatives

Disclaimer

For the past four years I've been kicking ass at Securify. These 40-hour hacking sprees resulted in hundreds of vulnerabilities that made a lot of customers very happy. But "even I" make mistakes. Sometimes.

Introduction

When doing a pentest I have to make assumptions. Most of the time my notes list tens of possible security vulnerabilities. For example, when doing a white-box pentest, certain code properties act as triggers.

On these triggers I make quick notes of possible targets, to validate them later on with hands-on testing. The same workflow applies in the other direction: if I encounter a file upload that seems to put files in a web folder, I make a note: "check how files are handled in this controller". Most of these turn out to be false positives; they look like vulnerabilities, but are mitigated by some other software layer.

This blog article is about two examples of false negatives. Things that didn't seem buggy but were. Bugs that seemed to be fixed but weren't. Some nasty stuff that I hope we all can learn from. For obvious reasons I can't share actual customer code, so I'll give some examples in partial pseudo-code.

Payment business logic

An API endpoint would kick off the shipping of a package if a user had paid for it. The involved code went a little something like this:

validatePayment(paymentID) {
    payment = doBackendCall(paymentID);

    if (payment.status == done) {
        return success;
    }

    if (payment.status == pending) {
        return success;
    }

    return failure;
}

With just these lines, the flaw might seem obvious. But it reads like business logic; apparently this was intended. Still, it seemed strange, and because it was an important part of the application I wanted to double-check it.

Just to be sure I ran a payment and pressed cancel immediately after. "Payment failed". Neat.

Another test: I made a payment, and while performing the payment I spammed the API endpoint to check if it would trigger a shipping (since the payment was pending). Again, no cheese. Didn't work.

At this point I thought I'd misunderstood the code. "Guess the programmer knows something I don't. It doesn't work like I read it. Good thing I do these hands-on tests to verify these false positives."

A couple of weeks later we got a call from a customer. An internal functional tester found a bug. He could make a payment, press cancel, and his order was shipped! How could this simplest of bypasses slip through a penetration test?

As it turned out, the testing environment - which was acceptance - would never reach a state where a payment's status is "pending"! Acceptance and production behaved differently: the payment provider's testing environment returned different statuses. Ouch.

So why was this weird condition put in the code? The programmer put it in because it would make certain credit card payments "go faster". Normally it would take some time to verify the credit card, but now the application would pretend everything went well and just accept the payment.

The vulnerability was found because one bank didn't conform to the standards. Most banks would return "canceled" when a customer clicked cancel. However, one bank actually returned "pending" for some weird reason. Those two corner cases overlapped, which introduced this vulnerability. It turns out the issue could only have been found and reproduced in production, while making an iDEAL payment with one specific Dutch bank!

So, that didn't go well. Apart from the behavioral corner cases that made prod and acc differ, I also failed: a payment that is pending should never be treated as successful. This should have gone in the report as one of those "We can't exploit this, but it doesn't feel right" findings everyone so deeply enjoys on a Friday afternoon. Or maybe I should have contacted the developer to discuss it. That happens a lot, but not for this one, since all hands-on tests succeeded. Think outside the time-box.
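
In hindsight, the fix is trivial. A minimal sketch (illustrative names, not the customer's code) of what that check should have looked like:

```python
# Only a definitively completed payment may trigger shipping.
DONE = "done"

def validate_payment(payment):
    # Anything that is not "done" is treated as a failure, including
    # "pending", "canceled", and any provider-specific oddities.
    return payment["status"] == DONE

print(validate_payment({"status": "done"}))     # True
print(validate_payment({"status": "pending"}))  # False
```

If credit card payments really need to "go faster", that is a product decision that deserves its own explicit, documented code path - not a silent equivalence between "pending" and "paid".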

Magic file extensions

We owned their file upload functionality multiple times in the past. But they finally managed to secure their file upload by checking both the file signature and its extension.

After a user uploaded a file, the application would validate it and put it in a web directory - even though our recommendation was to generate custom filenames based on an expected (whitelisted) extension. In the best case, they would parse the image data as an image, save it to the database, and return it based on an identifier. But they did none of that. They went with our first, short-term recommendation: check the file's signature and extension.
Another recommendation stated that, if they went with this short-term solution, they should disable PHP execution for the entire upload directory. They did no such thing. After all, the file upload was secure, wasn't it? Who needs in-depth security when your first security layer works?
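
Disabling PHP execution for an upload directory is usually a few lines of configuration. As an example (assuming Apache 2.4 with a standard PHP setup; adapt to your own stack), an .htaccess in the upload directory could simply refuse to serve anything that looks like PHP:

```apache
# Deny access to PHP-like files in this upload directory entirely.
<FilesMatch "\.(php|phtml|php[0-9])$">
    Require all denied
</FilesMatch>
```

Even if an attacker then bypasses the upload checks, the uploaded script never executes.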

The application was built using Laravel (a PHP framework). The code went a little something like this:

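A rough reconstruction of the kind of check they used (sketched in Python rather than the original PHP; names and signatures are illustrative): both the file's magic bytes and its extension had to match a whitelist.

```python
# Whitelist mapping allowed extensions to their expected magic bytes.
ALLOWED = {
    "png": b"\x89PNG\r\n\x1a\n",
    "gif": b"GIF8",
    "jpg": b"\xff\xd8\xff",
}

def upload_is_valid(filename, data):
    ext = filename.rsplit(".", 1)[-1].lower()
    magic = ALLOWED.get(ext)
    # Reject unknown extensions and mismatching file signatures.
    return magic is not None and data.startswith(magic)

print(upload_is_valid("cat.png", b"\x89PNG\r\n\x1a\n...."))   # True
print(upload_is_valid("shell.php", b"\x89PNG\r\n\x1a\n...."))  # False
```
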
This worked! Or I should say: after we found a bypass for the checks a couple of times it finally worked. It was secure now.

A year or two later I had to check a code diff and pentest all new important functionality. I think we were talking hundreds of commits and a few days to check them. One of the commits changed the getExtension function, which "fixed some errors". It now looked like this:

getExtension(file) {
return file->extension();
}

Looks good right? I thought: Nice, they found a cleaner way to get the file extension. Tight code man. Good job. I mean, I know the file upload is 'secure' as long as they do a file extension check. And there it was. So I continued with the next commit.

As it turned out, calling file->extension() in Laravel will try to guess the extension based on the file's mime type (it calls guessExtension)! I would never have guessed. Badumtss.

So now they had two checks for the mime type, and nothing checked the file extension. A couple of months later someone found the vulnerability, which made me look pretty lame.
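
The pitfall generalizes beyond Laravel: an extension taken from the filename and an extension guessed from the content are two different things. A small Python sketch of the difference (function names are illustrative):

```python
import mimetypes
import os

def extension_from_name(filename):
    # The actual user-supplied extension, taken from the filename itself.
    return os.path.splitext(filename)[1].lstrip(".").lower()

def extension_guessed_from_content(mime_type):
    # An extension *guessed* from the detected mime type -- analogous to
    # Laravel's UploadedFile::extension() / guessExtension().
    ext = mimetypes.guess_extension(mime_type)
    return ext.lstrip(".") if ext else None

# A PHP payload uploaded as a file with valid PNG content:
name = "shell.php"
print(extension_from_name(name))                    # php
print(extension_guessed_from_content("image/png"))  # png
```

A check built on the second function happily concludes "this is a png" while the web server will still execute shell.php as PHP.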

TL;DR: Recommendations

For companies

- A pentester is not some kind of human firewall. This is wrong: "We don't need to think about security: we do pentests". Security personnel, procedures, reviews and software are simply not perfect. A penetration test is a way to measure the security level of your application. You should incorporate findings from these tests in your SDLC to improve the long-term quality of your security. Don't just monkeypatch it up and keep making the same mistakes.

- In-depth security. Sometimes it can take no more than half an hour. Think IP-whitelisting, removing unused services, setting some security flags here and there, etc. Just do it, and have it noted in your procedures to always do it. Deploying a new server? Get your checklist of security settings and spend half an hour setting it up right. Many times I've been unable to exploit vulnerabilities because I got blocked by some in-depth security layer.

- Create unit or integration tests to make sure the production environment behaves the same as acceptance.

For pentesters

- When logic or code flows seem strange, always contact the developers or product owners. Just call them or send an email, even if they fail to reply before your deadline.

- Double-check things, even if you pentested some functionality before. Things aren't always what they seem. Like a function called extension, that doesn't look at the extension.

Staying positive

This article is called "Staying positive about false negatives", so where is the part about staying positive? It's not here. You should always try to stay positive. Don't let failure get you down; everyone does it. Learn from mistakes and all that jazz. Party on.