At Facebook, zero-day exploits, backdoor code bring war games drill to life

How do companies prepare for the worst? By exposing workers to lifelike crises.

Early on Halloween morning, members of Facebook's Computer Emergency Response Team received an urgent e-mail from an FBI special agent who regularly briefs them on security matters. The e-mail contained a Facebook link to a PHP script that appeared to give anyone who knew its location unfettered access to the site's front-end system. It also referenced a suspicious IP address that suggested criminal hackers in Beijing were involved.

"Sorry for the early e-mail but I am at the airport about to fly home," the e-mail started. It was 7:01am. "Based on what I know of the group it could be ugly. Not sure if you can see it anywhere or if it's even yours."

The e-mail reporting a simulated hack into Facebook's network. It touched off a major drill designed to test the company's ability to respond to security crises.

Facebook employees immediately dug into the mysterious code. What they found only heightened suspicions that something was terribly wrong. Facebook procedures require all code posted to the site to be handled by two members of its development team, and yet this script had somehow evaded those measures. At 10:45am, the incident received a classification known as "unbreak now," the Facebook equivalent of the US military's emergency DEFCON 1 rating. At 11:04am, after identifying the account used to publish the code, the team learned that the engineer who owned the account knew nothing about the script. One minute later, they issued a takedown to remove the code from their servers.

With the initial threat contained, members of various Facebook security teams turned their attention to how it got there in the first place. A snippet of an online chat captures some of the confusion and panic:

Facebook Site Integrity: which means that whoever discovered this is looking at our code

If the attackers were able to post code on Facebook's site, it stood to reason, they probably still had that capability. Further, they may have left multiple backdoors on the network to ensure they would still have access even if any one of them was closed. More importantly, it wasn't clear how the attackers posted the code in the first place. During the next 24 hours, a couple dozen employees from eight internal Facebook teams scoured server logs, the engineer's laptop, and other crime-scene evidence until they had their answer: the engineer's fully patched laptop had been targeted by a zero-day exploit that allowed attackers to seize control of it.

This is only a test

The FBI e-mail, zero-day exploit, and backdoor code, it turns out, were part of an elaborate drill Facebook executives devised to test the company's defenses and incident responders. The goal: to create a realistic security disaster to see how well employees fared at unraveling and repelling it. While the attack was simulated, it contained as many real elements as possible.

The engineer's computer was compromised using a real zero-day exploit targeting an undisclosed piece of software. (Facebook promptly reported it to the developer, before the drill was disclosed to the rest of the company's employees.) It allowed a "red team" composed of current and former Facebook employees to access the company's code production environment. The PHP code on the Facebook site contained a real backdoor. (It was neutralized by adding comment characters in front of the operative functions.) Facebook even recruited one of its former developers to work on the red team to maximize what could be done with the access. The FBI e-mail came at the request of Facebook employees, in an attempt to see how quickly and effectively various employee teams could work together to discover and solve the problems.

"Internet security is so flawed," Facebook Chief Security Officer Joe Sullivan told Ars. "I hate to say it, but it seems everyone is in this constant losing battle if you read the headlines. We don't want to be part of those bad headlines."

The most recent dire security-related headlines came last week, when The New York Times reported China-based hackers had been rooting through the publisher's corporate network for four months. They installed 45 separate pieces of custom-developed malware, almost all of which remained undetected. The massive hack, the NYT said, was pursued with the goal of identifying sources used to report a series of stories about the family of China's prime minister. Among other things, the attackers were able to retrieve password data for every single NYT employee and access the personal computers of 53 workers, some of them located directly inside the publisher's newsroom.

Such intrusions first grabbed mainstream attention in early 2010, when Google disclosed that China-based attackers had penetrated its network. The hacks allowed the attackers to make off with valuable Google intellectual property and information about dissidents who used the company's services. The episode also helped popularize the term "advanced persistent threat," or APT, used to describe hacks that last weeks or months and target a specific organization that possesses assets the attackers covet. Since then, reports of APTs have become a regular occurrence. In 2011, for instance, attackers breached the servers of RSA, EMC's security division, and stole information that could be used to compromise the security of its two-factor authentication tokens. A few months later, defense contractor Lockheed Martin said an attack on its network was aided by the theft of the confidential RSA data relating to its SecurID tokens, which some 40 million employees use to access sensitive corporate and government computer systems.

"That was the inspiration around all this stuff," Facebook Security Director Ryan "Magoo" McGeehan said of the company's drills. "You don't want the first time you deal with that to be real. You want something that you've done before in your back pocket."

Even after employees learned this particular hack was only for practice—about a half hour after the pseudo backdoor was closed—they still weren't told of the infection on the engineer's laptop or the zero-day vulnerability that was used to foist the malware. They spent the next 24 hours doing forensics on the computer and analyzing server logs to unravel that mystery. "Operation Loopback," as the drill was known internally, is notable for the pains it took to simulate a real breach on Facebook's network.

"They're doing penetration testing as it's supposed to be done," said Rob Havelt, director of penetration testing at security firm Trustwave. "A real pen test is supposed to have an end goal and model a threat. It's kind of cool to hear organizations do this."

He said the use of zero-day attacks is rare but by no means unheard of in "engagements," as specific drills are known in pen-testing parlance. He recalled an engagement from a few years ago of a "huge multinational company" that had its network and desktop computers fully patched and configured in a way that made them hard to penetrate. As his team probed the client's systems, members discovered 20 Internet-connected, high-definition surveillance cameras. Although the default administrator passwords had been changed, the Trustwave team soon discovered two undocumented backdoors built into the surveillance cameras' authentication system.

An image retrieved from high-definition surveillance cameras used by a large company. During a penetration test, Trustwave employees used them to steal "tons" of login credentials.

I know that at my former employer nobody would do this; they would see it as an unnecessary risk that could bring down production systems and cost them money. In the short run, yes, it could cost them money. In the long run, however, it will save them money and ensure their systems are more stable and secure.

I guess what I'm getting at is that if a company prioritizes short-term profit over long-term profit, it will pay for it when a hacker gets into its systems, costing it a LOT of money, not to mention its reputation, and even more money after that. Unfortunately, execs like to stick their heads in the ground and pretend everything is dandy until something kicks them in the ass.

In security, a false alarm that gets holes fixed is a good thing. The harder the path for the hacker, the sooner you can detect him, isolate him, and ensure he does minimal damage at worst.

Good to see someone taking this seriously, especially a company like Facebook, which holds a massive amount of sensitive data and analytics. Their bug bounty program has also been very successful, and they've shown they're not afraid to shell out for substantial discoveries. All in all, despite their despicable privacy practices, they're doing a stand-up job keeping their users protected, certainly more than other companies their size that fall over to script kiddies exploiting five-year-old SQLi vulnerabilities.

Oh boy, how network security has evolved since I first got interested in it. I still remember the times when reading aleph1's Smashing The Stack For Fun And Profit was all you needed to start hacking away. And retrieving the /etc/passwd file was really all you needed (encrypted passwords were stored in that file, hence its name), since a PC running John The Ripper could crack most passwords in a few days. And then you could telnet to the machine and log in directly as root. Ha ha! Now it sounds like a joke.

Today, big groups of well-funded professionals try to protect insanely complex networks from other well-funded groups of professionals. It's an impressive spectacle, to be sure, but also very frightening and saddening at the same time. Saddening because, I think, in spite of all the advances in protecting our networks (yup, /etc/passwd doesn't store passwords anymore), we are even more vulnerable to network attacks than we were 15 years ago. It's safe to assume that most of the popular software we use these days (operating systems, web browsers, etc.) has zero-day vulnerabilities known to some people. It's starting to feel like we are losing the battle.
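The commenter's point about /etc/passwd can be illustrated with a short sketch. The entry strings and hash below are invented for illustration: the second colon-separated field of a passwd-style line once held the world-readable crypt(3) password hash, while modern systems store only an "x" placeholder there and keep the real hash in /etc/shadow, readable only by root.

```python
def passwd_field(entry: str) -> str:
    """Return the password field (second field) of a passwd-style line."""
    return entry.split(":")[1]

# Pre-shadow era: the 13-character DES-crypt hash sat in world-readable
# /etc/passwd, ready for offline cracking with a tool like John The Ripper.
legacy = "alice:abJnggxhB/yWI:1000:1000:Alice:/home/alice:/bin/sh"

# Modern layout: only a placeholder remains; the hash lives in /etc/shadow.
modern = "alice:x:1000:1000:Alice:/home/alice:/bin/sh"

print(passwd_field(legacy))  # the once-crackable hash
print(passwd_field(modern))  # just the "x" placeholder
```

Moving the hashes into a root-only shadow file is exactly why the old grab-the-file-and-crack-it attack the commenter describes no longer works on its own.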

The email from the FBI agent includes (what looks like) a line from a HTTP server access log, presumably from facebook's own servers.

Given that an email from fbi.gov (containing a log entry) didn't seem unusual for the security team, does this mean that the FBI has general access to FB's logs?

Could just as easily have come from Fiddler, a client side web debugging proxy. In fact, when you look at a request header in Fiddler there is a field where you can copy exactly that information and nothing more.

That doesn't mean it came from Fiddler though. It just means that it is common information to be captured during a web request. I would expect it to look like it came from ANY program that logs web requests.
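For readers unfamiliar with what such a log line looks like, here is a sketch of parsing one. The entry below is hypothetical and written in the widely used Common Log Format; the actual line from the drill e-mail is not reproduced in the article, and proxies like Fiddler capture essentially the same fields.

```python
import re

# A made-up access-log entry in Common Log Format (CLF).
line = '10.0.0.1 - - [31/Oct/2012:07:01:00 -0700] "GET /backdoor.php HTTP/1.1" 200 512'

# Fields: client IP, identd, user, timestamp, request line, status, size.
CLF = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d+) (?P<size>\d+)'
)

m = CLF.match(line)
print(m.group("ip"), m.group("path"), m.group("status"))
```

Because every server, proxy, and debugging tool in the request path can record these same fields, a single quoted log line really doesn't identify where it was captured.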

This makes me want to take another look at my network’s security. Would I even know if my home network was compromised?

That may be a fun article: ways to protect your home’s network beyond just using WPA2 for wireless.

Either way, great article Dan!

While not all-encompassing, if you want to see if someone has a machine on your network, you can dig into your router to display all devices connected/current IP leases. Your router may even show past leases assigned with hostnames or MAC. Most routers I've encountered provide this functionality by default but your mileage may vary.

Unfortunately, you are far more likely to encounter malicious software that has been installed on one of your computers that "phones home" to a command and control center than someone physically connecting to your network. At this point you're not really dealing with your "network" but you're looking at software (including software firewalls).

Honestly it's a very tough question you've posed which is why there is so much business in the network security space.
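The router-lease check suggested above can also be approximated from a computer on the network by inspecting the ARP table. This is only a sketch: the hostnames and MAC addresses below are invented, the input mimics `arp -a`-style output, and a real audit would compare against an inventory of devices you actually own.

```python
import re

# Invented sample in the style of `arp -a` output on Linux.
sample = """\
router.lan (192.168.1.1) at a4:2b:b0:01:02:03 [ether] on eth0
laptop.lan (192.168.1.23) at 5c:f9:38:aa:bb:cc [ether] on eth0
? (192.168.1.77) at de:ad:be:ef:00:01 [ether] on eth0
"""

# hostname, dotted-quad IP, 17-character colon-separated MAC.
ENTRY = re.compile(r'(\S+) \((\d+\.\d+\.\d+\.\d+)\) at ([0-9a-f:]{17})')

def known_devices(arp_output, trusted_macs):
    """Return ARP entries whose MAC address is not in the trusted set."""
    unknown = []
    for host, ip, mac in ENTRY.findall(arp_output):
        if mac not in trusted_macs:
            unknown.append((host, ip, mac))
    return unknown

trusted = {"a4:2b:b0:01:02:03", "5c:f9:38:aa:bb:cc"}
print(known_devices(sample, trusted))  # any device you didn't expect
```

As the comment above notes, though, this only catches devices joining the network; it won't spot compromised software already phoning home from a machine you trust.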

The email from the FBI agent includes (what looks like) a line from a HTTP server access log, presumably from facebook's own servers.

Given that an email from fbi.gov (containing a log entry) didn't seem unusual for the security team, does this mean that the FBI has general access to FB's logs?

I don't know anything specific about facebook, but the federal government has active cybercrime programs / partnerships with large US companies. It is not unusual to have ongoing briefs, info exchanges, and contact with the FBI.

Side story:

I was in line to board a plane to Las Vegas for DefCon last year and the group of people in front of me were all facebook employees (thanks for all wearing the team t-shirt for easy identification). I don't know if they were IT security, but they were talking about how malware is a huge problem for them and consumes a lot of their time and they can't seem to make a dent in it. I thought it was interesting.

Facebook is doing what they should do - test their systems and people. Only realistic testing will find the flaws. Also, if they ever did get sued for a breach they honestly can say they were proactive in trying to find these flaws.

It is short term money spent to ensure long term health.

And more importantly, will help stave off a future class action suit if your security really isn't up to snuff. By testing meticulously, even in the event you are breached, you have something to fall back on to say that you did your best to avoid it, which may lessen the damages.

This was a fascinating post, and I'm surprised that it hasn't got more comments. On the other hand, it doesn't really leave a lot to comment on, other than "cool!". I'm not a fan of facebook, but here's something they are doing right.

While you're busy with your nose in your computer, you might want to look up to see what's physically going on around you every now and then. Worked at a company 10 years ago with a great IT group that kept the servers ultra secure. But some bozo walked around and stole all the RAM out of the computers and sold it for some quick cash. The only way they caught the guy was by cross-referencing badge-reader logins, b/c apparently the home office didn't think our off-site campus was worth having video surveillance at. The IT guys were sort of embarrassed that they had spent so much effort hardening the servers, AV, etc., yet neglected to put a little $1 lock on every computer to prevent petty theft.

Am I reading this correctly? You're a pimple-faced jagoff with a less-than-stellar social media company, as attested to by a very weak IPO followed by being bent over ever since... and you have your own personal FBI agent at the taxpayers' expense? Really?