Saturday, January 30, 2010

I expect many readers will recognize the image at left as representing part of the final space battle in Star Trek II: The Wrath of Khan. During this battle, Kirk and Spock realize Khan's tactics are limited. Khan is treating the battle as if it were occurring on the open seas, not in space. Spock says:

He is intelligent, but not experienced. His pattern indicates two-dimensional thinking.

I thought this quote could describe many of the advanced persistent threat critics, particularly those who claim "it's just espionage" or "there's nothing new about this." Consider this one last argument to change your mind. (Ha, like that will happen. For everyone else, this is how I arrive at my conclusions.)

I think the problem is APT critics are thinking in one or two dimensions at most, when really this issue has at least five. When you only consider one or two dimensions, of course the problem looks like nothing new. When you take a more complete look, it's new.

Offender. We know who the attacker is, and like many of you, I know this is not their first activity against foreign targets. I visited the country as an active duty Air Force intelligence officer in 1999. I got all the briefings, etc. etc. This is not the first time I've seen network activity from them. Wonderful.

Defender. We know the offender has targeted national governments and militaries, like any nation-state might. What's different about APT is the breadth of their target base. Some criticize the Mandiant report for saying:

The APT isn't just a government problem; it isn't just a defense contractor problem. The APT is everyone's problem. No target is too small, or too obscure, or too well-defended. No organization is too large, too well-known, or too vulnerable. It's not spy-versus-spy espionage. It's spy-versus-everyone.

The phrasing here may be misleading (i.e., APT is not attacking my dry cleaner) but the point is valid. Looking over the APT target list, the victims cover a broad sweep of organizations. This is certainly new.

Means. Let's talk espionage for a moment. Not everyone has the means to be a spy. You probably heard how effective the idiots who tried bugging Senator Landrieu's office were. With computer network exploitation (at the very least), those with sufficient knowledge and connectivity can operate at nearly the same level as a professional spy. You don't have to spend nearly as much time teaching tradecraft for CNE, compared to spycraft. You can often hire someone with private experience as a red teamer/pen tester and then just introduce them to your SOPs. Try hiring someone who has privately learned national-level spycraft.

Motive. Besides "offender," this is the second of the two dimensions that APT critics tend to fixate upon. Yes, bad people have tried to spy on other people for thousands of years. However, in some respects even this is new, because the offender has his hands in so many aspects of the victim's centers of power. APT doesn't only want military secrets; it wants diplomatic, AND economic, AND cultural, AND...

Opportunity. Connectivity creates opportunity in the digital realm. Again, contrast the digital world with the analog world of espionage. It takes a decent amount of work to prepare, insert, handle, and remove human spies. The digital equivalent is unfortunately still trivial in comparison.

To summarize, I think a lot of APT critics are focused on offender and motive, and ignore defender, means, and opportunity. When you expand beyond two-dimensional thinking, you'll see that APT is indeed new, without even considering technical aspects.

The Obama administration announced the sale Friday of $6 billion worth of Patriot anti-missile systems, helicopters, mine-sweeping ships and communications equipment to Taiwan in a long-expected move that sparked an angry protest from China.

In a strongly worded statement on Saturday, China's Defense Ministry suspended military exchanges with the United States and summoned the U.S. defense attache to lodge a "solemn protest" over the sale, the official Xinhua news agency reported.

"Considering the severe harm and odious effect of U.S. arms sales to Taiwan, the Chinese side has decided to suspend planned mutual military visits," Xinhua quoted the ministry as saying. The Foreign Ministry said China also would put sanctions on U.S. companies supplying the equipment.

It would have been interesting if the Obama administration had announced its arms sale in these terms:

"Considering the severe harm and odious effect of the advanced persistent threat, the American side has decided to sell the following arms to Taiwan."

It's time for the information security community to realize this problem is well outside our capability to really make a difference, without help from our governments.

If you want to read a concise yet informative and clue-backed report on advanced persistent threat, I recommend completing this form to receive the first Mandiant M-Trends report.

Mandiant occupies a unique position with respect to this problem because they are one of only two security service companies with substantial counter-APT consulting experience.

You may read blog posts and commentary from other security service providers who either 1) suddenly claim counter-APT expertise or 2) deride "APT" as just a marketing term, or FUD, or some other term to hide their inexperience with this problem. The fact remains that, when organizations meet in closed forums to do real work on this problem, the names and faces are fairly constant. They don't include those trying to make an APT "splash" or those pretending APT is not a real problem.

Mandiant finishes its report with the following statement:

[T]his is a war of attrition against an enemy with extensive resources. It is a long fight, one that never ends. You will never declare victory.

I can already hear the skeptics saying "It never ends, so you can keep paying Mandiant consulting fees!" or "It never ends, so you can keep upgrading security products!" You're wrong, but nothing I say will convince some of you. The fact of the matter is that until the threat is addressed at the nation-state to nation-state level, victim organizations will continue to remain victims. This is not a problem that is going to be solved by victims better defending themselves. The cost is simply too great to take a vulnerability-centric approach. We need a threat-centric approach, where those with the authority to apply pressure on the threat are allowed to do so, using a variety of instruments of national power. This is the unfortunate reality of the conflict in which we are now engaged.

Wednesday, January 27, 2010

I had fairly high hopes for Professional Penetration Testing (PPT). The book looks very well organized, and it is published in the new Syngress style that is a big improvement over previous years. Unfortunately, PPT should be called "Professional Pen Testing Project Management." The vast majority of this book is about non-technical aspects of pen testing, with the remainder being the briefest overview of a few tools and techniques. You might find this book useful if you either 1) know nothing about the field or 2) are a pen testing project manager who wants to better understand how to manage projects. Those looking for technical content would be better served by a book like Professional Pen Testing for Web Applications by Andres Andreu, even though that book is 3 years older and focused on Web apps.

This is my 300th Amazon.com book review. I wish I had planned the review schedule such that I reviewed a five star book for number 300.

At least three US oil companies were the target of a series of previously undisclosed cyberattacks that may have originated in China and that experts say highlight a new level of sophistication in the growing global war of Internet espionage.

The oil and gas industry breaches, the mere existence of which has been a closely guarded secret of oil companies and federal authorities, were focused on one of the crown jewels of the industry: valuable “bid data” detailing the quantity, value, and location of oil discoveries worldwide...

The companies – Marathon Oil, ExxonMobil, and ConocoPhillips – didn’t realize the full extent of the attacks, which occurred in 2008, until the FBI alerted them that year and in early 2009. Federal officials told the companies proprietary information had been flowing out, including to computers overseas...

“What these guys [corporate officials] don’t realize, because nobody tells them, is that a major foreign intelligence agency has taken control of major portions of their network,” says the source familiar with the attacks. “You can’t get rid of this attacker very easily. It doesn’t work like a normal virus. We’ve never seen anything this clever, this tenacious...”

Many experts say the theft of this kind of information – about, for instance, the temperature and valve settings of chemical plant processes or the source code of a software company – can give competitors an advantage, and over time could degrade America’s global economic competitiveness...

Even more basic, many corporate executives aren’t aware of how sophisticated the new espionage software has become and cling to outdated forms of electronic defense...

[B]ased on the kind of information that was being stolen, federal officials said a key target appeared to be bid data potentially valuable to “state-owned energy companies...”

China would certainly be interested in this kind of data, experts say. With the country’s economy consuming huge amounts of energy, China’s state-owned oil companies have been among the most aggressive in going after available leases around the world, particularly in Nigeria and Angola, where many US companies are also competing for tracts...

“What I’m saying to you is that it’s not just the oil and gas industry that’s vulnerable to this kind of attack: It’s any industry that the Chinese decide they want to take a look at,” says an FBI source. “It’s like they’re just going down the street picking out what they want to have.”

Monday, January 25, 2010

Finally, the larger problem is that it only took one exploit to compromise these organizations. One exploit should never ruin you day. [sic]

No, that is wrong. The larger problem is not that it "only took one exploit to compromise these organizations." I see this mindset in many shops that aren't defending enterprises on a daily basis. This point of view incorrectly focuses on exploitation as a point-in-time, "skirmish" event, disconnected from the larger battle or the ultimate campaign.

The real "larger problem" is that the exploit is only part of a campaign, where the intruder never gives up. In other words, comprehensive threat removal is the problem. There is no "cleaning," or "disinfecting," or "recovery" at the battle or campaign level. You might restore individual assets to a semi-trustworthy state, but the advanced persistent threat only cares that they can maintain long-term access to the environment.

If the problem were simply defending against a compromised asset, we would not still be talking about this issue. Rather, the problem is that it is exceptionally difficult, if not impossible, to remove this threat. Individual exploits add to the problem but they are only skirmishes.

Sunday, January 24, 2010

Good network troubleshooting books are rare. TCP/IP Analysis and Troubleshooting Toolkit by Kevin Burns (2003), Troubleshooting Campus Networks by Priscilla Oppenheimer and Joseph Bardwell (2002), and Network Analysis and Troubleshooting by Scott Haugdahl (1999) come to mind. Network Maintenance and Troubleshooting Guide (NMATG) brings a whole new dimension to network analysis, particularly at the lowest levels of the OSI model. I found topics covered in NMATG that were never discussed in other books. While not for every networking person, NMATG is a singular reference that belongs on a network professional's shelf.

Jim Manico invited me to speak on the OWASP Podcast. If you'd like me to try answering specific questions, please email them to podcast at owasp.org. When the show is posted I will let everyone know here. Thank you.

Attribution means identifying the threat, meaning the party perpetrating the attack. Attribution is not just malware analysis. There are multiple factors that can be evaluated to try to attribute an attack.

Timing. What is the timing of the attack, i.e., fast, slow, in groups, isolated, etc.?

Victims or targets. Who is being attacked?

Attack source. What is the technical source of the attack, i.e., source IP addresses, etc.?

Delivery mechanism. How is the attack delivered?

Vulnerability or exposure. What service, application, or other aspect of business is attacked?

Exploit or payload. What exploit is used to attack the vulnerability or exposure?

Weaponization technique. How was the exploit created?

Post-exploitation activity. What does the intruder do next?

Command and control method. How does the intruder establish command and control?

Command and control servers. To what systems does the intruder connect to conduct command and control?

Tools. What tools does the intruder use post-exploitation?

Persistence mechanism. How does the intruder maintain persistence?

Propagation method. How does the intruder expand control?

Data target. What data does the intruder target?

Data packaging. How does the intruder package data for exfiltration?

Exfiltration method. How does the intruder exfiltrate data?

External attribution. Did an external agency share attribution data based on their own capabilities?

Professionalism. How professional is the execution, e.g., does keystroke monitoring show frequent mistakes, is scripting used, etc.?

Variety of techniques. Does the intruder have many ways to accomplish its goals, or are they limited?

Scope. What is the scope of the attack? Does it affect only a few systems, many systems?

As you can see, there are many characteristics that can be assessed in order to determine if an incident is likely caused by a certain party. Mature security shops use profiles like this to make their own intelligence assessments, often confidentially collaborating with others sharing the same problems.
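The factors above can be treated as a structured profile and compared across incidents. Here is a minimal sketch of that idea; all field names, sample values, and the scoring scheme are illustrative inventions, not any real CIRT's schema:

```python
from dataclasses import dataclass, field

@dataclass
class IntrusionProfile:
    """A small subset of the attribution factors, for illustration."""
    name: str
    timing: str = ""                                   # e.g., "slow", "fast"
    attack_sources: set = field(default_factory=set)   # e.g., source IP ranges
    delivery: str = ""                                 # delivery mechanism
    c2_servers: set = field(default_factory=set)       # command and control servers
    persistence: str = ""                              # persistence mechanism
    data_targets: set = field(default_factory=set)     # targeted data categories

def overlap_score(a: IntrusionProfile, b: IntrusionProfile) -> int:
    """Count matching characteristics between two profiles."""
    score = 0
    # Scalar factors match when identical and non-empty.
    for attr in ("timing", "delivery", "persistence"):
        if getattr(a, attr) and getattr(a, attr) == getattr(b, attr):
            score += 1
    # Set-valued factors match when they share any element.
    for attr in ("attack_sources", "c2_servers", "data_targets"):
        if getattr(a, attr) & getattr(b, attr):
            score += 1
    return score

known = IntrusionProfile(
    name="intrusion-set-alpha",
    timing="slow",
    attack_sources={"203.0.113.0/24"},
    delivery="spear phishing",
    c2_servers={"c2.example.net"},
    persistence="service implant",
    data_targets={"bid data"},
)
new_incident = IntrusionProfile(
    name="case-2010-01",
    timing="slow",
    attack_sources={"198.51.100.0/24"},
    delivery="spear phishing",
    c2_servers={"c2.example.net"},
    persistence="service implant",
    data_targets={"source code"},
)

# Shared timing, delivery, C2 server, and persistence mechanism.
print(overlap_score(known, new_incident))  # → 4
```

A real assessment would weight factors very differently (a shared C2 server says far more than shared timing), but even this toy version shows why malware alone is a weak basis for attribution.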

Thursday, January 21, 2010

We'd like to ask for your help. We're in the process of preparing a major funding proposal for improving Bro, focused on: improving the end-user experience (things like comprehensive documentation, polishing rough edges, fixing bugs); and improving performance.

This looks like a potentially excellent opportunity. However, a major element of winning the funding is convincingly demonstrating to the funders that Bro is already well-established across a large & diverse user community.

To develop that framing, we'd like to ask as many of you folks as possible to fill out the small questionnaire below. Please send the replies to Robin personally, not to the list (just replying to this mail should do the right thing). Assuming sufficient feedback, we'll post an anonymized summary to the list.

(Of course we already know about many of you, but collecting this information more systematically will allow us to put together a better overall view of the Bro community.)

In a recent Tweet I recommended reading Joe Stewart's insightful analysis of malware involved in Google v China. Joe's work is stellar as always, but I am reading more and more commentary that shows many people don't have the right frame of reference to understand this problem.

In brief, too many people are focusing on the malware alone. This is probably due to the fact that the people making these comments have little to no experience with the broader problems caused by advanced persistent threat. It's enough for them to look at the malware and then move to the next sample, or devise their next exploit, and so on. Those of us responsible for defending an enterprise can't just look at the problem from a malware, or even a technical, perspective.

[I]t is tempting to think Waziristan has hardly changed since those colonial days... Mostly, [the Pakistani Frontier Corps] discuss their belief that India is behind the current troubles on the frontier. Lieutenant-Colonel Tabraiz Abbas, just in from fighting the Mehsud militants, describes finding Indian-made arms on the battlefield. Substitute “Russian” for “Indian” and you have the standard British Great-Game gripe. As late as 1930, a senior British official, in dispatches stored in India’s national archives, reported that a clutch of Russian guns had been found in Waziristan: “Of these 36 are stamped with the ‘Hammer and Sickle’ emblem of the Soviet government, while one is an English rifle bearing the Czarist crest.”

Imagine if policy decisions were made on "rifle analysis" alone. Think of the havoc that an interloper could introduce by scattering weapons from other armies where a target of psychological operations would find them.

In summary, malware analysis is definitely an important part of attribution, but it's not the only part. Malware analysis is also not the only relevant aspect of Google v China. If you address the malware you won't solve the problem. The same goes for any vulnerabilities discovered during this event.

Wednesday, January 20, 2010

@taosecurity blog post request. Signs that an individual or organization is or may be an APT target. + other threat naming conventions

Tough but great questions. I better answer, or Jeremiah will find me and apply Brazilian Jiu Jitsu until I do. Let me take the second question first.

As I mentioned in Real Threat Reporting in 2005, "Titan Rain" became the popular term for one "intrusion set" involving certain actors. DoD applies various codewords to intrusion sets, and Titan Rain became popular with the publication of the Time article I referenced. If you read the Time article again you'll see at least one other reference, but I won't cite that here.

Some of you may remember "Solar Sunrise" from 1998 and "Moonlight Maze" from 1998-1999. Open reporting links the former to an Israeli named Ehud Tenenbaum and the latter to Russia. These are other examples of "intrusion sets," but they are not related to the current threat.

As far as other names for APT, they exist but are not shared with the public. Just as you might maintain code names for various intrusion sets or campaigns within your CIRT, various agencies track the same using their own terms. This can cause some confusion when different CIRTs try to compare notes, since none of us speak of the private names unless in an appropriate facility. The Air Force invented "APT" as an unclassified term that could be used to quickly keep various parties on the same page when speaking with defense partners.

Regarding who may be an APT target, I liked Steven Adair's Shadowserver post. The way most organizations learn that they have a problem is by receiving an external notification. The FBI and certain military units have been fairly active in this respect over the past three years. This marks quite a change in the relationship between the US government and private sector, and it's not limited to American companies. A little searching will reveal reports of other governments warning their companies of similar problems.

If your organization has not been contacted by an external agency, you might want to look at the potential objectives that I posted in What is APT and What Does It Want? Does your organization possess data that falls into one of the political, economic, technical, or military categories that could interest this sort of threat? Overall, my assessment of APT progress can be summarized this way:

Probably the next best way to determine if you are a target is to join whatever industry groups you can find and network with your peers. Develop relationships such that your peers feel comfortable sharing threat information with you. Do the same with government actors, especially the FBI. Many times these agencies are just sitting on data trying to figure out the right contacts.

I would beware of organizations that claim any product they sell will "stop APT" or "manage APT" or act as another silver bullet. We're already seeing some vendors jump on the counter-APT bandwagon with little clue what is happening. There are a couple of consultancies with deep knowledge on this topic. I'm not going to name them here but if you review the Incident Detection Summit 2009 agenda you can find them.

The degree of counter-APT experience on the speaker list varies considerably, but you can try using that list to validate if Company X has any relationship whatsoever to this problem. That doesn't mean companies or organizations not listed as speakers are "clueless;" a lot of counter-APT activity is simply "good IT." However, you shouldn't expect a random consultant to be able to sit down and explain the specifics of this problem to your CIO or CEO. Incidentally this is NOT a commercial for my company; I run an internal CIRT that only protects our assets.

Jeff Carr is a great digital security intelligence analyst and I've been fortunate to hear him speak several times. We've also separately discussed the issues he covers in Inside Cyber Warfare (ICW). While I find Jeff's insights very interesting and valuable, I think his first book could have been more coherent and therefore more readable. I believe Jeff should write a second edition that is more focused and perhaps more inclusive.

Registration is now open. Black Hat set five price points and deadlines for registration.

Super early ends 1 Feb

Early ends 1 Mar

Regular ends 1 Apr

Late ends 11 Apr

Onsite starts at the conference

Seats are filling -- it pays to register early!

If you review the Sample Lab I posted earlier this year, this class is all about developing an investigative mindset by hands-on analysis, using tools you can take back to your work. Furthermore, you can take the class materials back to work -- an 84 page investigation guide, a 25 page student workbook, and a 120 page teacher's guide, plus the DVD. I have been speaking with other trainers who are adopting this format after deciding they are also tired of the PowerPoint slide parade.

Feedback from my 2009 sessions was great. Two examples:

"Truly awesome -- Richard's class was packed full of content and presented in an understandable manner." (Comment from student, 28 Jul 09)

"In six years of attending Black Hat (seven courses taken) Richard was the best instructor." (Comment from student, 28 Jul 09)

If you've attended a TCP/IP Weapons School class before 2009, you are most welcome in the new one. Unless you attended my Black Hat training in 2009, you will not see any repeat material whatsoever in TWS2. Older TWS classes covered network traffic and attacks at various levels of the OSI model. TWS2 is more like a forensics class, with network, log, and related evidence.

Saturday, January 16, 2010

This has been the week to discuss the advanced persistent threat, although some people are already telling me Google v China with respect to APT is "silly," or that the attack vectors were what everyone has been talking about for years, and were somewhat sloppily orchestrated at that.

I think many of these critics are missing the point. As is often the case with sensitive issues, 1) those who know often can't say and 2) those who say often don't know. There are some exceptions worth noting!

One company that occupies a unique position with respect to this problem is Mandiant. Keep an eye on the APT tag of their M-unition blog. Mandiant's role as a consulting firm to many APT victims helps them talk about what they see without naming any particular victim.

I also recommend following Mike Cloppert's posts. He is a deep thinker with respect to counter-APT operations. Incidentally I agree with Mike that the US Air Force invented the term "advanced persistent threat" around 2006, not Mandiant.

Reviewing my previous blogging, a few old posts stand out. 4 1/2 years ago I wrote Real Threat Reporting, describing the story of Shawn Carpenter as reported by Time magazine. Back then the threat was called "Titan Rain" by Time. (This reflects the use of a so-called "intrusion set" to describe an incident.) Almost a year later Air Force Maj Gen Lord noted, "China has downloaded 10 to 20 terabytes of data from the NIPRNet. They're looking for your identity, so they can get into the network as you."

Now we hear of other companies beyond Google involved in this latest incident, including Yahoo, Symantec, Adobe, Northrop Grumman, Dow Chemical, Juniper Networks, and "human rights groups as well as Washington-based think tanks." (Sources 1 and 2.)

Let me put on the flight cap of a formally trained Air Force intelligence officer and try to briefly explain my understanding of APT in a few bullets.

Advanced means the adversary can operate in the full spectrum of computer intrusion. They can use the most pedestrian publicly available exploit against a well-known vulnerability, or they can elevate their game to research new vulnerabilities and develop custom exploits, depending on the target's posture.

Persistent means the adversary is formally tasked to accomplish a mission. They are not opportunistic intruders. Like an intelligence unit they receive directives and work to satisfy their masters. Persistent does not necessarily mean they need to constantly execute malicious code on victim computers. Rather, they maintain the level of interaction needed to execute their objectives.

Threat means the adversary is not a piece of mindless code. This point is crucial. Some people throw around the term "threat" with reference to malware. If malware had no human attached to it (someone to control the victim, read the stolen data, etc.), then most malware would be of little worry (as long as it didn't degrade or deny data). Rather, the adversary here is a threat because it is organized and funded and motivated. Some people speak of multiple "groups" consisting of dedicated "crews" with various missions.

Looking at the target list, we can perceive several potential objectives. Most likely, the APT supports:

Political objectives that include continuing to suppress its own population in the name of "stability."

Economic objectives that rely on stealing intellectual property from victims. Such IP can be cloned and sold, studied and underbid in competitive dealings, or fused with local research to produce new products and services more cheaply than the victims.

Technical objectives that further their ability to accomplish their mission. These include gaining access to source code for further exploit development, or learning how defenses work in order to better evade or disrupt them. Most worrying is the thought that intruders could make changes to improve their position and weaken the victim.

"This wasn't in my opinion ground-breaking as an attack. We see this fairly regularly," said Mikko Hypponen, of security firm F-Secure.

"Most companies just never go public," he added.

In some ways this comment is true, and in other ways I think it can mislead some readers. I believe it is true in the sense that many organizations are dealing with advanced persistent threats. However, I believe this comment leads some readers to focus incorrectly on two rather insignificant aspects of the Google incident: vulnerabilities and malware.

On the vulnerability front, we have a zero-day in Internet Explorer. I agree that this is completely routine, in a really disappointing way.

On the malware front, we have code submitted to Wepawet. I agree that this is also not particularly interesting, although I would like to know how it ended up being posted there!

Five issues make Google v China different for me.

The victim made a public statement about the intrusion. I read that this was a difficult decision to make and it took strong leadership to see it through:

Google Inc.'s startling threat to withdraw from China was an intensely personal decision, drawing its celebrated founders and other top executives into a debate over the right way to confront the issues of censorship and cyber security.

Google's very public response to what it called a "highly sophisticated and targeted attack on our corporate infrastructure originating from China" was crafted over a period of weeks, with heavy involvement from Google's co-founders, Larry Page and Sergey Brin.

The victim is not alone. Google isn't alone in the sense that firms suffering from Conficker last month weren't alone, i.e., this isn't a case of widespread malware. Instead, we're hearing that multiple companies are affected.

The victim is not a national government. Don't forget all the China incidents involving national governments that I followed from summer 2007 through 2008.

The victim could suffer further damage as a result of this statement and decision. Every CIO, CTO, CSO, and CISO magazine in the world talks about "aligning with business," blah blah. Business is supposed to rule. Instead, we have a situation where the self-reported "theft of intellectual property from Google" plus "accessing the Gmail accounts of Chinese human rights activists" resulted in a business decision to alter and potentially cancel operations. That astounds me. You can claim Baidu is beating Google, but I don't buy it as the real reason Google is acting like this.

Every so often I receive questions from blog readers. The latest centered on the following question:

What level and extent should a security team and investigators be allowed to operate without having to ask for permission?

This is an excellent question, and as with most issues of authority it depends on the organization, its history, culture, purpose, and people.

From the perspective of the security team, I tend to want as much access as is required to determine the security state of an asset. That translates into being able to access or discover evidence as quickly and independently as possible, preferably in a way that involves no human intervention aside from the query by the security team. When the security analyst can retrieve the information needed to make a decision without asking for human permission or assistance, I call that self-reliant security operations. Anything short of that situation is suboptimal but not uncommon.

Simultaneously, I want the least amount of access needed to do the work. If the security team can get what it needs with a read-only mechanism, so much the better. I actively avoid powerful or administrative accounts. Possessing such accounts is usually an invitation to being blamed for a problem.

Assume then that there is a situation where the security team believes it needs a certain elevated level of access in order to do its mission. In my experience, it is rare to obtain that permission by making some sort of intellectual or process-oriented argument. Rather, the security team should make a plan and justify the need for such access, but wait for an intrusion to occur that demonstrates why elevated access would improve the incident detection and response process.

In many cases, management with authority to grant or expedite granting access lacks the focus or mindset to think about making changes until an incident rocks their world. Once management is ready to devote attention to a problem, they are often eager to hear of changes that would improve the situation. At that point one should make a case for the new capability. We see this pattern repeatedly in high-profile security cases; airline travel is the most obvious.

Aside from waiting for a catastrophe, the next-best option is to collect some sort of metric that shows how the current suboptimal state of affairs should be unacceptable to management. If you could show a substantial decrease in response time, an increase in capability, a decrease in cost, etc., you might be able to convince management to make a change without resorting to an incident scenario. This second option is less likely to work than the disaster method, but at the very least it does lay useful groundwork prior to an incident.
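One way to make the metric case concrete is to compare median containment times before and after a capability change. A toy sketch of that calculation follows; the incident records, timestamps, and field layout are invented for the example, not any real ticketing schema:

```python
from datetime import datetime
from statistics import median

# Hypothetical incident records: (detected, contained) timestamp pairs.
incidents_before = [
    (datetime(2009, 3, 2, 9, 0),   datetime(2009, 3, 5, 17, 0)),
    (datetime(2009, 6, 11, 8, 30), datetime(2009, 6, 13, 12, 0)),
    (datetime(2009, 9, 20, 14, 0), datetime(2009, 9, 26, 10, 0)),
]
incidents_after = [
    (datetime(2009, 11, 4, 10, 0), datetime(2009, 11, 4, 16, 0)),
    (datetime(2009, 12, 1, 9, 15), datetime(2009, 12, 2, 11, 0)),
]

def median_hours_to_contain(incidents):
    """Median hours from detection to containment."""
    return median((end - start).total_seconds() / 3600 for start, end in incidents)

before = median_hours_to_contain(incidents_before)
after = median_hours_to_contain(incidents_after)
print(f"before: {before:.1f}h, after: {after:.1f}h")  # → before: 80.0h, after: 15.9h
```

Use the median rather than the mean so one marathon incident does not dominate the figure; a number like this, trended over quarters, is the kind of groundwork that survives until management is ready to listen.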

Registration is now open. Black Hat set five price points and deadlines for registration, but only these three are left.

Regular ends 15 Jan

Late ends 30 Jan

Onsite starts at the conference

Seats are filling -- it pays to register early!

If you review the Sample Lab I posted earlier this year, this class is all about developing an investigative mindset through hands-on analysis, using tools you can take back to your work. Furthermore, you can take the class materials back to work -- an 84-page investigation guide, a 25-page student workbook, and a 120-page teacher's guide, plus the DVD. I have been speaking with other trainers who are adopting this format after deciding they are also tired of the PowerPoint slide parade.

Feedback from my 2009 sessions was great. Two examples:

"Truly awesome -- Richard's class was packed full of content and presented in an understandable manner." (Comment from student, 28 Jul 09)

"In six years of attending Black Hat (seven courses taken) Richard was the best instructor." (Comment from student, 28 Jul 09)

If you've attended a TCP/IP Weapons School class before 2009, you are most welcome in the new one. Unless you attended my Black Hat training in 2009, you will not see any repeat material whatsoever in TWS2. Older TWS classes covered network traffic and attacks at various levels of the OSI model. TWS2 is more like a forensics class, with network, log, and related evidence.

I will also be teaching in Barcelona and Las Vegas, but I will announce those dates later.

I strongly recommend attending the Briefings on 2-3 Feb. Maybe it's just my interests, but I find the scheduled speaker list to be very compelling.

Tuesday, January 12, 2010

Adobe became aware on January 2, 2010 of a computer security incident involving a sophisticated, coordinated attack against corporate network systems managed by Adobe and other companies. We are currently in contact with other companies and are investigating the incident.

Let's assume, due to language and news timing, that it's also APT. Why would APT exploit Adobe? Am I giving Adobe too much credit if I hypothesize that APT wanted to know more about Adobe's product security plans, in order to continue exploiting Adobe's products?

If that is the case, who else might APT infiltrate? Should we start looking for similar announcements from other software vendors?

I'm wondering if China has crossed a line with its Google hack. It's relatively easy for the Obama administration to pretend that nothing's amiss when it's playing politics with the Chinese government. But when an American company that was just named "word of the decade" proclaims to the world that it is being exploited by Chinese intruders, can the President turn a blind eye to that? This could be the first publicity-driven incident (i.e., something that comes from public sources) that the new Cyber Czar will have to address, if not higher officials.

Oh, and expect China to issue a statement saying that it strongly denies official involvement, and that it prosecutes "hackers" to the fullest extent of its laws. That's nice.

After posting Google v China I realized this is a showdown like no other. In my experience, no one "ejects" the advanced persistent threat. If you think they are gone, it's either 1) because they decided to leave or 2) you can't find them.

Now we hear Google is the latest victim. Google is supposed to be a place where IT is so awesome and employees so smart that servers basically run themselves, and Google's HR has to leave some of the other smart people "in place" to help the rest of us cope with life. Could Google be the first company to remove APT despite APT's desire to remain persistent? Google v China could be Mechagodzilla v Godzilla. No one without inside knowledge will know how this battle concludes, and it probably will not conclude until one of the combatants is gone.

In mid-December, we detected a highly sophisticated and targeted attack on our corporate infrastructure originating from China that resulted in the theft of intellectual property from Google...

First, this attack was not just on Google. As part of our investigation we have discovered that at least twenty other large companies from a wide range of businesses--including the Internet, finance, technology, media and chemical sectors--have been similarly targeted...

These attacks and the surveillance they have uncovered--combined with the attempts over the past year to further limit free speech on the web--have led us to conclude that we should review the feasibility of our business operations in China. We have decided we are no longer willing to continue censoring our results on Google.cn, and so over the next few weeks we will be discussing with the Chinese government the basis on which we could operate an unfiltered search engine within the law, if at all. We recognize that this may well mean having to shut down Google.cn, and potentially our offices in China.

I have to really applaud Google for saying they might shut down operations in a country of 1.4 billion potential consumers as a result of an incident detection and response!

There were many events last year that fulfilled my prediction for 2009: "Expect at least one cloud security incident to affect something you value." I think this one wins hands down.

Never mind the China angle for a moment. All of us should stop and consider what sort of data we are storing at Google, and in what form that data is stored. Google's Keeping Your Data Safe post for Enterprise customers claims "While some intellectual property on our corporate network was compromised, we believe our customer cloud-based data remains secure." However, my experience with these sorts of incidents is that if it occurred in "mid-December," Google will be spending the next several months realizing how large the exposure really is.

Friday, January 08, 2010

Today, 8 January 2010, is the 7th birthday of TaoSecurity Blog. I wrote my first post on 8 January 2003 while working as an incident response consultant for Foundstone. 2542 posts (averaging 363 per year) later, I am still blogging.

I don't have any changes planned here. I plan to continue blogging, especially with respect to network security monitoring, incident detection and response, network forensics, and FreeBSD when appropriate. I especially enjoy reading your comments and engaging in informed dialogues. Thanks for joining me these 7 years -- I hope to have a ten-year post in 2013!

The image shows Elvis training with Ed Parker, founder of American Kenpo. As I like to tell my students, Elvis' stance is so wide it would take him a week to react to an attack. Then again, he's Elvis.

I studied Kenpo in San Antonio, TX and would like to return to practicing, along with ice hockey, if my shoulders cooperate!

[T]here's an ugly truth that DLP vendors don't like to talk about: Managing DLP on a large scale can drag your staff under like a concrete block tied to their ankles.

This is important, and Randy explains why in the rest of the article.

Before you fire off your first scan to see just how much sensitive data is floating around the network, you'll need to create the policies that define appropriate use of corporate information.

This is a huge issue. Who is to say just what activity is "authorized" or "not authorized" (i.e., "business activity" vs "information security incident")? I have seen a wide variety of activities that scream "intrusion!" only to hear, "well, we have a business partner in East Slobovistan who can only accept data sent via netcat in the clear." Notice I also emphasized "who." It's not enough just to recognize badness; someone has to be able to classify badness, with authority.

Once your policies are in order, the next step is data discovery, because to properly protect your data, you must first know where it is.

Good luck with this one. When you solve it at scale, let me know. This is actually the one area where I think "DLP" can really be rebranded as an asset discovery system, where the asset is data. I'd love to have a DLP deployment just to find out what is where and where it goes, under normal conditions, as perceived by the DLP product. That's a start at least, and better than "I think we have a server in East Slobovistan with our data..."
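To make the "DLP as data-asset discovery" idea concrete, here is a minimal sketch of the kind of scan such a deployment performs: walk a file tree and record which files contain data matching sensitive patterns. The patterns and paths here are illustrative assumptions on my part, not any vendor's actual signatures:

```python
import os
import re

# Hypothetical sensitive-data patterns (illustrative only).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # dashed SSN format
    "amex": re.compile(r"\b3[47]\d{13}\b"),            # AmEx-style card number
}

def scan_file(path):
    """Return the names of the patterns found in a single file."""
    try:
        with open(path, "r", errors="ignore") as f:
            text = f.read()
    except OSError:
        return []
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

def discover(root):
    """Map each file under root to the sensitive patterns it contains."""
    findings = {}
    for dirpath, _dirs, files in os.walk(root):
        for fname in files:
            path = os.path.join(dirpath, fname)
            hits = scan_file(path)
            if hits:
                findings[path] = hits
    return findings
```

Even this toy version shows the value: the output is an inventory of where sensitive data lives under normal conditions, which beats "I think we have a server in East Slobovistan with our data..."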

Then there's the issue of accuracy... Be prepared to test the data identification capabilities you've enabled. The last thing you want is to wade through a boatload of false-positive alerts every morning because of a paranoid signature set. You also want to make sure that critical information isn't flying right past your DLP scanners because of a lax signature set.

False positives? Signature sets? What is this, dead technology? That's right. Let's say your DLP product runs passively in alert-only mode. How do you know if you can trust it? That might require access to the original data, or additional investigation, to evaluate how and why the DLP product came to the alert-worthy conclusion it did.
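As a toy illustration of the signature-accuracy tradeoff, compare a "paranoid" pattern with a stricter one. Both patterns are hypothetical, chosen only to show why an analyst needs the matched data to adjudicate an alert:

```python
import re

# Hypothetical signatures: the "paranoid" rule fires on any nine
# consecutive digits; the stricter rule requires the dashed SSN format.
paranoid = re.compile(r"\d{9}")
stricter = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def alerts(signature, text):
    """Return the strings a signature would flag in a message."""
    return signature.findall(text)

phone_msg = "Conference bridge: dial 555123456, then the PIN."
ssn_msg = "Employee SSN on record: 078-05-1120."

# The paranoid rule flags the phone number -- a false positive an
# analyst can only dismiss by seeing the original message.
assert alerts(paranoid, phone_msg) == ["555123456"]
assert alerts(stricter, phone_msg) == []
assert alerts(stricter, ssn_msg) == ["078-05-1120"]
```

Tuning means iterating on this tradeoff: the stricter rule would miss SSNs written without dashes, which is exactly the "lax signature set" failure mode Randy warns about.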

Paradoxically, if the DLP product is in active blocking mode, your analysts have an easier time separating true problems from false problems. If active DLP blocks something important, the user is likely to complain to the help desk. At least you can figure out what the user did that upset both DLP and the denied user.

However, as with intrusion-detection systems, not all actions can be automated, and network-based DLP will generate events that must be investigated and adjudicated by humans. The more aggressively you set your protection parameters, the more time administrators will spend reviewing events to decide which communications can proceed and which should be blocked.

Ah, we see the dead technology -- IDS -- mentioned explicitly. Let's face it -- running any passive alerting technology, and making good sense of the output, requires giving the analyst enough data to make a decision. This is the core of NSM philosophy, and why NSM advocates collecting a wide variety of data to support analysis.