1. SQL Injection Attacks and Defense by Justin Clarke, et al; Syngress. This was a really tough call. Any of the top 4 books could easily have been the best book I read in 2009. Congratulations to Syngress for publishing another winner. SQL injection is probably the number one problem for any server-side application, and this book is unequaled in its coverage.

Looking at the publisher count, top honors in 2009 go to Syngress for 2 titles, followed by Wiley, Cisco Press, O'Reilly, and devGuide.net, each with one.

Thank you to all publishers who sent me books in 2009. I have plenty more to read in 2010.

Congratulations to all the authors who wrote great books in 2009, and who are publishing titles in 2010!

Wednesday, December 30, 2009

Matt Olney and I spoke about the role of a Product Security Incident Response Team (PSIRT) at my SANS Incident Detection Summit this month. I asked if he would share his thoughts on how software vendors should handle vulnerability discovery in their software products.

I am really pleased to report that Matt wrote a thorough, public blog post titled Matt's Guide to Vendor Response. Every software vendor must read and heed this post. "Software vendor" includes any company that sells a product that runs software, whether it is a PC, mobile device, or a hardware platform executing firmware. Hmm, that includes just about everyone these days, except the little old ladies selling fabric at the hobby store.

Seriously, let's make 2010 the year of the PSIRT -- the year companies make dealing with vulnerabilities in their software an operational priority. I'm not talking about "building security in" -- that's been going on for a while. Until I can visit a variation of company.com/psirt, I'm not satisfied. For that matter, I'd like to see company.com/cirt as well, so outsiders can contact a company that might be inadvertently causing trouble for Internet users. (And yes, if you're wondering, we're working on both at my company!)

I am trying to get my company's sponsorship for your class at Black Hat. However, I was asked to justify choosing between your class and SANS 503, Intrusion Detection In-Depth.

Would you be able to provide some advice?

That's a good question, but it's easy enough to answer. The overall point to keep in mind is that TCP/IP Weapons School 2.0 is a new class, and when I create a new class I design it to be different from everything that's currently on the market. It doesn't make sense to me to teach the same topics, or use the same teaching techniques, found in classes already being offered. Therefore, when I first taught TWS2 at Black Hat DC last year, I made sure it was unlike anything provided by SANS or other trainers.

Beyond being unique, here are some specific points to consider. I'm sure I'll get some howls of protest from the SANS folks, but they have their own platform to justify their approach. The two classes are very different, each with a unique focus. It's up to the student to decide what sort of material he or she wants to learn, in what environment, using whatever methods he or she prefers. I don't see anything specifically "wrong" with the SANS approach, but I maintain that a student will learn skills more appropriate for their environment in my class.

TWS2 is a case-driven, hands-on, lab-centric class. SANS is largely a slide-driven class.

When you attend my class you get three handouts: 1) a workbook explaining how to analyze digital evidence; 2) a workbook with questions for 15 cases; and 3) a teacher's guide answering all of the questions for the 15 cases. There are no slides aside from a few housekeeping items and a diagram or two to explain how the class is set up.

When you attend SANS you will receive several sets of slide decks that the instructor will show during the course of the class. You will also have labs but they are not the focus of the class.

I designed TWS2 to meet the needs of a wide range of students, from beginners to advanced practitioners. TWS2 attendees typically finish 5-7 cases per class, with the remainder suitable for "homework." Students can work at their own pace, although we cover certain cases at checkpoints during the class. A few students have completed all 15 cases, and I often ask if those students are looking for a new opportunity with my team!

TWS2 is about investigating digital evidence, primarily in the form of network traffic, logs, and some memory captures. The focus is overwhelmingly on the content and not the container. SANS spends more time on the container and less on the content.

For example, if you look at the SANS course overview, you'll see they spend the first three days on TCP/IP headers and analysis with Tcpdump. Again, there's nothing wrong with that, but I don't care so much about what bit in the TCP header corresponds to the RST flag. That was mildly interesting in the late 1990s when that part of the SANS course was written, but the content of a network conversation has been more important this decade. Therefore, my class focuses on what is being said and less on how it was transmitted.
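To show how little mystery there is in the "container," here's a minimal Python sketch (my own illustration, not class material) that decodes the flag bits from a raw 20-byte TCP header:

```python
def tcp_flags(header: bytes) -> dict:
    """Decode the six classic flag bits from a raw TCP header."""
    # The flags live in byte 13 of the TCP header (per RFC 793).
    flag_byte = header[13]
    names = ["FIN", "SYN", "RST", "PSH", "ACK", "URG"]
    return {name: bool(flag_byte & (1 << i)) for i, name in enumerate(names)}

# A header with only the RST bit (value 0x04) set.
hdr = bytearray(20)
hdr[13] = 0x04
print(tcp_flags(bytes(hdr))["RST"])  # the RST bit in question; prints True
```

That detail takes ten lines to exhaust; understanding what the conversation carried takes real analysis, which is where I'd rather spend class time.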

TWS2 is not about Snort. While students do have access to a fully-functional Sguil instance with Snort alerts, SANCP session data, and full content libpcap network traffic, I do not spend time explaining how to write Snort alerts. SANS spends at least one day talking about Snort.

TWS is not about SIM/SEM/SIEM. Any "correlation" between various forms of evidence takes place in the student's mind, or using the free Splunk instance containing the logs collected from each case. If you consider dumping evidence into a system like Splunk, and then querying that evidence, to be "correlation," then we have "correlation." (Please see Defining Security Event Correlation for my thoughts on that subject.) SANS spends two days on fairly simple open source options for "correlation" and "traffic analysis."

TWS cases cover a wide variety of activity, while SANS is narrowly focused on suspicious and malicious network traffic. I decided to write cases that cover many of the sorts of activities I expect an enterprise incident detector and responder to encounter during his or her professional duties.

I also do not dictate any single approach to investigating each case. Just like real life, I want the student to produce an answer. I care less about how he or she analyzed the data to produce that answer, as long as the chain of reasoning is sound and the student can justify and repeat his or her methodology.

I hope that helps prospective students make a choice. I'll note that I don't send any of my analysts to the SANS "intrusion detection" class. We provide in-house training that includes my material but also focuses on the sorts of decision-making and evidence sources we find to be most effective in my company. Also please note this post concentrated on the differences between my class and the SANS "intrusion detection" class, and does not apply to other SANS classes.

Registration is now open. Black Hat set five price points and deadlines for registration, but only these three are left.

Regular ends 15 Jan

Late ends 30 Jan

Onsite starts at the conference

Seats are filling -- it pays to register early!

If you review the Sample Lab I posted earlier this year, this class is all about developing an investigative mindset by hands-on analysis, using tools you can take back to your work. Furthermore, you can take the class materials back to work -- an 84-page investigation guide, a 25-page student workbook, and a 120-page teacher's guide, plus the DVD. I have been speaking with other trainers who are adopting this format after deciding they are also tired of the PowerPoint slide parade.

Feedback from my 2009 sessions was great. Two examples:

"Truly awesome -- Richard's class was packed full of content and presented in an understandable manner." (Comment from student, 28 Jul 09)

"In six years of attending Black Hat (seven courses taken) Richard was the best instructor." (Comment from student, 28 Jul 09)

If you've attended a TCP/IP Weapons School class before 2009, you are most welcome in the new one. Unless you attended my Black Hat training in 2009, you will not see any repeat material whatsoever in TWS2. Older TWS classes covered network traffic and attacks at various levels of the OSI model. TWS2 is more like a forensics class, with network, log, and related evidence.

I will also be teaching in Barcelona and Las Vegas, but I will announce those dates later.

I strongly recommend attending the Briefings on 2-3 Feb. Maybe it's just my interests, but I find the scheduled speaker list to be very compelling.

Friday, December 18, 2009

Taking another look at my notes, I found a bunch of quotes from speakers that I thought you might like to hear.

"If you think you're not using a MSSP, you already are. It's called anti-virus." Can anyone claim that quote, from the CIRTs and MSSPs panel?

Seth Hall said "Bro is a programming language with a -i switch to sniff traffic."

Seth Hall said "You're going to lose." Matt Olney agreed and expanded on that by saying "Hopefully you're going to lose in a way you recognize."

Matt Olney also said "Give your analyst a chance." ["All we are sayyy-ing..."]

Matt Jonkman said "Don't be afraid of blocking." It's not 2004 anymore. Matt emphasized the utility of reputation when triggering signatures, for example firing an alert when an Amazon.com-style URL request is sent to a non-Amazon.com server.
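As a toy illustration of that reputation idea (the URL pattern and domain check below are hypothetical stand-ins, not Matt's actual signatures), the logic is simple to sketch in Python:

```python
import re

# Hypothetical Amazon-style product URL pattern, for illustration only.
AMAZON_STYLE_PATH = re.compile(r"/gp/product/|/dp/[A-Z0-9]{10}")

def suspicious(host: str, path: str) -> bool:
    """Alert when an Amazon-style URL is requested from a non-Amazon host."""
    looks_amazon = bool(AMAZON_STYLE_PATH.search(path))
    is_amazon = host == "amazon.com" or host.endswith(".amazon.com")
    return looks_amazon and not is_amazon

print(suspicious("evil.example", "/dp/B000123456"))      # prints True
print(suspicious("www.amazon.com", "/dp/B000123456"))    # prints False
```

Real reputation feeds are obviously richer, but the core idea -- trusted-looking content headed to an untrusted destination -- is that simple.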

Ron Shaffer said "Bad guys are following the rules of your network to accomplish their mission."

Steve Sturges said "Snort 3.0 is a research project."

Gunter Ollmann said "Threats have a declining interest in persistence. Just exploit the browser and disappear when closed. Users are expected to repeat risky behavior, and become compromised again anyway."

All of the speakers made many interesting comments, but it was really only during the start of the second day, when Tony spoke, that I had time to write down some insights.

If you're not familiar with Tony, he is chief of the Vulnerability Analysis and Operations (VAO) Group in NSA.

These days, the US goes to war with its friends (i.e., allies fight with the US against a common adversary). However, the US doesn't know who its friends are until the day before the war, and not all of the US's friends like each other. These realities complicate information assurance.

Commanders have been trained to accept a certain level of error in physical space. They do not expect to know the exact number of bullets on hand before a battle, for example. However, they often expect to know exactly how many computers they have at hand, as well as their state. Commanders will need to develop a level of comfort with uncertainty.

Far too much information assurance is at the front line, where the burden rests with the least trained, least experienced, yet well-meaning, people. Think of the soldier fresh from tech school responsible for "making it work" in the field. Hence, Tony's emphasis on shifting the burden to vendors where possible.

"When nations compete, everybody cheats." [Note: this is another way to remember that with information assurance, the difference is the intelligent adversary.]

The bad guy's business model is more efficient than the good guy's business model. They are global, competitive, distributed, efficient, and agile. [My take on that is the financially-motivated computer criminals actually earn ROI from their activities because they are making money. Defenders are simply avoiding losses.]

The best way to defeat the adversary is to increase his cost, level of uncertainty, and exposure. Introducing these, especially uncertainty, causes the adversary to stop, wait, and rethink his activity.

Defenders can't afford perfection, and the definition changes by the minute anyway. [This is another form of the Defender's Dilemma -- what should we try to save, and what should we sacrifice? On the other hand we have the Intruder's Dilemma, which Aaron Walters calls the Persistence Paradox -- how to accomplish a mission that changes a system while remaining undetected.]

Our problems are currently characterized by coordination and knowledge management, and less by technical issues.

Saturday, December 12, 2009

Keep your eyes open for the latest printed BSD Magazine, with my article Keeping FreeBSD Up-To-Date: OS Essentials. This article is something like 18 pages long, because at the last minute the publishers had several authors withdraw articles. The publishers decided to print the extended version of my article, so it's far longer than I expected! We're currently editing the companion piece on keeping FreeBSD applications up-to-date. I expect to also submit an article on running Sguil on FreeBSD 8.0 when I get a chance to test the latest version in my lab.

We had a great SANS WhatWorks in Incident Detection Summit 2009 this week! About 100 people attended. I'd like to thank those who joined the event as attendees; those who participated as keynotes (great work Ron Gula and Tony Sager), guest moderators (Rocky DeStefano, Mike Cloppert, and Stephen Windsor), speakers, and panelists; Debbie Grewe and Carol Calhoun from SANS for their excellent logistics and planning, along with our facilitators, sound crew, and staff; our sponsors, Allen Corp., McAfee, NetWitness, and Splunk; and also Alan Paller for creating the two-day "WhatWorks" format.

I appreciate the feedback from everyone who spoke to me. It sounds like the mix of speakers and panels was a hit. I borrowed this format from Rob Lee and his Incident Response and Computer Forensics summits, so I am glad people liked it. I think the sweet spot for the number of panelists might be 4 or 5, depending on the topic. If it's more theoretical, with a greater chance of audience questions, a smaller number is better. If it's more of a "share what you know," like the tools and techniques panel, then a bigger number is ok.

Probably the best news from the Summit was the fact that SANS already scheduled the second edition -- the SANS WhatWorks in Incident Detection Summit 2010, 8-9 December 2010 in DC. I still need to talk to SANS about how it will work. They've asked me to combine log management with incident detection. I think that is interesting, since I included content on logs in this year's incident detection event. I'd like to preserve the single-track nature of the Summit, but it might be useful to have a few break-outs for people who want to concentrate on a single technology or technique.

I appreciate the blog coverage from Tyler Hudak and Matt Olney so far. Please let me know what you thought of the last event, and if you have any requests for the next one.

The very next training event for me is my TCP/IP Weapons School 2.0 at Black Hat in DC, 31 Jan - 1 Feb. Regular registration ends 15 January, so sign up while there are still seats left! This class tends to sell out due to the number of defense industry participants in the National Capital Region.

Sunday, December 06, 2009

My main personal workstation is a Thinkpad x60s. As I wrote in Triple-Boot Thinkpad x60s, I have Windows XP, Ubuntu Linux, and FreeBSD installed. However, I rarely use the FreeBSD side. I haven't run FreeBSD on the desktop for several years, but I like to keep FreeBSD on the laptop in case I encounter a situation on the road where I know how to solve a problem with FreeBSD but not Windows or Linux. (Yes I know about [insert favorite VM product here]. I use them. Sometimes there is no substitute for a bare-metal OS.)

When I first installed FreeBSD on the x60s (named "neely" here), the wireless NIC, an Intel(R) PRO/Wireless 3945ABG, was not supported on FreeBSD 6.2. So, I used a wireless bridge. That's how the situation stayed until I recently read M.C. Widerkrantz's FreeBSD 7.2 on the Lenovo Thinkpad X60s. It looked easy enough to get the wireless NIC running now that it was supported by the wpi driver. I had used freebsd-update to upgrade the 6.2 to 7.0, then 7.0 to 7.1, and finally 7.1 to 7.2. This is where the apparent madness began.

I couldn't find the if_wpi.ko or wpifw.ko kernel modules in /boot/kernel. However, on another system (named "r200a"), which I believe had started life as a FreeBSD 7.0 box (but now also ran 7.2), I found both missing kernel modules. Taking a closer look, I simply listed the files in /boot/kernel on my laptop and compared that list to the files in the same directory on the other FreeBSD 7.2 system.

I think I will try upgrading the 7.2 system to 8.0 using freebsd-update, then compare the results to a third system that started life as 7.0, then upgraded from 7.2 to 8.0. If the /boot/kernel directories are still different, I might reinstall 8.0 on the laptop from media or the network.
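For anyone curious to repeat the comparison, the set arithmetic is trivial; here's a Python sketch with abbreviated, invented listings (in practice you would capture each side with `ls /boot/kernel`):

```python
# Invented, abbreviated listings for illustration; real /boot/kernel
# directories contain hundreds of modules.
laptop = {"kernel", "if_em.ko", "acpi.ko"}
r200a = {"kernel", "if_em.ko", "acpi.ko", "if_wpi.ko", "wpifw.ko"}

missing = sorted(r200a - laptop)   # present on r200a, absent on the laptop
extra = sorted(laptop - r200a)     # present on the laptop, absent on r200a

print("Missing on laptop:", missing)  # prints ['if_wpi.ko', 'wpifw.ko']
print("Extra on laptop:", extra)      # prints []
```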

Thursday, December 03, 2009

I know many of us work in large, diverse organizations. The larger or more complex the organization, the more difficult it is to enforce uniform security countermeasures. The larger the population to be "secure," the more likely exceptions will bloom. Any standard tends to devolve to the least common denominator. There are some exceptions, such as FDCC, but I do not know how widespread that standard configuration is inside the government.

Beyond the difficulty of applying a uniform, worthwhile standard, we run into the diversity vs monoculture argument from 2005. I tend to side with the diversity point of view, because diversity tends to increase the cost borne by an intruder. In other words, it's cheaper to develop exploitation methods for a target who 1) has broadly similar, if not identical, systems and 2) publishes that standard so the intruder can test attacks prior to "game day."

At the end of the day, the focus on uniform standards is a manifestation of the battle between two schools of thought: Control-Compliant vs Field-Assessed Security. The control-compliant team believes that developing the "best standard," and then applying that standard everywhere, is the most important aspect of security. The field-assessed team (where I devote my effort) believes the result is more important than how you get there.

I am not opposed to developing standards, but I do think that the control-compliant school of thought is only half the battle -- and that controls occupy far more time and effort than they are worth. If the standard withers in the face of battle, i.e., once field-assessed it is found to be lacking, then the standard is a failure. Compliance with a failed standard is worthless at that point.

However, I'd like to propose a variation of my original argument. What if you abandon uniform standards completely? What if you make the focus of the activity field-assessed instead of control-compliant, by conducting assessments of systems? In other words, let a hundred flowers blossom.

(If you don't appreciate the irony, do a little research and remember the sorts of threats that occupy much of the time of many of this blog's readers!)

So what do I mean? Rather than making compliance with controls the focus of security activity, make assessment of the results the priority. Conduct blue and red team assessments of information assets to determine if they meet various resistance and (maybe) "survivability" metrics. In other words, we won't care how you manage to keep an intruder from exploiting your system, as long as it resists a blue or red assessor with time X, skill level Y, and initial access level Z (or something to that effect).

In such a world, there's plenty of room for the person who wants to run Plan 9 without anti-virus, the person who runs FreeBSD with no graphical display or Web browser, the person who runs another "nonstandard" platform or system -- as long as their system defies the field assessment conducted by the blue and red teams. (Please note the one "standard" I would apply to all assets is that they 1) do no harm to other assets and 2) do not break any laws by running illegal or unlicensed software.)

If a "hundred flowers" is too radical, maybe consider 10. Too tough to manage all that? Guess what -- you are likely managing it already. So-called "unmanaged" assets are everywhere. You probably already have 1000 variations, never mind 100. Maybe it's time to make the system's inability to survive against blue and red teams the measure of failure, not whether the system is "compliant" with a standard.

Now, I'm sure there is likely to be a high degree of correlation between "unmanaged" and vulnerable in many organizations. There's probably also a moderate degree of correlation between "exceptional" (as in, this box is too "special" to be considered "managed") and vulnerable. In other instances, the exceptional systems may be impervious to all but the most dedicated intruders. In any case, accepting that diversity is a fact of life on modern networks, and deciding to test the resistance level of those assets, might be more productive than seeking to develop and apply uniform standards.

Monday, November 30, 2009

Apparently there's been a wave of house burglaries in a nearby town during the last month. As you might expect, local residents responded by replacing windows with steel panels, front doors with vault entrances, floors with pressure-sensitive plates, and whatever else "security vendors" recommended. Town policymakers created new laws to mandate locking doors, enabling alarm systems, and creating scorecards for compliance. Home builders decided they needed to adopt "secure building" practices so all these retrofitted measures were "built into" future homes.

Oh wait, this is the real world! All those vulnerability-centric measures I just described are what too many "security professionals" would recommend. Instead, police identified the criminals and arrested them. From Teen burglary ring in Manassas identified:

Two suspects questioned Friday gave information about the others, police said.

Now this crew is facing prosecution. That's a good example of what we need to do in the digital world: enable and perform threat-centric security. We won't get there until we have better attribution, and interestingly enough attribution is the word I hear most often from people pondering improvements in network security.

Friday, November 27, 2009

With the announcement of FreeBSD 8.0, it seems like a good time to donate to the FreeBSD Foundation, a US 501(c)3 charity. The Foundation funds and manages projects, sponsors FreeBSD events and Developer Summits, and provides travel grants to FreeBSD developers. It also provides and helps maintain computers and equipment that support FreeBSD development and improvements.

I just uploaded a video that some readers might find entertaining. This video shows the United States Air Force Computer Emergency Response Team (AFCERT) in 2000. Kelly AFB, Security Hill, and Air Intelligence Agency appear. The colonel who leads the camera crew into room 215 is James Massaro, then commander of the Air Force Information Warfare Center. The old Web-based interface to the Automated Security Incident Measurement (ASIM) sensor is shown, along with a demo of the "TCP reset" capability to terminate TCP-based sessions.

We have a classic quote about a "digital Pearl Harbor" from Winn Schwartau, "the nation's top information security analyst." Hilarious, although Winn nails the attribution and national leadership problems; note also the references to terrorists in this pre-9/11 video. "Stop the technology madness!" Incidentally, if the programs shown were "highly classified," they wouldn't be in this video!

I was traveling for the AFCERT when this video was shot, so luckily I am not seen anywhere...

In Stansbie v Troman [1948] 2 All ER 48 the claimant, a householder, employed the defendant, a painter. The claimant had to be absent from his house for a while and he left the defendant working there alone. Later, the defendant went out for two hours leaving the front door unlocked. He had been warned by the claimant to lock the door whenever he left the house.

While the house was empty someone entered it by the unlocked front door and stole some of the claimant's possessions. The defendant was held liable for the claimant's loss for, although the criminal action of a third party was involved, the possibility of theft from an unlocked house was one which should have occurred to the defendant.

So, the painter was liable. However, that doesn't let the thief off the hook. If the police find the thief, they will still arrest, prosecute, and incarcerate him. The painter won't serve part of the thief's jail time, even though the painter was held liable in this case. So, even in the best case scenario for those claiming "negligence" for vulnerable systems, it doesn't diminish the intruder's role in the crime.

Amazon.com just posted my three star review of Martin Libicki's Cyberdeterrence and Cyberwar. I've reproduced the review in its entirety here because I believe it is important to spread the word to any policy maker who might read this blog or be directed here. I've emphasized a few points for readability.

As background, I am a former Air Force captain who led the intrusion detection operation in the AFCERT before applying those same skills to private industry, the government, and other sectors. I am currently responsible for detection and response at a Fortune 5 company and I train others with hands-on labs as a Black Hat instructor. I also earned a master's degree in public policy from Harvard after graduating from the Air Force Academy.

Martin Libicki's Cyberdeterrence and Cyberwar (CAC) is a weighty discussion of the policy considerations of digital defense and attack. He is clearly conversant in non-cyber national security history and policy, and that knowledge is likely to benefit readers unfamiliar with Cold War era concepts. Unfortunately, Libicki's lack of operational security experience undermines his argument and conclusions. The danger for Air Force leaders and those interested in policy is that they will not recognize that, in many cases, Libicki does not understand what he is discussing. I will apply lessons from direct experience with digital security to argue that Libicki's framing of the "cyberdeterrence" problem is misguided at best and dangerous at worst.

Libicki's argument suffers five key flaws. First, in the Summary Libicki states "cyberattacks are possible only because systems have flaws" (p xiii). He continues with "there is, in the end, no forced entry in cyberspace... It is only a modest exaggeration to say that organizations are vulnerable to cyberattack only to the extent they want to be. In no other domain of warfare can such a statement be made" (p. xiv). I suppose, then, that there is "no forced entry" when a soldier destroys a door with a rocket, because the owners of the building are vulnerable "to the extent they want to be"? Are aircraft carriers similarly vulnerable to hypersonic cruise missiles because "they want to be"? How about the human body vs bullets?

Second, Libicki's fatal misunderstanding of digital vulnerability is compounded by his ignorance of the role of vendors and service providers in the security equation. Asset owners can do everything in their power to defend their resources, but if an application or implementation has a flaw it's likely only the vendor or service provider who can fix it. Libicki frequently refers to sys admins as if they have mystical powers to completely understand and protect their environments. In reality, sys admins are generally concerned about availability alone, since they are often outsourced to the lowest bidder and contract-focused, or understaffed to do anything more than keep the lights on.

Third, this "blame the victim" mentality is compounded by the completely misguided notions that defense is easy and recovery from intrusion is simple. On p 144 he says "much of what militaries can do to minimize damage from a cyberattack can be done in days or weeks and with few resources." On p 134 he says that, following cyberattack, "systems can be set straight painlessly." Libicki has clearly never worked in a security or IT shop at any level. He also doesn't appreciate how much the military relies on civilian infrastructure for everything from logistics to basic needs like electricity. For example, on p 160 he says "Militaries generally do not have customers; thus, their systems have little need to be connected to the public to accomplish core functions (even if external connections are important in ways not always appreciated)." That is plainly wrong when one realizes that "the public" includes contractors who design, build, and run key military capabilities.

Fourth, he makes a false distinction between "core" and "peripheral" systems, with the former controlled by users and the latter by sys admins. He says "it is hard to compromise the core in the same precise way twice, but the periphery is always at risk" (p 20). Libicki is apparently unaware that one core Internet resource, BGP, is basically at constant risk of complete disruption. Other core resources, DNS and SSL, have been incredibly abused during the last few years. All of these are known problems that are repeatedly exploited, despite knowledge of their weaknesses. Furthermore, Libicki doesn't realize that so-called critical systems are often more fragile than user systems. In the real world, critical systems often lack change management windows, or are heavily regulated, or are simply old and not well maintained. What's easier to reconfigure, patch, or replace, a "core" system that absolutely cannot be disrupted "for business needs," or a "peripheral" system that belongs to a desk worker?

Fifth, in addition to not understanding defense, Libicki doesn't understand offense. He has no idea how intruders think or the skills they bring to the arena. On pp 35-6 he says "If sufficient expenditures are made and pains are taken to secure critical networks (e.g., making it impossible to alter operating parameters of electric distribution networks from the outside), not even the most clever hacker could break into such a system. Such a development is not impossible." Yes, it is impossible. Thirty years of computer security history have shown it to be impossible. One reason why he doesn't understand intruders appears on p 47 where he says "private hackers are more likely to use techniques that have been circulating throughout the hacker community. While it is not impossible that they have managed to generate a novel exploit to take advantage of a hitherto unknown vulnerability, they are unlikely to have more than one." This baffling statement shows Libicki doesn't appreciate the skill set of the underground.

Libicki concludes on pp xiv and xix-xx "Operational cyberwar has an important niche role, but only that... The United States and, by extension, the U.S. Air Force, should not make strategic cyberwar a priority investment area... cyberdefense remains the Air Force's most important activity within cyberspace." He also claims it is not possible to "disarm" cyberwarriors, e.g., on p 119 "one objective that cyberwar cannot have is to disarm, much less destroy, the enemy. In the absence of physical combat, cyberwar cannot lead to the occupation of territory." This focus on defense and avoiding offense is dangerous. It may not be possible to disable a country's potential for cyberwar, but an adversary can certainly target, disrupt, and even destroy cyberwarriors. Elite cyberwarriors could be likened to nuclear scientists in this respect; take out the scientists and the whole program suffers.

Furthermore, by avoiding offense, Libicki makes a critical mistake: if cyberwar has only a "niche role," how is a state supposed to protect itself from cyberwar? In Libicki's world, defense is cheap and easy. In the real world, the best defense is 1) informed by offense, and 2) coordinated with offensive actions to target and disrupt adversary offensive activity. Libicki also focuses far too much on cyberwar in isolation, while real-world cyberwar has historically accompanied kinetic actions.

Of course, like any good consultant, Libicki leaves himself an out on p 177 by stating "cyberweapons come relatively cheap. Because a devastating cyberattack may facilitate or amplify physical operations and because an operational cyberwar capability is relatively inexpensive (especially if the Air Force can leverage investments in CNE), an offensive cyberwar capability is worth developing." The danger of this misguided tract is that policy makers will be swayed by Libicki's misinformed assumptions, arguments, and conclusions, and believe that defense alone is a sufficient focus for 21st century digital security. In reality, a kinetically weaker opponent can leverage a cyber attack to weaken a kinetically superior yet net-centric adversary. History shows, in all theatres, that defense does not win wars, and that the best defense is a good offense.

If you haven't seen Shodan yet, you're probably not using Twitter as a means to stay current on security issues. Shoot, I don't even follow anyone and I heard about it.

Basically a programmer named John Matherly scanned a huge swath of the Internet for certain TCP ports (80, 21, 23 at least) and published the results in a database with a nice Web front-end. This means you can put your mind in Google hacking mode, find vulnerable platforms, maybe add in some default passwords (or not), and take over someone's system. We're several steps along the Intrusion as a Service (IaaS) path already!
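The danger is not the port scan itself but the searchable index of banners it produces. Here is a toy sketch of the kind of matching such an index enables; the product names and notes below are entirely hypothetical, not a real vulnerability list.

```python
# Toy illustration of "Google hacking mode" against a banner index.
# Product names and notes are hypothetical, invented for this sketch.
KNOWN_WEAK = {
    "ExampleFTPd 1.0": "known default-credential build",
    "ExampleHTTPd/0.9": "long past end-of-life",
}

def fingerprint(banner):
    """Return (product, note) if a banner matches a known-weak product, else None."""
    for product, note in KNOWN_WEAK.items():
        if product in banner:
            return product, note
    return None

print(fingerprint("220 (ExampleFTPd 1.0) ready.\r\n"))
```

Once the banners are indexed, this lookup costs nothing; the attacker never touches the target until the final step.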

Incidentally, this idea is not new. I know at least one company that sold a service like this in 2004. The difference is that Shodan is free and open to the public.

Shodan is a dream for those wanting to spend Thanksgiving looking for vulnerable boxes, and a nightmare for their owners. I would not be surprised if shodan.surtri.com disappears in the next few days after receiving a call or two from TLAs or LEAs or .mil's. I predict a mad scramble by intruders during the next 24-48 hours as they use Shodan to locate, own, and secure boxes before others do.

Matt Franz asked good questions about this site in his post Where's the Controversy about Shodan? Personally I think Shodan will disappear. Many will argue that publishing information about systems is not a problem; we hear similar arguments from people defending sites that publish torrents, and I don't have a problem with Shodan or torrent sites either. From a personal responsibility standpoint, though, it would have been nice to delay publicizing Shodan until after Thanksgiving.

Tuesday, November 24, 2009

A hacker who posted a fake message on the Web site of China's famous Shaolin Temple repenting for its commercial activities was just making a mean joke, the temple's abbot was cited as saying by Chinese state media Monday.

That and previous attacks on the Web site were spoofs making fun of the temple, Buddhism and the abbot himself, Shi Yongxin was cited as telling the People's Daily.

"We all know Shaolin Temple has kung fu," Shi was quoted as saying. "Now there is kung fu on the Internet too, we were hacked three times in a row."

Why am I not surprised that a Shaolin monk has a better grasp of the fundamentals of computer security than some people in IT?

The National Institute of Standards and Technology is urging the government to continuously monitor its own cybersecurity efforts.

As soon as I read that, I knew that NIST's definition of "monitor" and the article's definition of "monitor" did not mean the real sort of monitoring, threat monitoring, that would make a difference against modern adversaries.

The article continues:

Special Publication 800-37 fleshes out six steps federal agencies should take to tackle cybersecurity: categorization, selection of controls, implementation, assessment, authorization, and continuous monitoring...

Finally, and perhaps most significantly, the document advises federal agencies to put continuous monitoring in place. Software, firmware, hardware, operations, and threats change constantly. Within that flux, security needs to be managed in a structured way, Ross says.

"We need to recognize that we work in a very dynamic operational environment," Ross says. "That allows us to have an ongoing and continuing acceptance and understanding of risk, and that ongoing determination may change our thinking on whether current controls are sufficient."

The continuous risk management step might include use of automated configuration scanning tools, vulnerability scanning, and intrusion detection systems, as well as putting in place processes to monitor and update security guidance and assessments of system security requirements.

Note that the preceding text mentions "intrusion detection systems," but the rest of the text has nothing to do with real monitoring, i.e., detecting and responding to intrusions. I'm not just talking about network-centric approaches, by the way -- infrastructure, host, log, and other sources are all real monitoring, but this is not what NIST means by "monitoring."

A critical aspect of managing risk from information systems involves the continuous monitoring of the security controls employed within or inherited by the system.65

[65 A continuous monitoring program within an organization involves a different set of activities than Security Incident Monitoring or Security Event Monitoring programs.]

So, it sounds like activities that involve actually watching systems are not within scope for "continuous monitoring."

Conducting a thorough point-in-time assessment of the deployed security controls is a necessary but not sufficient condition to demonstrate security due diligence. An effective organizational information security program also includes a rigorous continuous monitoring program integrated into the system development life cycle. The objective of the continuous monitoring program is to determine if the set of deployed security controls continue to be effective over time in light of the inevitable changes that occur.

That sounds ok so far. I like the idea of evaluations to determine if controls are effective over time. In the next section below we get to the heart of the problem, and why I wrote this post.

Ok, where is threat monitoring? I see configuration management, "control processes," reporting status to "officials," "active involvement by authorizing officials," and so on.

The next section tells me what NIST really considers to be "monitoring":

Priority for security control monitoring is given to the controls that have the greatest volatility and the controls that have been identified in the organization’s plan of action and milestones...

[S]ecurity policies and procedures in a particular organization may not be likely to change from one year to the next...

Security controls identified in the plan of action and milestones are also a priority in the continuous monitoring process, due to the fact that these controls have been deemed to be ineffective to some degree.

Organizations also consider specific threat information including known attack vectors (i.e., specific vulnerabilities exploited by threat sources) when selecting the set of security controls to monitor and the frequency of such monitoring...

Have you broken the code yet? Security control monitoring is a compliance activity. Granted, this is an improvement from the typical certification and accreditation debacle, where "security" is assessed via paperwork exercises every three years. Instead, .gov compliance teams will perform so-called "continuous monitoring," meaning more regular checks to see if systems are in compliance.

Is this really an improvement?

I don't think so. NIST is missing the point. Their approach advocates Control-compliant security, not field-assessed security. Their "scoreboard" is the result of a compliance audit, not the number of systems under adversary control or the amount of data exfiltrated or degraded by the adversary.

I don't care how well your defensive "controls" are informed by offense. If you don't have a Computer Incident Response Team performing continuous threat monitoring for detection and response, you don't know if your controls are working. The NIST document has a few hints about the right approach, at best, but the majority of the so-called "monitoring" guidance is another compliance activity.

Saturday, November 21, 2009

One of the presentations I delivered at the Information Security Summit last month discussed Network Security Monitoring. The Security Justice guys recorded audio of the presentation and posted it here as Network Security Monitoring and Incident Response. The audio file is InfoSec2009_RichardBejtlich.mp3.

So is SEC anything else? Based on some operational uses I have seen, I think I can safely introduce an extension to "true" SEC: applying information from one or more data sources to develop context for another data source. What does that mean?

One example I saw recently (and this is not particularly new, but it's definitely useful) involves NetWitness 9.0. Their new NetWitness Identity function adds user names collected from Active Directory to the metadata available while investigating network traffic. Analysts can choose to review sessions based on user names rather than just source IP addresses.

This is certainly not an "if-then" proposition, as sold by SIM vendors, but the value of this approach is clear. I hope my use of the word "context" doesn't apply too much historical security baggage to this conversation. I'm not talking about making IDS alerts more useful by knowing the qualities of a target of a server-side attack, for example. Rather, to take that server-side attack scenario, imagine replacing the source IP with the country "Bulgaria" and the target IP with "Web server hosting Application X" or similar. It's a different way for an analyst to think about an investigation.
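I don't know NetWitness's internals, but the idea can be sketched generically: join flow records against directory and geolocation lookups so the analyst sees labels instead of raw IPs. The lookup tables and field names below are invented for illustration.

```python
# Minimal sketch of enriching flow records with "context" labels.
# All tables and field names here are hypothetical stand-ins.
AD_USERS = {"10.1.2.3": "jdoe"}  # e.g., harvested from Active Directory logon events
ASSET_LABELS = {"10.9.9.9": "Web server hosting Application X"}
GEO = {"192.0.2.55": "Bulgaria"}

def enrich(flow):
    """Replace raw IPs with user, asset, or country labels where known."""
    src, dst = flow["src"], flow["dst"]
    flow["src_label"] = AD_USERS.get(src) or GEO.get(src) or src
    flow["dst_label"] = ASSET_LABELS.get(dst, dst)
    return flow

print(enrich({"src": "192.0.2.55", "dst": "10.9.9.9"}))
```

The join is trivial; the value is in collecting the lookup data ahead of time so it is present when the analyst pivots.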

Tuesday, November 10, 2009

I found the new 60 Minutes update on information warfare to be interesting. I fear that the debate over whether or not "hackers" disabled Brazil's electrical grid will overshadow the real issue presented in the story: advanced persistent threats are here, have been here, and will continue to be here.

Some critics claim APT must be a bogey man invented by agencies arguing over how to gain greater control over the citizenry. Let's accept agencies are arguing over turf. That doesn't mean the threat is not real. If you refuse to accept the threat exists, you're simply ignorant of the facts. That might not be your fault, given policymakers' relative unwillingness to speak out.

Saturday, November 07, 2009

I had the distinct privilege to attend a keynote by retired Air Force General Michael Hayden, most recently CIA director and previously NSA director. NetWitness brought Gen Hayden to its user conference this week, so I was really pleased to attend that event. I worked for Gen Hayden when he was commander of Air Intelligence Agency in the 1990s; I served in the information warfare planning division at that time.

Gen Hayden offered the audience four main points in his talk.

"Cyber" is difficult to understand, so be charitable with those who don't understand it, as well as those who claim "expertise." Cyber is a domain like other warfighting domains (land, sea, air, space), but it also possesses unique characteristics. Cyber is man-made, and operators can alter its geography -- even potentially to destroy it. Also, cyber conflicts are more likely to affect other domains, whereas it is theoretically possible to fight an "all-air" battle, or an "all-sea" battle.

Gen Hayden compared the rush to develop and deploy technology to consumers and organizations to the land rushes of the late 1890s. When "ease of use," "security," and "privacy" are weighed against each other, ease of use has traditionally dominated.

Gen Hayden asked what private organizations in the US maintain their own ballistic missile defense systems. None of course -- meaning, why do we expect the private sector to defend itself against cyber threats, on a "point" basis?

Cyber is difficult to discuss. No one wants to talk about it, especially at the national level. The agency with the most capability to defend the nation suffers because it is both secret and powerful, two characteristics it needs to be effective. The public and policymakers (rightfully) distrust secret and powerful organizations.

Think like intelligence officers. I should have expected this, coming from the most distinguished intelligence officer of our age. Gen Hayden says the first question he asks when visiting private companies to consult on cyber issues is: who is your intelligence officer?

Gen Hayden offered advice for those with an intelligence mindset who provide advice to policymakers. He said intel officers are traditionally inductive thinkers, starting with indicators and developing facts, from which they derive general theories. Intel officers are often pessimistic and realistic because they deal with operational realities, "as the world is."

Policymakers, on the other hand, are often deductive thinkers, starting with a "vision," with facts at the other end of their thinking. "No one elects a politician for their command of the facts. We elect politicians who have a vision of where we should be, not where we are." Policymakers are often optimistic and idealistic, looking at their end goal, "as the world should be."

When these two world views meet, say when the intel officer briefs the policymaker, the result can be jarring. It's up to the intel officer to figure out how to present findings in a way that the policymaker can relate to the facts.

After the prepared remarks I asked Gen Hayden what he thought of threat-centric defenses. He said it is not outside the realm of possibility to support giving private organizations the right to more aggressively defend themselves. Private forces already perform guard duties; police forces don't carry the whole burden for preventing crime, for example.

Gen Hayden also discussed the developments which led from military use of air power to a separate Air Force in 1947. He said "no one in cyber has sunk the Ostfriesland yet," which was a great analogy. He also says there are no intellectual equivalents to Herman Kahn or Paul Nitze in the cyber thought landscape.

Friday, October 30, 2009

Ken Bradley and I will conduct a Webcast for SANS on Monday 2 Nov at 1 pm EST. Check out the sign-up page. I've reproduced the introduction here.

Every day, intruders find ways to compromise enterprise assets around the world. To counter these attackers, professional incident detectors apply a variety of host, network, and other mechanisms to identify intrusions and respond as quickly and efficiently as possible.

In this Webcast, Richard Bejtlich, Director of Incident Response for General Electric, and Ken Bradley, Information Security Incident Handler for the General Electric Computer Incident Response Team, will discuss professional incident detection. Richard will interview Ken to explore his thoughts on topics like the following:

How does one become a professional incident detector?

What are the differences between working as a consultant or as a member of a company CIRT?

How have the incident detection and response processes changed over the last decade?

What challenges make it difficult to identify intruders, and how can security staff overcome these obstacles?

I will lead this event and conduct it more like a podcast, so the audio will be the important part. This is a short-notice event, but it will be cool. Please join us. Thank you!

Military leaders and analysts say evolving cyber threats will require the Defense Department to work more closely with experts in industry...

Indeed, the Pentagon must ultimately change its culture, say independent analysts and military personnel alike. It must create a collaborative environment in which military, civilian government and, yes, even the commercial players can work together to determine and shape a battle plan against cyber threats...

“Government may be a late adopter, but we should be exploiting its procurement power,” said Melissa Hathaway, former acting senior director for cyberspace for the Obama administration, at the ArcSight conference in Washington last month...

Hmm, "procurement power." This suggests to me that technology is supposed to be the answer?

Although one analyst praised the efforts to make organizational changes at DOD, he also stressed the need to give industry more freedom. “The real issue is a lack of preparedness and defensive posture at DOD,” said Richard Stiennon, chief research analyst at independent research firm IT-Harvest and author of the forthcoming book "Surviving Cyber War."

“Private industry figured this all out 10 years ago,” he added. “We could have a rock-solid defense in place if we could quickly acquisition through industry. Industry doesn’t need government help — government should be partnering with industry.”

Hold on. "Private industry figured this all out?" Is this the same private industry in which my colleagues and I work? And there's that "acquisition" word again. Why do I get the feeling that technology is supposed to be the answer here?

Industry insiders say they are ready to meet the challenge and have the resources to attract the top-notch talent that agencies often cannot afford to hire.

That's probably true. Government civilian salaries cannot match the private sector, and military pay is even worse, sadly.

Industry vendors also have the advantage of not working under the political and legal constraints faced by military and civilian agencies. They can develop technology as needed rather than in response to congressional or regulatory requirements or limitations.

I don't understand the point of that statement. Where do military and civilian agencies go to get equipment to create networks? Private industry. Except for certain classified scenarios, the Feds and military run the same gear as everyone else.

“This is a complicated threat with a lot of money at stake,” said Steve Hawkins, vice president of information security solutions at Raytheon. “Policies always take longer than technology. We have these large volumes of data, and contractors and private industry can act within milliseconds.”

Ha ha. Sure, "contractors and private industry can act within milliseconds" to scoop up "a lot of money" if they can convince decision makers that procurement and acquisition of technology are the answer!

Let's get to the bottom line. Partnerships and procurement are not the answer to this problem. Risk assessments, return on security investment, and compliance are not the answer to this problem.

Leadership is the answer.

Somewhere, a CEO of a private company, or an agency chief, or a military commander has to stand up and say:

I am tired of the adversary having its way with my organization. What must we do to beat these guys?

This is not a foreign concept. I know organizations that have experienced this miracle. I have seen IT departments aligned under security because the threat to the organization was considered existential. Leaders, talk to your security departments directly. Listen to them. They are likely to already know what needs to be done, or are desperate for resources to determine the scope of the problem and workable solutions.

Remember, leaders need to say "we're not going to take it anymore."

That's step one. Leaders who internalize this fight have a chance to win it. I was once told the most effective cyber defenders are those who take personal affront to having intruders inside their enterprise. If your leader doesn't agree, those defenders have a lonely battle ahead.

Step two is to determine what tough choices have to be made to alter business practices with security in mind. Step three is for private sector leaders to visit their Congressional representatives in person and say they are tired of paying corporate income tax while receiving zero protection from foreign cyber invaders.

When enough private sector leaders are complaining to Congress, the Feds and military are going to get the support they need to make a difference in this cyber conflict. Until then, don't believe that partnerships and procurement will make any difference.

Tuesday, October 27, 2009

I'm a little late to this issue, but let me start by saying I read Craig Balding's RSA Europe 2009 Presentation this evening. In it he mentioned something called the A6 Working Group. I learned this is related to several blog posts and a Twitter discussion. In brief:

In June, Craig posted Stop the Madness! Cloud Onboarding Audits - An Open Question... where he wondered Is there an existing system/application/protocol whereby I can transmit my policy requirements to a provider, they can respond in real-time with compliance level and any additional costs, with less structured/known requirements responded to by a human (but transmitted the same way)?

Later in June, Craig posted in Vulnerability Scanning and Clouds: An Attempt to Move the Dialog On... where he spoke of the need for customers to conduct vulnerability assessments of cloud providers: A “ScanAuth” API call empowers the customer (or their nominated 3rd party) to scan their hosted Cloud infrastructure confident in the knowledge they won’t fall foul of the providers Terms of Service.

In July, Chris extended Craig's idea with Extending the Concept: A Security API for Cloud Stacks, building on the aforementioned Twitter discussions. Chris mentioned The Audit, Assertion, Assessment, and Assurance API (A6) (Title credited to @CSOAndy)... Specifically, let’s take the capabilities of something like SCAP and embed a standardized and open API layer into each IaaS, PaaS and SaaS offering (see the API blocks in the diagram below) to provide not only a standardized way of scanning for network vulnerabilities, but also configuration management, asset management, patch remediation, compliance, etc.

What cloud services users need is a way to verify that the security they expect is being delivered, and there is an effort underway for an interface that would do just that.

Called A6 (Audit, Assertion, Assessment and Assurance API) the proposal is still in the works, driven by two people: Chris Hoff - who came up with the idea and works for Cisco - and the author of the Iron Fog blog who identifies himself as Ben, an information security consultant in Toronto.

The usefulness of the API would be that cloud providers could offer customers a look into certain aspects of the service without compromising the security of other customers’ assets or the security of the cloud provider’s network itself.

So let's see what that says:

The A6 API was designed with the following concepts in mind:

The security stack MUST provide external systems with the ability to query a utility computing provider for their security state.

Ok, that's pretty generic. We don't know what is meant by "security state," but we're just starting.

The stack MUST provide sufficient information for an evaluation of security state asserted by the provider. Same issue as #1.

The information exposed via public interfaces MUST NOT provide specific information about vulnerabilities or result in detailed security configurations being exposed to third parties or trusted customers. Hmm, I'm lost. I'm supposed to determine "security state" but without "specific information about vulnerabilities"?

The information exposed via public interfaces SHOULD NOT provide third parties or trusted customers with sufficient data as to infer the security state of a specific element within the provider's environment. Same issue as #3.

In classic outsourcing deals these security policies and controls would be incorporated into the procurement contract; with cloud computing providers, the ability to enter in specific contractual obligations for security or allow for third party audits is either limited or non-existent. However, this limitation does not reduce the need for consuming organizations to protect their data.

The A6 API is intended to close this gap by providing consuming organizations with near real-time views into the security of their cloud computing provider. While this does not allow for consuming organizations to enforce their security policies and controls upon the provider, they will have information to allow them to assess their risk exposure.

Before I drop the question you're all waiting for, let me say that I think it is great people are thinking about these problems. Much better to have a discussion than to assume cloud = secure.

However, my question is this: how does this provide "consuming organizations with near real-time views into the security of their cloud computing provider"?

Here is what I think is happening. Craig started this thread because he wanted a way to conduct audit and compliance (remember I highlighted those terms) activities against cloud providers without violating their terms of service. I am sure Craig would agree that compliance != security.

The danger is that someone will believe that compliance = security, thinking one could conceivably determine security state by "scanning for network vulnerabilities, but also configuration management, asset management, patch remediation, compliance, etc."

This is like network access control all over again. A good "security state" means you're allowed on the network because your system is configured "properly," the system is "patched," and so on. Never mind that the system is 0wned. Never mind that there is no API for querying 0wnage.

Don't get me wrong, this is a really difficult problem. It is exceptionally difficult to assess true system state by asking the system, since you are at the mercy of the intruder. It could be worse with cloud and virtual infrastructure if the intruder owns the system and the virtual infrastructure. Customer queries the A6 API and the cloud returns a healthy response, despite the reality. Shoot, the cloud could say it IS healthy by the definition of patches or configuration and still be 0wned.
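The trust problem can be reduced to a toy model. Below, an honest host and a compromised host answer a hypothetical A6-style "security state" query identically, because the intruder controls the reporting stack on the compromised host. The class and method names are invented; A6 never defined a concrete interface like this.

```python
# Toy model: self-reported security state cannot distinguish healthy
# from compromised hosts. All names here are hypothetical.
class HonestHost:
    compromised = False

    def a6_report(self):
        return {"patch_level": "current", "config": "baseline", "state": "healthy"}

class OwnedHost(HonestHost):
    compromised = True

    def a6_report(self):
        # The intruder owns the responder, so it returns the same answer
        # a healthy host would. The query consumer cannot tell the difference.
        return {"patch_level": "current", "config": "baseline", "state": "healthy"}

print(HonestHost().a6_report() == OwnedHost().a6_report())  # reports are indistinguishable
```

Any assertion scheme that relies on the asset to describe itself inherits this limitation, which is why independent monitoring matters.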

I think there's more thought required here, but that doesn't mean A6 is a waste of time -- if we are clear that it's more about compliance and really nothing about security, or especially trustworthiness of the assets.

I wrote the following to provide more information on the Summit and explain its purpose.

All of us want to spend our limited information technology and security funds on the people, products, and processes that make a difference. Does it make sense to commit money to projects when we don’t know their impact? I’m not talking about fuzzy “return on investment” (ROI) calculations or fabricated “risk” ratings. Don’t we all want to know how to find intruders, right now, and then concentrate on improvements that will make it more difficult for bad guys to disclose, degrade, or deny our data?

To answer this question, I’ve teamed with SANS to organize a unique event -- the SANS WhatWorks in Incident Detection Summit 2009, on 9-10 December 2009 in Washington, DC. My goal for this two-day, vendor-neutral, practitioner-focused Summit is to provide security operators with real-life guidance on how to discover intruders in the enterprise. This isn’t a conference on a specific commercial tool, or a series of death-by-slide presentations, or lectures by people disconnected from reality. I’ve reached out to the people I know on the front lines, who find intruders on a regular, daily basis. If you don’t think good guys know how to find bad guys, spend two days with people who go toe-to-toe with the worst intruders on the planet.

We’ll discuss topics like the following:

How do Computer Incident Response Teams and Managed Security Service Providers detect intrusions?

What network-centric and host-centric indicators yield the best results, and how do you collect and analyze them?

What open source tools are the best-kept secrets in the security community, and how can you put them to work immediately in your organization?

I have to agree with the other 3-star reviews of Hacking Exposed: Web 2.0 (HEW2). This book just does not stand up to the competition, such as The Web Application Hacker's Handbook (TWAHH) or Web Security Testing Cookbook (WSTC). I knew this book was in trouble when the introduction was already throwing out snippets about JavaScript arrays. That set the tone for the book: compressed, probably rushed, mixing material of differing levels of difficulty. For example, p 8 mentions using prepared statements as a defense against SQL injection, but only a paragraph on the topic appears, with no code samples (unlike TWAHH).
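To show what a page's worth of sample code buys the reader, here is the kind of example HEW2 omits: a parameterized query using Python's built-in sqlite3 module, where attacker input is bound as data and never parsed as SQL. (HEW2 itself doesn't use Python; this is just an illustration of the technique.)

```python
import sqlite3

# Parameterized query: the '?' placeholder binds the value through the
# driver, so input is treated as data, never concatenated into SQL text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup(name):
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

print(lookup("alice"))              # normal use
print(lookup("alice' OR '1'='1"))   # classic injection string returns nothing
```

The injection attempt fails because the whole string, quotes and all, is compared against the name column rather than rewriting the query.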

I just wrote five star reviews of The Web Application Hacker's Handbook (TWAHH) and SQL Injection Attacks and Defense (SIAAD). Is there really a need for another Web security book like Web Security Testing Cookbook (WSTC)? The answer is an emphatic yes. While TWAHH and SIAAD include offensive and defensive material helpful for developers, those books are more or less aimed at assessment professionals. WSTC, on the other hand, is directed squarely at Web developers. In fact, WSTC is specifically written for those who incorporate unit testing into their software development lifecycle. I believe anyone developing Web applications would benefit from reading WSTC.

I just finished reviewing The Web Application Hacker's Handbook, calling it a "Serious candidate for Best Book Bejtlich Read 2009." SQL Injection Attacks and Defense (SIAAD) is another serious contender for BBBR09. In fact, I recommend reading TWAHH first because it is a more comprehensive overview of Web application security. Next, read SIAAD as the definitive treatise on SQL injection. Syngress does not have a good track record when it comes to books with multiple authors -- SIAAD has ten! -- but SIAAD is clearly a winner.

The Web Application Hacker's Handbook (TWAHH) is an excellent book. I read several books on Web application security recently, and this is my favorite. The text is very well-written, clear, and thorough. While the book is not suitable for beginners, it is accessible and easy to read for those even without Web development or assessment experience.

Joanna Rutkowska's post describes how she and Alex Tereshkin implemented a physical attack against laptops with TrueCrypt full disk encryption. They implemented the attack (called "Evil Maid") as a bootable USB image that an intruder would use to boot a target laptop. Evil Maid hooks the TrueCrypt function that asks the user for a passphrase on boot, then stores the passphrase for later physical retrieval.

Joanna recommends implementing a product that supports Trusted Platform Module (TPM), like Microsoft BitLocker. A detection-oriented workaround is to calculate hashes of selected disk sectors and partitions and decide that mismatches indicate an intrusion has occurred. That approach still misses BIOS-based attacks but it's the best one can do without TPM support.
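The detection workaround is simple to sketch: hash selected sectors at a known-good time, then flag mismatches on later checks. Reading real boot sectors requires raw-device access (e.g., /dev/sda on Linux), so a byte string stands in for the disk below; the function and sector choices are illustrative, not Joanna's actual procedure.

```python
import hashlib

SECTOR = 512  # bytes per sector on traditional disks

def hash_sectors(disk, sectors=(0, 1)):
    """SHA-256 each selected 512-byte sector; return {sector: hexdigest}."""
    return {
        n: hashlib.sha256(disk[n * SECTOR:(n + 1) * SECTOR]).hexdigest()
        for n in sectors
    }

# Baseline taken while the machine is known-good.
baseline_disk = bytes(4 * SECTOR)
baseline = hash_sectors(baseline_disk)

# Later check: one byte of the boot loader has been modified.
tampered = bytearray(baseline_disk)
tampered[0] = 0xEB
changed = [n for n, d in hash_sectors(bytes(tampered)).items() if d != baseline[n]]
print(changed)  # sectors that no longer match the baseline
```

As noted above, this catches bootloader tampering but not BIOS-resident attacks, and the baseline itself must be stored somewhere the intruder cannot rewrite.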