Thursday, May 31, 2007

Today I spoke at the ISS World Spring 2007 conference in Alexandria, VA. ISS stands for Intelligence Support Systems. The speakers, attendees, and vendors are part of or support legal and government agencies that perform Lawful Intercept (LI) and associated monitoring activities. Many attendees appeared to be from county, state, and federal law enforcement agencies (LEAs). Others were wired and wireless service providers who are responsible for fulfilling LI requests.

This was a very different crowd. Even when cops attend security conferences (like Fed, I mean Black, Hat) the vibe is different. At security cons it's seen to be cool if one has mad offensive sk1llz. This group was all about acquiring the information needed to go to court to convict bad guys.

One theme immediately grabbed my attention, and it's going to eventually affect every entity that provides technological services:

Today (and previously), if I wanted to perform surveillance against a target, I would tap his phone line. In the very old days I would physically attach to phone lines, but these days I work with the telephone company to obtain electronic access. The telco is a service provider and as such is subject to CALEA, which mandates providing various snooping capabilities for LEA use.

Also today, and definitely tomorrow, targets are using VoIP. VoIP can be monitored by watching broadband lines, but "tapping a line" is not sufficient. The classic deficiency is call forwarding. As described at the conference today, assume a LEA is watching all broadband traffic to and from a target. If the target enables call forwarding through his VoIP provider, a LEA watching network traffic will not see a call come in if the VoIP provider forwards it elsewhere.

Therefore, gaining access to that critical information requires monitoring the service, not the line.

Extend the services to be monitored beyond VoIP. Suddenly you can probably imagine many scenarios where LEAs would want to essentially be inside the service, or able to tap data directly from the service. The line to the target is secondary. For example, why try to follow a target from Internet cafe to Internet cafe if you can just watch his chat room, Web forum, or other meeting place directly?

This seems less like Big Brother and more like Embedded Brother. Any application which law enforcement might consider a source of data on a target could be compelled by law to provide a means for LEA to perform lawful intercept. Already we are seeing signs of this through various data retention directives. One of the conference panelists mentioned a story from Germany that makes this point. He said Germany (or at least part of it) has a system that tracks cars paying tolls. When the system was deployed it was forbidden to use such data for tracking car owners, even if crimes were committed. However, a person was run down at a toll booth. After the crime happened, an outcry erupted to use the toll logs to identify the culprit. This is the sort of "emergency thinking" that results in powers being granted to LEAs to embed ever deeper into technology services.

One financial note: consider buying stock in log management and storage vendors. All of this data needs to be managed and stored.

In one of my classes I list the reasons why people monitor, in this progression:

Performance: is our service working well?

Fault: why does our service fail?

Security: is our service compromised?

Compliance: is our service meeting legal and regulatory mandates?

Many companies are still at step 2. Step 3 might be leapfrogged and step 4 might be here sooner than you think. Hopefully data collected for step 4 will inform step 3, thereby serving a non-LEA purpose as well.

Incidentally I did not hear the term encryption mentioned as a challenge for law enforcement. I'll let the conspiracy theorists chew on that one. In a service-oriented lawful intercept world, I would imagine LEAs could access data unencrypted at the service provider if end-to-end encryption were not part of the service. In other words, maybe your VoIP call is encrypted from you to the provider, and from the provider to the recipient, but the LEA can intercept at the hub of the communication.

Update: I want people to understand that me predicting this development does not mean I agree with it. I prefer privacy to what's going to happen.

Joseph Kong: I am a relatively young (24 years old) self-taught computer enthusiast who enjoys working (or playing, depending on how you look at it) in the field of computer security; specifically, at the low-level...

When did you hear about rootkits for the first time?

Joseph Kong: The first time I heard the term "rootkits" was in 2004--straight out of the mouth of Greg Hoglund, who was at the time promoting his new book Exploiting Software: How to Break Code. That's actually how I got into rootkit programming. Thanks Greg. :)

Wow. Zero to book on rootkits in 3 years -- that's cool.

Now for a bit of wisdom:

Do you know any anti-rootkit tool/product for *BSD?

I know a lot of people who refer to rootkits and rootkit-detectors as being in a big game of cat and mouse. However, it's really more like follow the leader--with rootkit authors always being the leader. Kind of grim, but that's really how it is. Until someone reveals how a specific (or certain class of) rootkit works, nobody thinks about protecting that part of the system. And when they do, the rootkit authors just find a way around it...

Contrast that with this bit of marketing:

Guess which one is correct?

Finally, I appreciated seeing this:

Keep in mind that although I am extolling the virtues of prevention, as other computer security professionals (such as Richard Bejtlich) have said, prevention eventually fails (e.g., Loïc Duflot showed that you can bypass secure levels in SMM), and detection is just as important. The problem is rootkit detection, as I said earlier, is difficult.

At AusCERT last week one of the speakers mentioned the regular autumn spike in malicious traffic from malware-infested student laptops joining the university network. Apparently this university supports the variety of equipment students inevitably bring to school, because they require or at least expect students to possess computing hardware. The university owns the infrastructure, but the students own the platform. This has been the norm at universities for years.

A week earlier I attended a different session where the "consumerization" of information technology was the subject. I got to meet Greg Shipley from Neohapsis, incidentally -- great guy. This question was asked: if companies don't provide cellphones for employees, why do companies provide laptops? Extend this issue a few years into the future and you see that many of our cellphones will be as powerful as our laptops are now. If you consider the possibility of server-centric, thin client computing, most of the horsepower will need to be elsewhere anyway. Several large companies are already considering the "no company laptop" approach, so what does that mean for digital security?

You must now see the connection. University students are the corporate employees of the near future. If we want to learn some tricks for dealing with employee-owned hardware on company-owned infrastructure manipulating mixed-ownership data (business and personal), consider going back to college. I think we're going to have to focus on Enterprise Rights Management, which is a popular topic. That still won't make a difference if the employee smartphone is 0wned by an intruder who is taking screen captures, unless some form of hardware-enforced Digital Rights Management frustrates this attack. Regardless, I think the next corporate laptop you receive might be your last.

As technology changes the way people communicate, the legal system is stumbling to keep up. The “discovery” process, whereby both parties to a lawsuit share relevant documents with each other, used to involve physically handing over a few boxes of papers. But now that most documents are created and stored electronically, it is mostly about retrieving files from computers. This has two important consequences...

Three weeks ago I wrote about Vulnerability-Centric Security regarding the Mine Resistant Ambush Protected (MRAP) vehicle, the US Army's replacement for the Humvee pictured at left. I consider the MRAP an example of the failures of vulnerability-centric security. This morning USA Today's story MRAPs can't stop newest weapon validates my thoughts:

New military vehicles that are supposed to better protect troops from roadside explosions in Iraq aren't strong enough to withstand the latest type of bombs used by insurgents, according to Pentagon documents and military officials.

As a result, the vehicles need more armor added to them, according to a January Marine Corps document provided to USA TODAY...

"Ricocheting hull fragments, equipment debris and the penetrating slugs themselves shred vulnerable vehicle occupants who are in their path," said the document...

EFPs are explosives capped by a metal disk. The blast turns the disk into a high-speed slug that can penetrate armor.

Even with additional armor, the augmented MRAPs will still be vulnerable. This is because attackers possess advantages that defenses cannot overcome. In April I wrote Threat Advantages, which describes the strengths I see digital threats offering.

At least John Pike understands this problem.

It's doubtful new armor can stop all EFPs, said John Pike, director of Globalsecurity, a Washington-based defense think tank.

"Short of victory, they're going to continue to figure out ways to kill Americans," Pike said of the insurgents. "In any war, it is measure and countermeasure."

Tuesday, May 29, 2007

Let me say that I wish I could give this book 4 1/2 stars. It's just shy of 5 stars, but I couldn't place this book alongside some of my favorite 5-star books of all time. Still, I really enjoyed reading Inside the Machine -- it's a great book that will answer many questions for the devoted technical reader.

At the end of the review I mention Scott Mueller's Upgrading and Repairing PCs. In a nice show of synchronicity, the chapter from Scott's book on Microprocessor Types and Specifications is available online in .pdf format.

[S]omeone once attended a presentation that I gave on penetration testing, and then contacted me a year later with an e-mail that basically said, “I finally talked a client into letting me perform a pen test. I don’t know what to do, how to do it, what to charge, or any special legal language that should be in the contract.” My response was basically, “You shouldn’t do the work...”

In today’s message, a consultant from a very large integration firm sent out a message saying that one of their clients wants to scope out integration of a NOC/SOC. He gave a very wide variety of requirements for the facility, and then wanted feedback from a wide variety of people not associated with his company. While I am normally all for helping out a colleague, this person should have either sought this info inside his own organization, which has access to such experts, or just told the client he doesn’t have a clue and to go elsewhere.

I see this problem all the time, in two forms. First, I am frequently asked to perform a variety of tasks for which I do not consider myself an expert. Blog visitors, book readers, and students sometimes expect me to be an expert in another area of security after seeing my work in network security monitoring, network forensics, incident response, and related subjects. When asked to work outside those areas, I always refer the work to colleagues whom I consider to be experts in the task in question. In return, my colleagues pass me work they would prefer me to do.

Second, I know many service/consulting companies who will take any job, period. They are managed by people who only care about making "bodies chargeable," preferably over 100% for the week. (That means billing over 40 hours of work to a client, per consultant, per week.) The consultants (1) suffer silently, for fear of losing their jobs; (2) think they can become experts in anything in "10 minutes" (I hear that often); or (3) don't realize that they are clueless, and probably never will. The end result is the service delivered to the client is sub-par at best, or a disaster at worst.

I agree with Ira's last statement:

[T]he mark of a good consultant is one who knows when to turn away work.

In light of that wisdom, consider asking the following question when shopping for a consultant:

Tyrel McMahan interviewed me at CONFidence for his Sites Collide podcast. It's in QuickTime format. We talk about what smaller businesses should do with regards to monitoring and I discuss ideas from my conference presentation. Thanks to Tyrel for the interview.

Overall I would like to see some rigorous thought applied to the use of security terms. For example, a recent SANS NewsBites said:

We are planning for the 2007 Top20 Internet Security Threats report. If you have any experience with Top20 reports over the past six years, could you tell us whether you think an annual or semi-annual or quarterly summary report is necessary or valuable?

Is this another identity crisis for the SANS Top 20 (as covered in my post Further Thoughts on SANS Top 20) or is someone saying "threat" when they mean "vulnerability," or...?

We need to have our terminology straight or we will continue to talk past each other.

One of the interesting aspects of being an independent consultant is having other companies think TaoSecurity exists as a mighty corporate entity with plenty of cash to spend. This has exposed me to some of the seedier aspects of corporate life, namely "speaker-sponsorship." Have you ever attended a keynote address, or other talk at a conference, and wondered how such a person could ever have been accepted to speak? There's a good chance that person paid for the slot.

Two instances of this come to mind. First, several months ago I was contacted by the producer of a television program to appear on their show. The program was hosted by Terry Bradshaw (no kidding) and was looking for speakers to discuss the state of the digital security market. This sounded like it was almost too good to be true, and guess what -- it was. A few minutes into the conversation with the producer I learned that TaoSecurity would be expected to pay a $15,000 sponsorship fee to "defray costs" for Mr. Bradshaw, and other expenses. Essentially I would be buying a spot on the show, but it would be a "fabulous marketing experience." I said forget it.

Second, I just received a call from someone organizing a "security event." This person was looking for "experts" on PCI and other topics for briefings in September. I told him I was not available at the specified time, so he asked to be switched to the TaoSecurity marketing department since what he really wanted was "speaker-sponsors." In other words, people speaking at this event will have paid for their slots. Again, I said forget it.

Keep these thoughts in mind the next time you see a lame talk at a security conference by a marketing person.

Gunnar Peterson mentioned a few terms that, for me, brilliantly describe the problem we face in digital security. To paraphrase Gunnar, the digital world consists of the following:

Security 1.0

Web 2.0

Attacker 3.0

To that I might add the following:

Government -1.0

User 0.5

Application Developer 2.5

What do I mean by all of this?

Government -1.0: in general, hopelessly clueless legislation leads to worse security than without such legislation -- often due to unintended consequences

User 0.5: users are largely unaware and essentially helpless, but I wouldn't expect them to improve -- I'm not an automobile designer or electrical engineer, yet I can drive my car and watch TV

Security 1.0: security tools and techniques are just about good enough to address yesterday's attacks

Web 2.0: this is what is here, with more on the way -- essentially indefensible applications all running over port 80 TCP (or at least HTTP) that no developer really understands and for which no one takes responsibility

Application Developer 2.5: by this I do not mean developers are ahead of anyone with respect to security; rather, they are introducing new features and capabilities without regard to security, thereby exposing vulnerabilities no one (except intruders and some security researchers) really understands

Attacker 3.0: in Tao I said because some intruders are smarter than us and unpredictable, prevention eventually fails -- it's more true now than ever

The only way I know to deal with this problem is to stay aware of it through monitoring and to deter, prosecute, and incarcerate threats. If Attacker 3.0 were not free to exploit at will, without fear of attribution or retribution, I would care less about these problems.

I'm a big fan of courses produced by The Teaching Company, so I bet similarly-minded blog readers might also enjoy such courses. My favorite instructor is Prof Michael Starbird. I noticed that three of his four courses are on sale until 14 June:

Monday, May 28, 2007

Since I do not run X on my FreeBSD servers, and my laptop now runs Ubuntu (heretical but productive, I know), I have not been affected by the update of X.org to 7.2 on FreeBSD. I read Updating Firefox 2 and FreeBSD 6.2 and the response Not everybody will be happy with the X.org upgrade. Basically there's a difference of opinion concerning the appropriateness of radically changing a key addition to the operating system mid-stream, i.e., during the life of 6.2.

If I were running FreeBSD 6.2 with X, I probably would have tried avoiding X.org 7.2 if possible. Losing X is a very disruptive event if the upgrade fails, and with so many ports affected it would be very invasive. I would have waited until the release of FreeBSD 6.3 or 7.0 before using X.org 7.2. Alternatively, I might have reinstalled 6.2 without X.org, and then added it and all other software as packages.

I understand the developers wanting to get X.org 7.2 into users' hands as soon as possible, given the amount of work involved and their desire to have finished months ago. However, changing from a monolithic version of X.org to a modular one seems disruptive enough to have waited for coordination with the release of FreeBSD 6.3 and 7.0. I'm not a developer, but those are my thoughts on the matter. I would be curious to hear how others might be handling this issue.

Sunday, May 27, 2007

I'll be teaching and speaking at the 2007 GFIRST conference in Orlando, FL in June 2007. This is pro-bono since DHS isn't paying airfare, hotel, meals, or a speaking honorarium. On Monday 25 June 2007 I'll be teaching two half-day tutorials. The first will cover Network Incident Response and the second will cover Network Forensics. On Tuesday 26 June at 1415 I will deliver the talk I gave at Shmoocon -- Traditional IDS Should Be Dead. I spoke at the 2006 and 2005 GFIRST conferences as well.

GFIRST still hasn't updated their training page to reflect my class, but I will be there teaching.

ENI is a one-day course designed to teach all methods of network traffic access. If you have a network you need to monitor, ENI will teach you what equipment is available (hubs, switch SPAN ports, taps, bypass switches, matrix switches, and so on) and how to use it effectively. Everyone else assumes network instrumentation is a given. ENI teaches the reality and provides practical solutions.
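To give one concrete flavor of the access methods the class covers, here is a minimal switch SPAN configuration using Cisco IOS syntax. The interface numbers are placeholders I chose for the example, not material from the course:

```
! Mirror all traffic passing through Fa0/1 to the sensor attached to Fa0/24
monitor session 1 source interface FastEthernet0/1 both
monitor session 1 destination interface FastEthernet0/24
```

Note that a busy SPAN session can oversubscribe the destination port and silently drop frames, which is one reason a dedicated tap is often preferable for serious monitoring.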

Please register while there are still seats available. My class is the day before all the six-day tracks begin. If you register before 6 June you will save $250. If you register by 27 June you will save $150. If you take this one-day class with a full SANS track my class only costs $450. Please note SANS set all of these prices and schedules.

I am happy to announce that I will be teaching a three day edition of my Network Security Operations training class in Chicago, IL on 27-29 August 2007. This is a public class, although I will be speaking at the 30 August meeting of the Chicago Electronic Crimes Task Force. Please register here. The early discount applies to registrations before midnight 27 July. ISSA members get an additional discount on top of the early registration discount.

I am happy to announce that I will be teaching a three day edition of my Network Security Operations training class in Cincinnati, OH on 21-23 August 2007. The Cincinnati ISSA chapter is hosting the class. Please register here. The early discount applies to registrations before 20 July. ISSA members get an additional discount on top of the early registration discount.

Last week the "Helpful Votes" count for my Amazon.com reviews reached the 4,000 count. I hit 3,000 in January 2006 and 1,500 in December 2003. Since reaching the 3,000 mark I've read and reviewed 55 additional books. Thank you to everyone who votes my reviews "helpful."

If you want to see what I have on my shelf and plan to read next, please check out my reading list. If you want to see the books I hope to see soon, please visit my Amazon.com Wish List.

The 2 day class I'm teaching at Black Hat on 28-29 and 30-31 July is a condensed version of the 4 day series (broken into layers 2-3 and 4-7) for USENIX. I also plan to teach this condensed edition at ForenSec in Regina, SK in September.

Output modes are the methods by which Snort reports its findings when run in IDS mode. As discussed in the first Snort Report, Snort can also run in sniffer and packet logger modes. In sniffer mode, Snort writes traffic directly to the console. As a packet logger, Snort writes packets to disk in Libpcap format. This article describes output options for IDS mode, called via the -c [snort.conf] switch. Only IDS mode offers output options.

This is the first of two Snort Reports in which I address output options. Without output options, consultants and VARs can't produce Snort data in a meaningful manner. Because output options vary widely, it's important to understand the capabilities and limitations of different features. In this edition of Snort Report, I describe output options available from the command line and their equivalent options (if available) in the snort.conf file. I don't discuss the Unix socket option (-A unsock or alert_unixsock). I will conclude with a description of logging directly to a MySQL database, which I don't recommend but explain for completeness.
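As a rough sketch of what those output directives look like in snort.conf, based on Snort 2.x syntax (the filenames are placeholders, and the database line is shown commented out since I don't recommend it):

```
# snort.conf output plugins -- rough equivalents of the -A command-line modes
output alert_fast: alert.fast     # one-line alerts, like -A fast
output alert_full: alert.full     # full decoded headers, like -A full
output log_tcpdump: snort.log     # binary packet logs in Libpcap format
# Direct database logging (covered for completeness, not recommended):
# output database: log, mysql, user=snort dbname=snort host=localhost
```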

Friday, May 25, 2007

My whirlwind Australia trip is coming to a close. I'll be boarding a flight from Sydney to LAX soon. I'd like to thank Christian Heinrich and John Dale from Secure Agility for hosting me in Sydney and to everyone at AusCERT for helping me with my classes in Gold Coast.

I'd like to briefly record a few thoughts on the AusCERT conference.

Andrea Barisani gave a great talk on the rsync1.it.gentoo.org compromise of December 2003. He emphasized that preventing incidents is nice, but security monitoring and awareness are absolutely critical. I need to try his Tenshi log monitoring tool.

Mike Newton from Stanford explained his Argus infrastructure, which collects 35 GB of data per day; he reduces this to 11 GB per day with bzip2 and then to 3 GB per day with custom filtering. He keeps 30 days online in raw format then compresses and stores 400 days. He watches 5 class B networks with 45,000 hosts. Based on his analysis Stanford is segmenting itself into 300 zones using virtual firewalls (?). He said that one of the important reasons to monitor with Argus is to avoid having to disclose incident details, because Argus data can show that compromise of sensitive data was unlikely or did not occur.
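A retention policy like Newton's is easy to script. This is only a sketch of the idea, not his actual setup: the directory, the .arg extension, and the compression level are my assumptions for illustration.

```shell
#!/bin/sh
# Sketch of an Argus-style retention policy: keep 30 days of raw records
# online, bzip2-compress anything older, and expire files past 400 days.
rotate_argus() {
    dir="$1"
    # Compress raw Argus files older than 30 days.
    find "$dir" -name '*.arg' -mtime +30 -exec bzip2 -9 {} \;
    # Delete compressed files older than the 400-day retention window.
    find "$dir" -name '*.arg.bz2' -mtime +400 -delete
}
```

Run daily from cron, e.g. `rotate_argus /var/log/argus`.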

John McHugh (formerly of CERT) gave a great talk on network situational awareness using SiLK, right after my talk. I need to try some of the tools at the Network Situational Awareness group at CERT. I had dinner with John and I hope to do a guest lecture at some point at his school.

Cristine Hoepers from the Brazil CERT spoke on spam research using open proxy honeypots. Her talk reminded me that I should consider honeypots as a way to collect threat information in locations where monitoring production traffic is sensitive. If I monitor the honeypot only I can limit privacy complaints about seeing other people's traffic.

Sunday, May 20, 2007

I'm on the road again, en route to Gold Coast for AusCERT, followed by a public course on Network Security Monitoring in Sydney on Friday 25 May 2007. There are still seats left -- check it out if you want to attend!

Here are a few thoughts on items I read on my flight from IAD to LAX.

The latest Cisco IP Journal article on DNS Infrastructure by Steve Gibbard is awesome. Read it if you really want to understand global DNS in a few pages.

Kudos to Matt Blaze for more cool research, specifically his co-authored paper The Eavesdropper's Dilemma. If you think you're doing network forensics you need to develop a strategy to address his conclusion:

Internet eavesdropping systems suffer from the eavesdropper's dilemma. For electronic wiretapping systems to be reliable, they must exhibit correct behavior with regard to both sensitivity and selectivity. Since capturing traffic is a requisite of any monitoring system, considerable research has focused on preventing evasion attacks and otherwise improving sensitivity. However, little attention has been paid to enhancing selectivity or even recognizing the issue in the Internet context. Traditional wisdom has held that eavesdropping is sufficiently reliable as long as the communicating parties do not participate in a bilateral effort to conceal their messages.

We have demonstrated that even in the absence of cooperation between the communicating endpoints, reliable Internet eavesdropping is more difficult than simply capturing packets. If an eavesdropper cannot definitively and correctly select the pertinent messages from the captured traffic, the validity of the reconstructed conversation can be called into question. By injecting noise into the communication channel, unilateral or third-party confusion can make the selectivity process much more difficult and therefore further diminishes the reliability of electronic eavesdropping.

CIO Magazine has a good article with percentages of companies not in compliance with various rules and regulations. It contains gems like:

Compliance with federal, state, and international privacy and security laws and regulations often is more an interpretive art than an empirical science—and it is frequently a matter for negotiation. How to (or, for some CIOs, even whether to) follow regulations is neither a simple question with a simple answer nor a straightforward issue of following instructions. This makes it more an exercise in risk management than governance. Often, doing the right thing means doing what’s right for the bottom line, not necessarily what’s right in terms of the regulation or even what’s right for the customer...

“We’re trying to remain profitable for our shareholders, and we literally could go broke trying to cover for everything. So, you make risk-based decisions: What’re the most important things that are absolutely required by law?”...The CISO told Taylor that she had received an e-mail from one of her programmers informing her that the school may have experienced a breach that may have exposed students’ personal information. The programmer was unsure if the law required the school to report the incident and asked the CISO for guidance.

Taylor asked her what she did. She said she wrote back to the programmer telling him not to do anything. Taylor told the CISO that the university should have reported the breach. The CISO disagreed, saying, essentially, that because very few people review system log files and because only one or two people at the university understood the systems and the data in them, it was probable that the breach would go unremarked and undiscovered...

The cost to harden the legacy database against a possible intrusion could come to $10 million, he says. The cost to notify customers in case of a breach might be $1 million. With those figures, says Spaltro, “it’s a valid business decision to accept the risk” of a security breach. “I will not invest $10 million to avoid a possible $1 million loss,” he suggests...

According to a 2006 report from the office of the United States Trade Representative (USTR), U.S. businesses are losing approximately $250 billion annually from trade secret theft. Federal law enforcement officials say the most targeted industries include biotechnologies and pharmaceutical research, advanced materials, weapons systems not yet classified, communications and encryption technologies, nanotechnology and quantum computing...

[I]t can take years until a trade secret theft is detected, says Smith: "You wouldn't even know it [your IP] was missing for five years, when a competitor would suddenly introduce a product that sold for one-third to one-fifth of the price of yours."...

For organizations that depend heavily on commercializing the product of their R&D activities, trade secrets are particularly important. Patents are equally important, but trade secrets differ from patents in a significant way. They are--as their name implies--secret. Whereas patents represent a set of exclusive rights granted by the government in exchange for the public disclosure of an invention, a trade secret is internal information or knowledge that a company claims it alone knows, and which is a valuable intangible asset.

While patent owners have certain legal protections from anyone using their patents without permission, companies are responsible for proving they have the right to legal protection of their trade secrets. According to the UTSA, your company must demonstrate that the specific information or knowledge is not generally known to the public, therefore it derives independent economic value; and that you have made reasonable efforts to make sure the knowledge remains secret.

A trade secret's validity can only be proven via litigation; there's no automatic protection just because your company believes it possesses one. Ironically, a trade secret must be stolen or compromised before you can attempt to demonstrate it is legally a trade secret. Once in litigation, your company must convince the court of three points: secrecy, value and security. Inevitably, the most difficult element to demonstrate is that your company had reasonable controls in place to protect the secrecy of the IP in question...

John Landwehr, Adobe's director of security solutions and strategy, believes that the best protection of sensitive data happens at the document level: "Given the range of devices that IP can live on--from desktops, to laptops, to PDAs and mobile phones--we think that the only viable way to persistently protect that information is if the protection travels with the document."

However, a word of caution about some of these products designed to protect confidential data: Because the vast majority are based on rule-set driven engines, the number of false positives they generate can be significant.

Friday, May 18, 2007

The slide above is from Gartner analyst Greg Young's presentation at the 2006 Gartner IT Security Summit, Deconfusicating Network Intrusion Prevention (.pdf). "Deconfusicating" appears to be a fake synonym for simplifying. I bet that was supposed to confuse an IDS, but not an IPS. Funny that stopping an attack requires detecting it, but never mind.

Someone recently recommended I read this presentation, so I took a look. It's basically a push for Gartner's vision of "Next Generation Firewalls" (NGFW), which I agree are do-everything boxes that will eventually collapse into security switches or Stiennon-esque "secure network fabric." The funny thing about all those IPS deployments is that I continue to hear about organizations that utilize only a fraction or none of the IPS blocking capability, and instead use them as -- wait for it -- IDS. Hmm.

That still doesn't account for the major problem with a prevention-only mindset. Let's face the facts: there are events which transpire on the network which worry you, but about which you can't reliably make a policy-based allow or deny decision. When business realities rule (which they always do) you let the traffic through. Where's the IPS now? It's an IDS.

There are also events you have no idea how to identify prior to nontechnical incident detection. If you care at all about security you're going to want to keep track of what's happening on the network so you can scope the incident once you know what to look for. I call that one form of Network Security Monitoring (NSM).

At about the same time I saw the 2006 Gartner slides I read IDS in Mid-Morph, an interview with Gene Schultz, long time security veteran. The interview states:

Schultz says there are already signs of new life. For one thing, IDS data is being used as part of intelligence-collection for forensics, he says. "People are gathering a wide range of data about behavior in machines, the state of memory, etc. and combining it to find patterns of attacks.

Intrusion detection is one rendition of going more toward the route of intelligence-collection. Instead of focusing on micro-details like packet dumps, [security analysts] are looking at patterns of activity through intensive system and network analysis on a global scale, to determine what the potential threats are."

Schultz attributes this to a new breed of intrusion detection analyst, "more like an intelligence analyst, especially in the government."

I wonder if Gene read any of my books or articles? For the last five years I've defined NSM as the

collection, analysis, and escalation of indications and warnings to detect and respond to intrusions.

Chapter one from Tao is online and must say the word intelligence a dozen times.

Incidentally, if you're near Sydney I'll be teaching my NSM course on 25 May 2007. If you're near Santa Clara I'll be teaching on 20 June 2007. Thank you.

As of that date, the minimum experience requirement for certification will be four years or three years with a college degree or equivalent life experience. The current requirements for the CISSP call for three years of experience...

The "equivalent life experience" provision is intended for mature professionals who did not obtain a college degree but are in positions where a college degree would normally be required...

You may remember these changes were announced about a month after 16 year old Namit Merchant passed the CISSP exam, according to a December 2001 SecurityFocus report.

I passed the CISSP in late 2001 as well (I was almost 30, not 16), so all I needed was three years of relevant work experience. Since 1 January 2003, you could have three years of experience plus one of the approved credentials. Those include many certs from SANS, for example.

The new requirements for the CISSP, announced this week, are:

Effective 1 October 2007, the minimum experience requirement for certification will be five years of relevant work experience in two or more of the 10 domains of the CISSP CBK®, a taxonomy of information security topics recognized by professionals worldwide, or four years of work experience with an applicable college degree or a credential from the (ISC)²-approved list.

Currently, CISSP candidates are required to have four years of work experience or three years of experience with an applicable college degree or a credential from the (ISC)²-approved list, in one or more of the 10 domains of the CISSP CBK.

I am not sure why (ISC)² is increasing the experience requirement. I don't think five years of "experience" will make that much of a difference compared to four years of experience plus a degree or credential. Honestly, equating a degree with a certification like CompTIA Security+ (on the "approved list") is really a joke, or should be.

Experience is not the only change:

Also effective 1 October, CISSP candidates will be required to obtain an endorsement of their candidature exclusively from an (ISC)²-certified professional in good standing.

Currently, candidates can be endorsed by an officer from the candidate’s organization if no CISSP endorsement can be obtained. The professional endorsing the candidate can hold any (ISC)² base certification – CISSP, Systems Security Certified Practitioner (SSCP®) or Certification and Accreditation Professional (CAP®).

This is an anti-fraud attempt. I think it is too late. From the rumblings I've heard, cheating on exams like CISSP is not uncommon. One bad apple can "earn" the CISSP and then "endorse" all his buddies.

Maybe (ISC)² is finally starting to behave like employed French workers, protecting those who already have the certification at the expense of those on the outside? In other words, are there too many CISSPs chasing too few jobs? The latest press release states:

“With an estimated 1.5 million people working in information security globally, the nearly 50,000 CISSPs remain an elite group of professionals that are leading this industry,” Zeitler said. “(ISC)² will continue to assess its certification criteria and processes, as well as its examinations and educational programs, to ensure that remains the case.”

50,000! Less than five years ago the press release (ISC)² RECOGNIZES 10,000th CISSP said only 2,000 CISSPs were certified in 1999, and 10,000 was reached in October 2002.

I chose a self study route, and devoted around 2 months for the preparation. Locked myself in and had very little to no time for the family, I’d told them what I was up to, both my wife and son were very supporting. Every weekday I would dedicate 3 to 4 hours, and on weekends 5 to 6 hours for preparation. The last week before exam, I took leave from work and dedicated around 12 hours straight everyday for 7 days. To cope with the physical and mental tensions I did 45 minutes yoga in the morning and 20 minutes meditation in the afternoon. I took a break or stretched for 5 to 15 minutes after every 1 or 2 hours of studies.

That is ridiculous. I would expect someone who wants to be considered a "security professional" to be well-enough versed in the CISSP material to not require seven straight days of 12-hour study sessions, on top of the previous seven weeks of study.

I prepared for the test in 2001 by reading the first edition of the Krutz and Vines CISSP guide, followed by the Exam Cram the night before. That was it. No boot camp, no study marathons, no weeks of study groups. I had about four years of experience and I figured that if (ISC)² required three years, I should be ok. I finished the test in 90 minutes and that was it.

Database ninja David Litchfield told me he posted the latest in a series of lengthy articles on investigating Oracle database incidents. Specifically, he asked me to review the newest article on Live Response (.pdf) given my background. I recommend checking out the whole set of articles at Database Security.

Speaking of database security, I got a chance to see Alexander Kornbrust of Red-Database-Security GmbH talk about Oracle (in)security at CONFidence 2007. His talk reminded me of comments Thomas Ptacek once made about certain software being indefensible ten years ago, whereas now we have a fighting chance with some software. After hearing Alex's talk I think Oracle belongs in the indefensible category. Oracle appears to be at least five years behind their peer group in terms of producing "secure" code.

(I put "secure" in quotation marks because I don't believe anything is really "secure," but on relative terms Oracle seems far behind those with more robust secure development lifecycles and patch response processes.)

Sunday, May 13, 2007

I just listened to my third of the Three Wise Men, Ross Anderson, courtesy of Gary McGraw's Silver Bullet Podcast. This is another must-heed. During the podcast Prof. Anderson mentioned the following:

With respect to secure software development: As tools improve, we continue to "build bigger and better disasters." That echoes a theme in my previous posts.

"If someone is going to call themselves a security engineer, then they have to learn how things fail." This means studying history and contemporary security disasters. That's an argument for my National Digital Security Board.

Prof. Anderson mentioned potential compulsory registration for security professionals in the UK as a consequence of legislation requiring the registration of bouncers at clubs. Beware such an event here. Talk about unintended consequences.

Finally, Prof. Anderson warned of vulnerabilities in Near Field Communication (NFC) technology. For goodness sake, can we slow down the deployment of fundamentally broken technologies?

By the way, not only is the excellent Security Engineering now online, but the first seven chapters can also be downloaded in .mp3 format.

I just blogged about a new podcast by the first of my Three Wise Men, namely Marcus Ranum. My second of the Three Wise Men for today is Dan Geer. I just noticed his testimony to the Subcommittee on Emerging Threats, Cybersecurity, and Science and Technology last month has been published. This is another must-heed collection of smart ideas. Brian Krebs summarized the hearing in his story Nation's Cyber Plan Outdated, Lawmakers Told. Dr. Geer's testimony included this gem:

I urge the Congress to put explaining the past, particularly for the purpose of assigning blame, behind itself. Demanding report cards, legislating under the influence of adrenaline, imagining that cybersecurity is an end rather than merely a means — all these and more inevitably prolong a world in which we are procedurally correct but factually stupid.

Amen. Also:

Information security is perhaps the hardest technical field on the planet. Nothing is stable, surprise is constant, and all defenders work at a permanent, structural disadvantage compared to the attackers. Because the demands for expertise so outstrip the supply, the fraction of all practitioners who are charlatans is rising. Because the demands of expertise are so difficult, the training deficit is critical. We do not have the time to create, as if from scratch, all the skills required. We must steal them from other fields where parallel challenges exist.

I wonder if the fraction of all practitioners with CISSP certifications is rising too?

The opposition is professional. It is no longer joyriders or braggarts. Because of the sheer complexity of modern, distributed, interdigitated, networked computer systems, the number of hiding places for unwanted software and unwanted visitors is very large.

The complexity, for the most part, comes from competitive pressure to add feature-richness to products; there is no market-leading product where one or a small group of people knows it in its entirety, and components from any pervasive system tend to be used and re-used in ways that even their designers did not anticipate.

Were there no attackers, this would be a miracle of efficiency and goodness. But unlike any other industrial product, information systems are at risk not from accident, not from cosmic radiation, and not from clumsy operation but from sentient opponents. The risk is not, as some would blithely say, “evolving” if by evolving the speaker means to invoke the course of Nature. The risk is due to intelligent design, and there is nothing random about it.

This is why one cannot legislate "security" for computers as one could try to legislate "safety" for automobiles. If people were crushing cars with boulders off bridges, shooting out car windows with AK-47s, or running over cars with tanks, no one would be blaming car manufacturers. They would (rightly!) be blaming the threats, as we should be doing with software and digital intruders.

This morning I delivered a talk at CONFidence 2007 in Krakow, Poland. I'd like to thank Andrzej Targosz and Jacek Artymiak for being the best hosts I've met at any conference. They picked me up at the airport, took me to dinner (along with dozens of others), and will take me to the airport (at 0430 no less!) tomorrow. I spent a good amount of time with Anton Chuvakin, Daniel Cid, and Stefano Zanero, which was very cool.

I'd like to mention two talks. First, I watched Paweł Pokrywka talk about a neat way to discover layer two LAN topology with crafted ARP packets. Unfortunately, his talk was in Polish and I didn't exactly learn how he does it! I spoke to Paweł briefly before my own talk, and he said he plans to release a paper (in English) and his code (called Etherbat), so I look forward to seeing both.

Second, I attended Dinis Cruz's talk on buffer overflows in .NET and ASP.NET. I'm afraid I can't say anything intelligent about his talk. Dinis is a coding ninja and I really only left his talk with one idea: all general-computing platforms can be broken. What's funny is I'm not even sure Dinis would agree with me. His point seemed to be that .NET and ASP.NET (as well as other managed code environments) are breakable, but if implemented "properly," could be made not breakable.

Let's think about that for a moment. I'm sure the people who dreamed up .NET and ASP.NET are really smart. However, there are problems that render them vulnerable to people like Dinis. "Fine," you say. "Let Dinis help Microsoft fix the problems." Ok, Dinis helps implement a new version of this framework. A year or so later someone with a different insight or skill comes along and breaks the new version. And so on. This is the history of general purpose computing. I don't see a way to break the cycle if we continue to want developers to be able to write general purpose software. I am not speaking as a developer, but as an historian. We have been walking this path for over 20 years and I don't see any improvements.

Update: I forgot to mention that I liked Anton Chuvakin's definition of forensics:

Computer forensics is the application of the scientific method to digital media to establish factual information for judicial review.

I just listened to the first episode of Marcus Ranum's new podcast Rear Guard Security. A previous commenter got it right; it's like listening to an academic lecture. If that gives you a negative impression, I mean Marcus is a good academic lecturer. These are the sorts of lessons you might buy through The Teaching Company, for example.

Marcus isn't talking about the latest and greatest m4d sk1llz that 31337 d00ds use to 0wn j00. Instead, he's questioning the very fundamentals of digital security and trying to equip the listener with deep understandings of difficult problems. Most vendors will hate what he says and others will think he's far too pessimistic. I think Marcus is largely right because (although he doesn't say this outright) he believes vulnerability-centric security is doomed to failure. (I noticed Matt Franz thinks I may be right, too.) When you realize that nothing you do will ultimately remove all vulnerabilities, we've got to improve our ability to deter, investigate, apprehend, prosecute, and incarcerate threats. (I'll say a little more on this in a future post.)

One area in which I disagree with Marcus is penetration testing. I think he might accept my position if framed properly, since he is a proponent of "science" to the degree we can aspire to that standard. In my post Follow-Up to Donn Parker Story I wrote:

Rather than spending resources measuring risk, I would prefer to see measurements like the following:

Time for a pen testing team of [low/high] skill with [external/internal] access to obtain unauthorized [stealthy/unstealthy] control of a specified asset using [public/custom] tools and [zero/complete] target knowledge. Note this measurement contains variables affecting the time to successfully compromise the asset.

Time for a target's intrusion detection team to identify said intruder (pen tester), and escalate incident details to the incident response team.

Time for a target's incident response team to contain and remove said intruder, and reconstitute the asset.

These are the operational sorts of problems that matter in the real world.

Yes, I did slightly modify number one to clarify meaning.
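Metric #1 could be sketched as a simple parameterized record. The `PenTestMeasurement` type and its field names below are my own illustrative inventions, not anything from an established methodology:

```python
from dataclasses import dataclass

# Illustrative sketch only: each field corresponds to one bracketed variable
# in metric #1, plus the observed time to unauthorized control.
@dataclass(frozen=True)
class PenTestMeasurement:
    skill: str               # "low" or "high"
    access: str              # "external" or "internal"
    stealth: str             # "stealthy" or "unstealthy"
    tools: str               # "public" or "custom"
    knowledge: str           # "zero" or "complete"
    hours_to_control: float  # time to obtain unauthorized control of the asset

# A low-skill, external, zero-knowledge team succeeding in 30 minutes
# is the alarming end of the spectrum described later in this post.
worst = PenTestMeasurement("low", "external", "unstealthy", "public", "zero", 0.5)
print(worst)
```

Fixing the variables this way is the point: two measurements are only comparable when every field except the time is held constant.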

In Answering Penetration Testing Questions I added a few more comments, specifically mentioning a source like SensePost Combat Grading as an example of how to rate the [low/high] variable. That's not necessarily the standard I would use (since I haven't seen it) but it shows professional pen testers do think about such issues. (Maybe I can chat with them at Black Hat?)

Marcus defines pen testing as attempting to determine the quality of an unknown quantity using another unknown quantity and a constantly varying set of conditions. In my #1 metric I try to reduce the number of variables such that the unknown qualities are fewer. I don't think it's ever possible to eliminate those variables, because the unit to be tested (the enterprise, usually) is never in a fixed state.

That reflects the real world. The enterprise attacked on Tuesday may not be like the enterprise on Wednesday. As much as I advocate knowing your network, I recognize that comprehensive, perfect knowledge cannot be obtained, due mainly to complexity but aggravated by many other factors. However, the same factors which complicate our defense can complicate the intruder's offense. Overall I do not see the problem with finding out how long it takes for a pen testing team operating within my chosen parameters to achieve a specified objective.

This is why I think there's room in Marcus' world for my point of view. I believe there is value in the outcome of these tests. In other words, a single test is worth a thousand theories. I can't count the number of times I've dealt with security people who refuse to believe a given incident has occurred (i.e., their box is rooted, it had no patches, etc.). Once you show them data, there's no room for excuses.

If it takes 30 minutes for a pen testing team of low skill with external access to obtain unauthorized unstealthy control of a specified asset using public tools and zero target knowledge, there's a problem.

If it takes an estimated 6 months for a pen testing team of high skill with internal access to obtain unauthorized stealthy control of a specified asset using private tools and full target knowledge, the situation is a lot different! (I say "estimated 6 months" because few if any customers are going to hire a pen team for that long. It is possible for pen teams to survey an architecture and estimate how long it would take for them to research, develop, and execute a custom zero-day.)

Incidentally, I'd rather not be the guy who debates Marcus on this issue if he wants to argue with a "pen tester." I don't do pen tests for a living. If he just wants an opposing point of view, I can probably provide that.

A goal of this project is to characterize internal enterprise traffic recorded at a medium-sized site, and to determine ways in which modern enterprise traffic is similar to wide-area Internet traffic, and ways in which it is quite different.

We have collected packet traces that span more than 100 hours of activity from a total of several thousand internal hosts. This wealth of data, which we are publicly releasing in anonymized form, spans a wide range of dimensions.

I decided to take a look at this data through the lens of Structured Traffic Analysis, which I discuss in Extrusion Detection and (IN)SECURE Magazine. I downloaded lbl-internal.20041004-1303.port001.dump.anon and took the following actions.
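As a rough sketch of the first action, the per-protocol breakdown that Tcpdstat reports can be approximated in pure Python using only the standard library. The parser below handles classic libpcap files with Ethernet framing; the synthetic three-packet trace is fabricated for demonstration and is not the LBL data:

```python
import struct
from collections import Counter

def protocol_breakdown(pcap_bytes):
    """Tally IPv4 protocol numbers (6=TCP, 17=UDP) in a libpcap byte stream."""
    magic, = struct.unpack_from("<I", pcap_bytes, 0)
    endian = "<" if magic == 0xa1b2c3d4 else ">"   # detect byte order from magic
    counts = Counter()
    offset = 24  # skip the 24-byte pcap global header
    while offset + 16 <= len(pcap_bytes):
        _sec, _usec, incl_len, _orig = struct.unpack_from(endian + "IIII",
                                                          pcap_bytes, offset)
        frame = pcap_bytes[offset + 16 : offset + 16 + incl_len]
        offset += 16 + incl_len
        if len(frame) >= 34 and frame[12:14] == b"\x08\x00":  # EtherType IPv4
            counts[frame[23]] += 1  # byte 9 of the IP header is the protocol
    return counts

def _fake_packet(proto):
    """Build one minimal Ethernet+IPv4 frame record for demonstration."""
    ip = bytearray(20)
    ip[0] = 0x45          # version 4, header length 20 bytes
    ip[9] = proto         # protocol: 6 = TCP, 17 = UDP
    frame = b"\x00" * 12 + b"\x08\x00" + bytes(ip)
    return struct.pack("<IIII", 0, 0, len(frame), len(frame)) + frame

header = struct.pack("<IHHiIII", 0xa1b2c3d4, 2, 4, 0, 0, 65535, 1)
trace = header + _fake_packet(6) + _fake_packet(17) + _fake_packet(6)
print(protocol_breakdown(trace))  # Counter({6: 2, 17: 1})
```

Real tools obviously do far more (rates, packet sizes, application guessing), but the core of a first-pass trace characterization is just this kind of tally.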

Unfortunately, Tethereal statistics don't really tell you anything different from Tcpdstat. Usually Tethereal statistics are more informative, but not here. For the sake of comparison, here is what Wireshark GUI statistics tell you.

Notice the format is different (but more human-friendly), and there is no way to copy or save it to a file. That would be a nice feature. (Tshark shows the same output as Tethereal, incidentally.)

The next step is to let Argus parse the file and then let Argus summarize the protocols it sees.

I like creating these session combinations because they show me connections to hosts and destination ports. I can review these target ports, for example, to look for sessions which might be interesting. This is as far as we can go, because all of the application layer details for these sessions have been eliminated by the Tcpmkpub anonymization tool.
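The "session combination" view amounts to a tally of (destination, destination port, protocol) tuples. The sketch below fakes a few flow records to show the idea; the addresses are invented, and real records would come from Argus clients like ra:

```python
from collections import Counter

# Invented flow records in (src, sport, dst, dport, proto) form -- the sort
# of session data an Argus client would emit for the anonymized trace.
flows = [
    ("10.0.0.5", 33211, "192.168.1.9",  80, "tcp"),
    ("10.0.0.5", 33215, "192.168.1.9",  80, "tcp"),
    ("10.0.0.7",  5060, "192.168.1.20", 53, "udp"),
]

# Summarize by destination host, destination port, and protocol.
combos = Counter((dst, dport, proto) for _src, _sport, dst, dport, proto in flows)
for (dst, dport, proto), n in combos.most_common():
    print(f"{dst}:{dport}/{proto}  sessions={n}")
```

Sorting that tally puts the most-contacted services first, which is exactly where I start looking for interesting target ports.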

At some point I plan to update this methodology using Argus 3.0, and automate the process.