Builders usually represent the glass as half full. While recognizing the seriousness of vulnerabilities and dangers in current practice, they are generally optimistic people who believe that by advancing the state they can change the world for the better.

Breakers usually represent the glass as half empty, and are often so pessimistic that you wonder, when listening to some of them, why the Internet hasn’t totally collapsed already and why any of us have money left unpilfered in our bank accounts. Their pessimism leads them to apply the current state of the art to exposing weaknesses and failures in current approaches.

I remembered I had seen something like this before and wrote On Breakership in response. Back then, however, the debate centered on calling people who helped create and defend systems "builders," while labeling people who exploited or at least tested systems "breakers." Mark seems to dismiss people who "break" systems in order to improve security, while praising builders as people who stay "optimistic." I don't think this is fair. My post Response to Is Vulnerability Research Ethical? explains my position, which is essentially that Offense and Defense Inform Each Other.

Next, in a section titled Clouds and Web Services to the Rescue, Mark describes how centralized data storage for his 6 home PCs at Amazon S3 is great for security. Unfortunately, all he is really showing is that there is value in offsite storage. Storing data at Amazon S3 doesn't help much when those 6 systems are part of Calin's botnet in Romania. This is an example of focusing on one aspect of security (availability) while ignoring the others (confidentiality and integrity). Don't get me wrong -- I think cloud storage is great and I use a variety of services myself. However, it only helps with one aspect of the security landscape, and if not properly used it introduces vulnerabilities and exposures not found in other models.
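To make the point concrete: offsite copies buy availability, but confidentiality and integrity have to be added on the client side. Here is a minimal sketch, using only Python's standard library, of sealing a backup with an HMAC before it leaves the machine so tampering with the offsite copy is detectable. The payload and key handling are purely illustrative, not anything from Mark's chapter:

```python
import hashlib
import hmac
import secrets

def seal(data: bytes, key: bytes) -> bytes:
    # Prepend a MAC so modification of the offsite copy is detectable:
    # S3 provides availability, but integrity must be added client-side.
    return hmac.new(key, data, hashlib.sha256).digest() + data

def verify(blob: bytes, key: bytes) -> bytes:
    tag, data = blob[:32], blob[32:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("offsite copy was modified")
    return data

key = secrets.token_bytes(32)
blob = seal(b"backup payload", key)
assert verify(blob, key) == b"backup payload"
```

Confidentiality would require an additional encryption step before upload, since nothing in the storage service itself stops the provider (or an attacker with the account credentials) from reading the data.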

Next Mark talks about using cloud services for data analysis.

Event logs can provide an incredible amount of forensic information, allowing us to reconstruct an event. The question may be as simple as which user reset a specific account password or as complex as which system process read a user’s token. Today there are, of course, log analysis tools and even a whole category of security tools called Security Event Managers (SEMs), but these don’t even begin to approach the capabilities of supercrunching. Current tools run on standard servers with pretty much standard hardware performing relatively crude analysis...

[T]he power and storage that is now available to us all if we embrace the new connected computing model will let us store vast amounts of security monitoring data for analysis and use the vast amounts of processing power to perform complex analysis. We will then be able to look for patterns and derive meaning from large data sets to predict security events rather than react to them. You read that correctly: we will be able to predict from a certain event the probability of a tertiary event taking place. This will allow us to provide context-sensitive security or make informed decisions about measures to head off trouble.

Does Mark mean that the real problem we've had with detecting and responding to security events is a lack of processing power? Good grief. I hear thoughts like this quite often from people who don't actually detect and respond to security incidents. Even academic security researchers in their ivory towers are probably laughing at Mark's angle. "Oh, you're right -- we've just been waiting for a supercomputer to run our algorithms!"
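For what it's worth, Mark's own example question ("which user reset a specific account password") needs only simple aggregation, not supercrunching. A toy sketch, with invented log lines and an invented threshold, shows the analysis is trivial on commodity hardware:

```python
from collections import Counter

# Hypothetical log format: "timestamp user action target"
logs = [
    "2009-06-01T09:14 alice password_reset bob",
    "2009-06-01T09:15 alice password_reset carol",
    "2009-06-01T09:16 alice password_reset dave",
    "2009-06-01T11:02 helpdesk password_reset eve",
]

# Count password resets per acting user.
resets = Counter(line.split()[1] for line in logs
                 if line.split()[2] == "password_reset")

# An account resetting many passwords in a short window is worth a look;
# the hard part is collecting and understanding the data, not the math.
suspicious = [user for user, n in resets.items() if n >= 3]
print(suspicious)  # ['alice']
```

The bottleneck in real incident response is data quality, instrumentation, and analyst judgment, which is exactly why "more processing power" misses the point.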

Mark then talks about using Business Process Management (BPM) software to improve security:

When security BPM software (and a global network to support it) emerges, companies will be able to outsource this step not just to a single company, in the hope that it has the necessary skills to provide the appropriate analysis, but to a global network of analysts. The BPM software will be able to route a task to an analyst who has a track record in a specific obscure technology (the best guy in the world at hacking system X or understanding language Y) or a company that can return an analysis within a specific time period. The analysts may be in a shack on a beach in the Maldives or in an office in London; it’s largely irrelevant, unless working hours and time zones are decision criteria...

This same fundamental change to the business process of security research will likely be extended to the intelligence feeds powering security technology, such as anti-virus engines, intrusion detection systems, and code review scanners. BPM software will be able to facilitate new business models, microchunking business processes to deliver the end solution faster, better, or more cheaply. This is potentially a major paradigm shift in many of the security technologies we have come to accept, decoupling the content from the delivery mechanism. In the future, thanks to BPM software security, analysts will be able to select the best anti-virus engine and the best analysis feed to fuel it — but they will probably not come from the same vendor.
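To make the routing idea concrete (this is not Mark's implementation; every name and number below is invented), skill-based task dispatch of the kind he describes might be sketched as:

```python
from dataclasses import dataclass

@dataclass
class Analyst:
    name: str
    skills: set
    turnaround_hours: int

def route(task_skill, deadline_hours, pool):
    # Pick the fastest qualified analyst; None if nobody fits the deadline.
    candidates = [a for a in pool
                  if task_skill in a.skills
                  and a.turnaround_hours <= deadline_hours]
    return min(candidates, key=lambda a: a.turnaround_hours, default=None)

pool = [
    Analyst("maldives_shack", {"system X"}, 48),
    Analyst("london_office", {"system X", "language Y"}, 12),
]
best = route("system X", 24, pool)
print(best.name)  # london_office
```

The matching logic is the easy part; as discussed below, the trust, vetting, and business-context problems are what the vision glosses over.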

Again, this is so detached from reality, I am curious how anyone could think this is possible. Mark works for Microsoft. Would you ever imagine Microsoft pivoting on a dime to "select the best anti-virus engine and the best analysis feed" -- or would they stick to their own product, because it's their own product? What about your company -- have you witnessed the organizational inertia associated with any IT product or system?

How about trust factors? What if "the best guy in the world at hacking system X or understanding language Y" works in a country with a reputation for industrial espionage? What if that guy was just hired by a competitor, or is working for a competitor now? How long does it take outside help to become familiar with the aspects of your business that eventually determine success? There's a reason why companies are not collections of free agents working independently.

Mark's last section talks about social networking for the security industry, talking about how people should share what they know. There are indeed certain collaborative forums where this works, but you are seldom if ever going to find any serious company telling other companies how their security defenses work, how they fail, and what is lost as a result of that failure. Individual collaboration occurs, but there could be severe consequences for a security staff member who unloads specific technical security information to a social network. The most productive associations that currently exist are found in certain private mailing lists, associations of peer companies that sign mutual nondisclosure agreements, and individual exchanges among peers.

Mark is a smart guy, but I think his prognoses for the security industry in his Beautiful Security chapter are largely incomplete and unrealistic.

Richard Bejtlich is teaching new classes in Las Vegas in 2009. Regular Las Vegas registration ends 1 July.

6 comments:

"Does Mark mean that the real problem we've had with detecting and responding to security events is a lack of processing power? Good grief. I hear thoughts like this quite often from people who don't actually detect and respond to security incidents. Even academic security researchers in their ivory towers are probably laughing at Mark's angle. 'Oh, you're right -- we've just been waiting for a supercomputer to run our algorithms!'"

Funny, I didn't read it that way. I thought he was suggesting that there are advanced analytical tools that we don't use to our advantage.

Ed Bellis' post on using BI tools in Infosec is a good start to introduce yourself to the concept.


I can't even begin to understand that second excerpt about connected computing. It's like saying the cloud will save us, without any context or understanding of what the cloud is; just that it must be awesome and advanced analysis tools are just waiting to be used...somehow...

It makes me envision the mad rush years ago to get web pages (or blogs) up. Many people had grand ideas but got completely turned off, because it's not as if you can just install Apache on Windows and suddenly you're hosting. Instead, we had to wait a few years for self-serve tools like MySpace to come out. That's what I see with "cloud" log management or analysis: a nice concept, but with such generic pick-and-choose plugins that no one can make it look or behave halfway decent.