A commenter over at the Small Wars Council thought my theory about the possible motive of the Iranian Metasploit hijinks would make for a good movie–but, I assume, not the most credible analysis. First, typing commands into msfconsole is a little hard to dramatize on screen. About the closest we’ve come to making the command line sexy was having Trinity from The Matrix run an nmap scan and a fictitious SSH exploit, and Trinity did it wearing a leather outfit (see article and YouTube clip*). The real perpetrator may be doing it unshaven and in a bathrobe. At least, that’s how I do my best work. Second, I am, like, so totally serious about my theory of someone more interested in disrupting intelligence agencies than Iran’s nuclear program. Here’s why:

There are certainly credible reasons why a professional intelligence agency would bang away in Iranian networks with Metasploit. If the Iranians are shutting down key parts of their network (I don’t know how vital the automation bits mentioned in Mikko’s piece are) to do forensics to figure out how the attacker is getting in, maybe blasting “Thunderstruck” is the next best thing to some fancy exploit to ruin centrifuges. Or, perhaps, some group who wants to disrupt Iran’s nuclear program is flooding them with garbage attacks to overwhelm Iranian attempts to analyze their more ‘long-term,’ targeted malware. That analysis takes time and personnel who are in short supply even in the U.S. Think of it, to borrow a phrase from one of my brilliant friends, Federico Rosario, as “a DOS attack on skilled personnel.” Others have mentioned playing “Thunderstruck” as a kind of psychological warfare on trust in terms of Iranian infrastructure.
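Rosario’s phrase can be made concrete with a toy queueing sketch. Every number here is invented, and the model is a deliberate oversimplification: the point is only that if garbage attacks arrive faster than a fixed pool of analysts can triage them, the backlog of unexamined incidents grows without bound, burying the ‘long-term’ malware in noise.

```python
def alerts_backlog(incoming_per_day, analysts, triage_per_analyst, days):
    """Toy model of a malware-analysis queue.

    Each day, `incoming_per_day` new incidents arrive and each analyst
    can fully triage `triage_per_analyst` of them. Returns the backlog
    of unanalyzed incidents remaining after `days` days.
    """
    backlog = 0
    for _ in range(days):
        backlog += incoming_per_day
        backlog = max(0, backlog - analysts * triage_per_analyst)
    return backlog

# Invented figures: 4 analysts, 5 incidents triaged per analyst per day.
normal = alerts_backlog(20, 4, 5, days=30)   # capacity keeps up: backlog 0
flooded = alerts_backlog(60, 4, 5, days=30)  # +40/day uncleared: backlog 1200
```

Under the made-up baseline the team keeps pace, but tripling the noise leaves twelve hundred incidents unexamined after a month–a “DOS attack on skilled personnel” with no exploit required.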

However, these types of attacks seem every bit as likely to disrupt professional intelligence agencies’ access as help them in some way. I am also unimpressed with the PSYOPS theory, because (1) this has already been accomplished via previous malware and (2) announcing one’s presence contradicts the IC’s modus operandi in terms of being able to discreetly collect information and disrupt systems. That’s why I think there is another motive at work here. The reported worm and Metasploit hijinks may even be two separate actors.

—

* – Funnily enough, that little 1:09 clip dramatizes pretty much every policy maker’s fear of an infrastructure attack on the US.

As I tweeted a day ago, Mikko Hypponen had an interesting blog post in which he discusses an email from a scientist working at the Atomic Energy Organization of Iran. The details are a little unclear, but the claim is that some mix of a worm, Metasploit, and hacked computers blasting AC/DC at all hours of the night has been disrupting two nuclear facilities within Iran. The AC/DC bit–“Thunderstruck,” as a matter of fact–has attracted the most attention.

It is hard to get a clear picture of what is going on, because there are really two separate issues: this worm and the possible use of Metasploit. Metasploit, of course, is not a virus; it’s an exploitation framework. Download it here if you’re curious. HD Moore, Metasploit’s creator, tweeted:

If the e-mail to Mikko Hypponen is truthful and accurate, this strikes me as the act of an amateur–not a state, much less the US. Moreover, the fact that there is no effort to be covert makes me think this is a grand middle finger to US and other intelligence agencies. It is as if the perpetrator is saying, “You developed malware and cryptographic attacks over the course of years to penetrate computers relevant to the Iranian nuclear program; I did it by downloading an app freely available to anyone.” They probably even used a commonly available exploit, too. I can’t see someone burning a 0-day to blast “Thunderstruck” to some Iranian engineers just for, as the kids say, “the lulz.”
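For readers unsure what “exploitation framework” means in practice: Metasploit itself is written in Ruby, and its real API looks nothing like this, but the basic design–many exploit modules sharing one interface, with a console that loads, configures, and runs them–can be sketched in a few lines of Python. Every name below (`Framework`, `ExploitModule`, `FakeSSHExploit`) is invented for illustration; none is a real exploit or a real Metasploit class.

```python
class ExploitModule:
    """Base class: a framework bundles many exploits behind one interface."""
    name = "base"
    options = {}

    def check(self, target):
        """Report whether the target looks vulnerable (no harm done)."""
        raise NotImplementedError

    def run(self, target, payload):
        """Attempt the exploit and deliver the payload on success."""
        raise NotImplementedError


class Framework:
    """Plays msfconsole's role: register modules, select one, run it."""
    def __init__(self):
        self.modules = {}

    def register(self, module_cls):
        self.modules[module_cls.name] = module_cls

    def use(self, name):
        return self.modules[name]()


class FakeSSHExploit(ExploitModule):
    """A purely hypothetical module: 'vulnerable' means an SSH-1 banner."""
    name = "fake/ssh_demo"
    options = {"RPORT": 22}

    def check(self, target):
        return target.get("ssh_banner", "").startswith("SSH-1")

    def run(self, target, payload):
        return payload if self.check(target) else None


fw = Framework()
fw.register(FakeSSHExploit)
mod = fw.use("fake/ssh_demo")  # the console's 'use <module>' step
```

The design point is the plugin pattern: once thousands of modules share the `check`/`run` interface, attacking becomes a matter of selecting and configuring, which is exactly why the bar for an amateur is so low.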

If I had to ‘profile’ the perpetrator, I would suggest a lone male with a grudge or grievance against one or more US intelligence agencies (perhaps a past applicant). If there is a political motive, I would suggest someone affiliated with Anonymous or another like-minded group who might think disrupting Iranian networks would mean disrupting any ongoing US intelligence operation. Either way, the objective in my view is disrupting or discrediting US efforts rather than Iran’s nuclear program. That’s pure speculation, but that is the impression I get.

Dave Aitel of Immunity delivered a talk at the 20th USENIX Security Symposium, which built on the in-progress talk I discussed here and here. It is worth watching and attempting to understand. This is not a 100% endorsement, but there are a few lessons here that transcend the so-called “cyber” domain and apply to strategic thinking about technology in some really profound ways.

The first is that attackers win and defenders lose not because it is an inherent feature of cyber war but because the attackers have a better strategy. To start, Aitel lays out a number of defenders’ excuses for why attackers are winning: inadequate resources, attackers “only have to be right once,” users are easy targets, etc. Aitel continues, “They keep saying it’s asymmetric, because they made a strategic choice and lost.” This goes to Rupert Smith’s point about asymmetric warfare–that it is a phrase “invented to explain a situation in which conventional states were threatened by unconventional powers but in which conventional military power would be capable of both deterring the threat and responding to it.”* Rather than challenge the strategy, defenders have redefined the environment to allow for their failure.

Secondly, this poor strategy comes from cultural and technological weaknesses–and technological weaknesses are really cultural weaknesses. This is a case I have been trying to make for the last four years on everything from small arms design to cyber war, but Aitel does it with superior technological knowledge and better smart-ass commentary.** In terms of “cyber warfare,” Aitel says defenders are unwilling to say no to insecure systems or designs (e.g., most browsers and SSL VPNs, he argues). This itself is not very shocking. Spend one day as an IT consultant with an interest in security, and you will get push-back when you ask users to change their behavior. Aitel goes further and says that the whole process–the whole human process–for designing and implementing security is broken. As I wrote at the council, “After all, when someone writes an exploit or takes advantage of some misconfiguration in a network to gain or deny access, they are attacking humans and human processes ultimately. The medium–a wireless network, an embedded device, whatever–is inconsequential.” I point this out because it is relatively easy for me to say this; my technological understanding of offensive techniques is modest at best, and attacking networks (much less making attack platforms) is not my business. Aitel is in the business of finding, writing, and selling exploits–and he’s telling you he’s winning because the way humans approach security is broken, not because of some whiz-bang widget.

On the opposite side of this human equation, attackers are, as Aitel says, “mature, self-organizing, [and] highly motivated.” Do you think the government’s recent approach to USCYBERCOM, etc. is “mature?” Government functionaries are still waiting on wonks to hand them a piece of doctrine that will most likely be wrong before they act. It reminds me of what Boyd said: “[I]f you have one doctrine, you’re a dinosaur.” We are standing up dinosaurs, and this is a fundamental cultural problem.

What concerns me more is how these cultural problems transcend this “cyber” domain. Do we have our money invested in the right technology in terms of engaging near-peer competitors, whether it’s another aircraft carrier vulnerable to ASBM attack or some other high-dollar system? Do we examine flaws in human processes throughout Defense, such as the failures to address insider threats like Nidal Hasan or Bradley Manning? How does, say, our strategy in Afghanistan rate in terms of maturity compared to that of the Taliban?

I would still like to see his talk written up into a larger work, but it is well worth the effort for defenders–no matter what the domain–to consider Aitel’s challenge: “Attackers win because they have better strategy. The problem is not intractable.” Now, as Big Boi once quipped, “Go on and marinate on that for a minute.”

** – For example, “Why are these browsers not written in Java? Why is that? It’s retarded.” This is the hacker equivalent of Boyd saying, “I’ve never built an airplane before, but I could fuck up and do better than this.” I’m not 100% certain about Java, but I love this comment.

In Mikko Hypponen’s fantastic TED talk, there were two big takeaways. First, we must be prepared for those times when–not if–hackers will be able to break systems (perhaps even the system) in which we live and work. This is not simply a matter of low-tech alternatives (although that is not a bad idea) but also making sure our technology is resilient. Secondly, those on the side of law and order must find those who are about to become cybercriminals, as Hypponen says, “with the skills but without the opportunities” and co-opt them into using their skills for good.

While I could not agree more with these two priorities, I do not share Hypponen’s optimism that they will be addressed. In terms of resilience, the start of the rebooted Battlestar Galactica in which humanity is annihilated through an enemy exploiting vulnerabilities in complex, hypertechnological military systems seems completely plausible to me. (The miniseries should be required viewing for RMA kool-aid drinkers.) In terms of recruiting those on the verge of becoming cybercriminals or, indeed, cyberguerrillas like Anonymous, I see an outcome that is even less hopeful than the Cylons’ onslaught. We are failing–miserably–at co-opting talent.

There are a lot of reasons for this, but one of the most important requires broaching an uncomfortable subject. Earlier in the month, Robert Graham of Errata Security made a provocative claim that, while white hat hackers are on the side of the “law,” they are not on the “side of law enforcement” or, as Graham puts it, “order.” He goes on to explain:

The issue is not “law” but “order”. Police believe their job is not just to enforce the law but also to maintain order. White-hats are disruptive. While they are on the same side of the “law”, they are on opposite sides of “order”.

During the J. Edgar Hoover era, the FBI investigated and wiretapped anybody deemed a troublemaker, from Einstein to Martin Luther King. White-hats aren’t as noble as MLK, but neither are white-hats anarchists who cause disruption for disruption’s sake. White-hats believe that cybersecurity research is like speech: short term disruption for long term benefits to society.

I have personal experience with this. In 2007, I gave a speech at the biggest white-hat conference. It was nothing special, about reverse engineering to find problems in a security product. Two days before the speech, FBI agents showed up at my office and threatened me in order to get me to stop the talk, on (false) grounds of national security. Specifically, the agents threatened to taint my FBI file so that I could never pass a background check, and thus never work for the government again. I respond poorly to threats, so I gave the talk anyway.

I point this out because it so aptly proves my point. I am not on the side of law enforcement, because law enforcement has put me on the other side. One of the requirements (from the above post) to volunteer is to pass a background check — a check that I can no longer pass (in theory). I cannot volunteer to train law enforcement because they perceive me as the enemy.

This is exactly why I am so dire about recruitment. First, there is a distinctly libertarian bent throughout hacker culture suspicious of government and resistant to impingements on freedoms as far-flung as free speech and fair use of digital media. This, as Graham argues, puts those inclined to respect the “law” against “order.” Secondly, abuses do more to create cybercriminals than curtail them.

This got me thinking about David Kilcullen’s idea of “the accidental guerrilla”–that, in a counterinsurgency, even the slightest misapplication of force or failure to understand the complexities of one’s operating environment (culturally or otherwise) may lead to the exponential creation of insurgents. Misinterpretation of this idea has caused many to come to the conclusion that less force is always better, but Kilcullen does not suggest this. Similarly, it is not simply that the U.S. has begun to project force through this crudely defined “cyber” realm but rather that it does so without any understanding of its human terrain.

I am throwing some counterinsurgency buzzwords around too flippantly; thinking about population-centric cyberwarfare would be a useful lens, but there needs to be a long hard look at past failures in addressing those Americans previously labeled as insurgents–for example, the Civil Rights Movement, as Graham so aptly notes. There also needs to be a look at the “short-term disruptions” that Graham touches on within the context of cyberguerrillas as well as counterinsurgency practice at large.

I am not purporting any of this to be new or even my own; I am sure folks like John Robb have been connecting these dots for a long time. However, I am flagging this as an issue that needs more attention.

In the thick of my dissertation, I have not had much on my mind aside from a very particular ‘brand’ of counterinsurgency. However, the recent “cyberattack” on Lockheed Martin has me thinking about the issue again. Bruce Schneier has a good rundown of several stories on the attack, but there is a lot of rumor and speculation right now. What might a foreign actor aim to gain from an attack on Lockheed? Perhaps an individual could learn the classified capabilities of a fighter jet and provide them to one or more clients to design better air defenses. The act could potentially save a U.S. adversary millions while costing the U.S. billions in the acquisition of a fighter facing obsolescence due to poor information security. In this speculative scenario, cyberwar would be an asymmetric capability, right? In a work-in-progress presentation entitled “The Three Cyber-War Fallacies,” Dave Aitel says no.

Aitel, a veteran of the NSA and CEO of Immunity, Inc., seeks to debunk the following three claims:

1. Cyberwar is asymmetric.
2. Cyberwar is non-kinetic.
3. Cyberwar is not attributable.

These are all provocative claims worth examining, but the first is the most provocative in my view. Despite the low cost components of cyberwar, there are two “carrier class” expenses that go unaccounted for: maintenance and analysis. The argument has me intrigued, but I would like some more elaboration here. Perhaps we will read more as Aitel works through how challenging these three misconceptions–as he sees them–will shape changes in policy and technology.
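The maintenance-and-analysis point can be made concrete with a toy cost model. Every figure below is invented and none comes from Aitel’s talk; the sketch only shows the shape of the argument: even when the exploit itself is cheap, the recurring “carrier class” costs of sustaining access and analyzing the take dominate over the life of a campaign.

```python
def campaign_cost(exploit_dev, ops_per_year, maintenance_per_year,
                  analysts, analyst_salary, years):
    """Toy total-cost model for a sustained cyber operation.

    `exploit_dev` is a one-off; everything else recurs annually:
    operations, maintenance (keeping access alive as targets patch and
    reconfigure), and the analysts who read what the access produces.
    """
    recurring = (ops_per_year + maintenance_per_year
                 + analysts * analyst_salary)
    return exploit_dev + recurring * years

# Invented figures for illustration only.
total = campaign_cost(
    exploit_dev=50_000,          # the 'cheap' part everyone counts
    ops_per_year=200_000,
    maintenance_per_year=500_000,
    analysts=10,
    analyst_salary=150_000,
    years=5,
)
# total == 11_050_000: the one-off exploit is under 0.5% of the bill
```

If the real cost structure looks anything like this, “cyberwar is cheap” only describes the entry ticket, not the campaign–which is Aitel’s case against the asymmetry fallacy.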

At any rate, read through the presentation. It makes for interesting reading even in its unfinished form.

As you have likely read, YouTube has pulled selected videos featuring Anwar al-Awlaki under pressure from the American and British governments. Pauline Neville-Jones, the British Minister of Security, argued that the material is a major component of recruitment and radicalization, providing an impetus for acts of terror, and should be pulled. In response, Adam Rawnsley of Danger Room argues that removing the videos “is a losing battle” and that “Britain and America would be better off addressing the content of jihadi media with similar urgency to its distribution.” Even if the material is made unavailable on YouTube, there will be other sources for distribution, including sites dedicated to counterterrorism such as this one. Howard G. Clark of FREEradicals goes even further. In “10 Reasons Why Blocking Awlaki Youtube Speeches is Counter-Productive” (HT “Thoughts of a Technocrat“), he suggests that blocking the message adds credibility, prestige, and attention to individuals such as Awlaki. It is as if being blocked is itself a force multiplier. While I did not agree with all of Clark’s points, two struck me:

6) Front page news will also make Awlaki seem like an ideological pinnacle to English speakers susceptible to radicalisation, when in fact his lectures—although slick, simple, and in easy-to-understand colloquial Americanized English—reek of academic slothfulness, lack of historical understanding, and a sophomoric education on Islam’s original texts.

7) Over the past four years over two dozen terrorist attack plotters were found to have viewed Awlaki’s videos before their planned attacks. But not in one case is there proof that his speeches actually inspired these conspirators. It may be more logical that those already considering violent extremism would naturally watch his and other videos. Listening to Awlaki may be a symptom instead of driver of radicalisation.

This made me wonder whether or not removing the videos was beneficial from the viewpoint of combating terrorism. In point 6, Clark implies that there is an open space for constructing a counternarrative. By leaving the more radical Awlaki videos online, we can exploit the weaknesses in his argument and pose a viable alternative. In fact, simply removing the videos may sabotage our counternarrative from the beginning, giving radicals ammunition to say, “See, they talk about ‘freedom’ when all they really want to do is silence opposition [as they do in regime X, regime Y, etc.]” At the very least, we need to know what radicals are saying to combat their message. In point 7, he suggests that removing the videos constitutes a failure to address the underlying causes of Jihadi radicalization rather than a mere “symptom.” From a COIN perspective, American interests may be better served in acknowledging and addressing select grievances in Awlaki’s message rather than silencing the messenger. To me, removing the video seems to be the digital equivalent of counterterrorism without the COIN.

Many may object that the U.S. should not cede the Internet to terrorists. Certainly, I do not advocate ‘ceding’ the Internet. Rather than waging a ‘cat and mouse’ technological battle with terrorists–essentially a denial-of-service attack, via lawfare, government pressure, or offensive ‘cyber’ action, against the sites that host their message–we should engage them in an ideological contest. However, I wonder if this approach isn’t one method to separate the population from insurgents in the 21st century. What, then, is the proper balance between denying terrorists a soap box and countering their message? What are your thoughts and concerns?

There have been a number of excellent pieces on the cyberwarfare dimension in the ongoing conflict between Georgia, the separatist regions of South Ossetia and Abkhazia, and Russia. Here is a partial list:

After looking through photos of charred bodies among the detritus of war (via Danger Room), it might be easy to dismiss the significance of cyberwarfare. However, one should remember the question is not whether an unavailable service or defaced website outweighs the human cost of war but rather how cyberwar fits into its larger scope.

On a tactical level, there are a number of questions we can ask. Can cyberwarfare play a role in psychological warfare? Will it disrupt “network-centric warfare” and battlefield communication? How does it serve intelligence gathering? Certainly, cyberwarfare has had an impact in the propaganda battle (for example, see John Little’s post “South Ossetian Separatist Propaganda On the Web”). Moreover, cyberwar’s ability to capture the public imagination–as well as that of the military establishment–is itself a force multiplier whether cyberwarfare is media-generated hype or not. Even if its threat has been overestimated, perceptions within the US, Russia, China, and elsewhere have led to resources being devoted to this mode of warfare that might have been devoted to conventional weapons. This fact alone illustrates that the cultural impact of a particular weapons system can exceed its destructive capacity.

What if culture–the “human terrain”–is the primary battlefield of cyberwar, not cyberspace? This could explain the failure of the U.S. military’s attempts to “dominate” cyberspace, a notion more in line with Revolution in Military Affairs (RMA) doctrine than with the more “culturally oriented” counterinsurgency (COIN) theory. This brings me to John Robb’s post in which he discusses the advantages of cyberwarfare:

Deniability. Offensive operations by government computers/personnel against a target nation is an act of war. Actions by civilian vigilantes is not and can be disowned. An inability to point to an offending organization can make blame difficult to affix: note the speed at which the US tech press was willing to deny a Russian cyberwar against Estonia.

A huge talent pool. Rather than spend money on training a limited number of uniformed personnel (likely poorly), it’s possible to draw on a talent pool of hundreds of thousands of participants (from hackers to IT professionals to cybercriminals). Given the rapid decay/turnover in skills, high rates of innovation, high compensation, and the value of real-world expertise, the best people for cyberwarfare don’t work (nor will they ever) in the government. The best you can do is rent/entice them for a while.

Access to the best Resources/Weaponry. The best tools for cyberwarfare are developed in the cybercriminal community. They have vast and rapidly growing capabilities: a plethora of botnets, worms, compromised computers within target networks, identity information, etc. Further, these capabilities are cheap to rent.

With these three advantages in mind, a DDoS attack may have more in common with insurgency/counterinsurgency tactics than “shock and awe.” First, cyberwarfare has more in common with covert action–or perhaps “overt covert” action–than with the spectacle of rapid dominance. Combatants are difficult to identify, and attacks are hard to recognize. A website slowed with regular usage or down for maintenance could trigger fears of cyber attacks, analogous to the power outages in the United States that stimulated worries about terrorism. Secondly, this “huge talent pool” is not an organized, hierarchical army but rather an insurgency. Actors are as much unconnected as they are interconnected, defying the grasp of “full-spectrum dominance.” Lastly, the best resources and weapons are not the product of the most advanced military-industrial establishment but a criminal underground–and they are cheap, easy to use, and available to anyone.
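The ambiguity underlying Robb’s “deniability” point can be shown with a deliberately naive sketch. All numbers are invented, and real detection systems are far subtler, but they face the same underlying problem: by traffic volume alone, a legitimate flash crowd and a low-rate botnet are indistinguishable.

```python
def looks_like_attack(requests_per_sec, baseline_rps, factor=5):
    """Deliberately naive detector: flag any spike above `factor` times
    the normal request rate as a DDoS."""
    return requests_per_sec > factor * baseline_rps

# Invented numbers: a news-driven surge and a modest botnet produce
# the same request rate, so the detector fires on both.
BASELINE = 100          # normal requests/sec
flash_crowd_rps = 800   # legitimate visitors after a breaking story
small_botnet_rps = 800  # hostile, but deniable

for rps in (flash_crowd_rps, small_botnet_rps):
    print(looks_like_attack(rps, BASELINE))  # True both times
```

This is why an outage can “trigger fears of cyber attacks” whether or not one occurred–and, conversely, why an attacker hiding among civilian vigilantes is so hard to blame.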

Robb goes on to make great points on why the United States fails at cyberwarfare and what should be done to establish a cyberwarfare capability:

Engage, co-opt, and protect cybercriminals. Essentially, use this influence to deter domestic commercial attacks and encourage an external focus. This keeps the skills sharp and the powder dry.

Seed the movement. Once the decision to launch a cyberattack is made, start it off right. Purchase botnets covertly from criminal networks to launch attacks, feed ‘patriotic’ blogs to incite attacks and list targets, etc.

Get out of the way. Don’t interfere. Don’t prosecute participants. Take notes.

For these reasons, cyberwarfare should be something left to the intelligence community, equipped with an Internet connection and a cultural awareness of hackers and the intended target, rather than the Air Force with its outmoded RMA high-technology fetish.

“If by chance you were to ask me which ornaments I would desire above all others in my house, I would reply, without much pause for reflection, arms and books.”
—Fra Sabba da Castiglione, Knight of St. John