Debunking The Conficker-Iranian Nuclear Program Connection

Recent claims allude to a Conficker-Stuxnet relationship, but are they really credible?

Toward the end of last week, a number of industry colleagues pointed me in the direction of some emerging news stories that cited the purported research of John Bumgarner of the U.S. Cyber Consequences Unit in Washington, D.C. The numerous articles covering his research suggest that the Conficker worm was unleashed by the perpetrators of Stuxnet in order to provide an initial entry vector onto Iranian systems located at the Natanz nuclear fuel enrichment plant, in addition to providing a smoke screen designed to “mask” the real nature of the operation.

Further, reports citing Bumgarner’s research go on to claim that when Conficker was released, its primary mission was to identify IT assets that were “strategic Iranian facilities” and mark them accordingly, presumably for later seed infections of Stuxnet.

Wow. Where to begin.

I’ll first say I’m a big fan of well-grounded research that helps further our understanding of the supply chain that led to the creation of notable threats. Back at Black Hat USA 2010, my Stuxnet talk was actually more about methods through which we can identify relationships between components of a given threat in order to bring us closer to a profile (or even the identity) of those responsible for its creation. Ironically, Conficker was actually one of the other samples I’ve frequently used to provide a contrast to Stuxnet, specifically when it comes to the quality of the code.

What I essentially demonstrated was that while much of Stuxnet was highly sophisticated, it had several components that, unlike Conficker, were really poorly written. I digress.

Many alternative Stuxnet theories have been presented since it emerged into the public domain in July 2010. But this one particularly caught my attention because it plays on some common misconceptions about Stuxnet that still persist. As reported by the press, Bumgarner’s theory appears to hinge on the following premises:

Conficker was used as a smokescreen and intended to “hunt down” assets associated with the Iranian nuclear program, doing no damage to infected systems: Many fail to grasp that any time a system becomes infected with an unknown component, many organizations will quite correctly consider it no longer trusted, and therefore require an effort to remove the infection, or more likely to completely reinstall the impacted asset. This costs money, and regardless of whether the threat was actually proactive in causing damage, any infection is, by nature, damaging. At the high end, Conficker is estimated to have infected as many as 35 million devices. That’s a lot of collateral damage when you consider clean-up efforts and other secondary costs associated with responding to the threat. Even if you don’t buy into this, as Rik Ferguson at Trend Micro correctly points out, Conficker was leveraged both to build botnets and to spread fake antivirus software to victims’ systems, which, in turn, were used for other nefarious purposes.

Both Stuxnet and Conficker demonstrated significant technical sophistication: It's true that Conficker and Stuxnet boasted features that were either comparatively sophisticated or wholly without precedent. Other technical similarities also exist, including exploitation of MS08-067. However, Stuxnet and Conficker have more differences than they do commonalities. One of the major issues that plagued Stuxnet was its use of a highly trivial and fragile command-and-control (C&C) mechanism (something that Duqu improved on significantly).

Conficker, on the other hand, utilized a much more sophisticated C&C mechanism and significantly more robust update functionality through its use of cryptography. Further, various code quality metrics that I ran back in 2010 clearly demonstrated an extremely low likelihood that the two threats were authored by the same group of individuals.
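To make the contrast concrete: Conficker's C&C resilience came largely from a date-seeded domain generation algorithm (DGA), where bot and operator independently derive the same daily list of candidate rendezvous domains, and from cryptographically signed update payloads. The sketch below illustrates the DGA idea only; it is NOT Conficker's actual algorithm (which used its own pseudo-random generator, specific TLD pools, and in later variants tens of thousands of candidate domains per day), and the seed string and TLD list here are invented for illustration.

```python
import hashlib
from datetime import date

def generate_domains(day: date, count: int = 250, seed: str = "demo-seed") -> list:
    """Deterministically derive a list of pseudo-random domains from a date.

    Simplified illustration of a date-seeded DGA -- not Conficker's
    real algorithm. Both the bot and its operator can compute the same
    list offline; the operator need register only one candidate, which
    makes preemptive takedown of all rendezvous points a moving target.
    """
    tlds = ["com", "net", "org", "info", "biz"]  # illustrative TLD pool
    domains = []
    for i in range(count):
        # Hash the date, a shared seed, and a counter for deterministic bytes
        digest = hashlib.sha256(f"{day.isoformat()}|{seed}|{i}".encode()).digest()
        length = 8 + digest[0] % 5  # label of 8-12 lowercase letters
        label = "".join(chr(ord("a") + b % 26) for b in digest[1:1 + length])
        domains.append(f"{label}.{tlds[digest[-1] % len(tlds)]}")
    return domains

todays_list = generate_domains(date(2009, 4, 1))
print(len(todays_list))  # 250 candidate rendezvous domains for that day
```

Contrast this with a C&C scheme hardwired to a couple of static domains: blocking those domains decapitates the botnet, which is essentially the fragility Stuxnet exhibited.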

Conficker was Stuxnet's “door-kicker”: Quite simply, Stuxnet didn’t need a “door kicker,” especially not at the cost of tens of millions of Conficker infections. We know that the authors of Stuxnet had intimate, insider-level knowledge of and likely physical access to the Natanz fuel enrichment plant. Is there a chance that a number of Stuxnet-infected systems were also coincidentally infected with Conficker at some point in time? Of course! Is it likely that that’s how Stuxnet got there? Bzzzzt.

I’ll close by saying that neither John Bumgarner nor the U.S. Cyber Consequences Unit has released any formal, technical research papers supporting a Stuxnet/Conficker link, but I would absolutely urge them to do so should they remain confident in their theory. Until then, though, this one is going into my growing pile of Stuxnet conspiracy theory fails.

Published: 2015-03-31