Archives For
malware

One of the big new trends on cybersecurity blogs is to point out that people selling software for botnets, and offering hosting plans designed to stall any takedown attempt long enough to let you reset your operation if it’s ever caught, are really, really customer friendly and offer a quality of service we wish most big companies tried to emulate. Somehow, we are supposed to be shocked not only that the malware ecosystem is so well organized, but that it’s so easy for people to set up botnets, spam operations, and exploit kits, and that all those packages come on a digital equivalent of a silver platter, delivered by an evil cyber-Jeeves committed to making the botnets of your dreams a reality. But what else should we expect? Hacking takes some skill, and you need experienced programmers and network admins to find new exploits. There aren’t that many people out there capable of building really potent malware, and the demand for them is off the charts, meaning there’s easy money to be made by selling it to legions of criminals.

But the services are inherently illegal, some of the customers are very, very dangerous, as in a wing of the Russian mob or the Yakuza, and the only way to sell effectively is through a happy customer who hasn’t ordered a hit on you after buying an exploit kit. So of course you’re going to do all you can to ensure excellent customer service. Not only does it keep the money coming in, it boosts future sales, exactly like in any other line of business. And again, it’s really important to point out that while your typical angry customer has little recourse besides yelling at a call center manager across the world for an hour, the people who spend tens of thousands of dollars on a brand new Zeus or BlackHole platform, and thousands more per month on their malicious C&C server farm, have other means of voicing their dissatisfaction. To stay in business you must a) keep them happy, b) give them what they want, and c) cover their asses as much as you can, because if they’re going down, you may be going down with them. It would be more shocking if the malware industry wasn’t as polished and professionalized as it is today…

Recently, computers at two power plants were found to have been infected by three viruses that came from compromised USB drives, all three easily detectable by up-to-date anti-virus software, and both infections easily preventable had the plant operators followed the simplest cybersecurity procedures. If our infrastructure were ever the victim of a powerful cyberattack, the exploits’ success wouldn’t be so much a testament to the skills of the hackers as an indictment of the shoddy practices of those who simply don’t understand how to secure critical systems and don’t care to learn. Very few attacks we see out in the wild are truly brand new and sophisticated like Stuxnet, Duqu, Flame, Gauss, and Red October. Most target unpatched, poorly secured systems with easily exploitable administrator accounts, or out-of-date servers and database engines, attacks on which have been all but automated by simple PHP scripts. If you’re wondering how Anonymous can topple site after site during an op, now you know.

For example, take the pillaging of Stratfor. How did Anons get into their system? By using easily crackable default passwords and reading databases that were never encrypted. What about the huge data leak from Sony in which hundreds of thousands of accounts were compromised? An unpatched server provided a back door. Periodic leaks of credit card numbers from point-of-sale systems you find at local bars and restaurants? Out-of-date operating systems exposing admin accounts to external systems, a typical industry practice. The ability to get into AT&T users’ accounts just by typing the right URL? A total absence of security checks on the company’s sites, checks that should’ve been tested before the sites ever went live. I think you get the point. Keep up with the virus definitions, patches, and updates, test your software, don’t let external systems run as administrators on your network, and don’t stick random USB drives into mission-critical computers. If you don’t follow these elementary practices, you are, quite frankly, begging to be infected and hacked, and considering that we basically live on the web today, that’s just reckless.

While reporting on cyberwarfare and information security has been getting better and better as of late, there are still articles that posit baffling ideas about how to prevent a massive cyberattack launched by a government. The strange idea in question this time starts from a good premise, but ends up imagining cyberattacks the way one would imagine a conventional siege, somewhat reminiscent of the Battle of Thermopylae. Rather than envisioning an attack from the cloud able to hit a target out of the blue, it portrays network topologies as a kind of unseen battlefield on which one side can gain an advantage by exploiting the landscape…

Cyberspace depends on a physical infrastructure of computers and fiber, and this physical infrastructure is located on national territory or subject to national jurisdiction. Cyberspace is a hierarchy of networks, at the top of which a small number of companies carry the bulk of global traffic over the Internet “backbone.” International traffic, including attacks, enters the United States over this “backbone.” The backbone is a choke point, relatively easy to defend, and something that the NSA is already intimately familiar with (as are the other major powers that engage in signals intelligence). Sit at the boundary of the backbone and U.S. jurisdiction, monitor and intercept malware, and attacks can be blocked.

Technically, yes, you can use the main switches where the fiber stretching across the oceans reaches your shores and have a deep packet inspector check the headers of incoming packets to flag anything suspicious. But this really only works for relatively straightforward attacks and can easily be avoided. If you’re trying to inject a worm or a virus into a research lab’s computer, you’ll have to get through an anti-virus system which will scan your malware and compare its bytes to as many virus and worm signatures in its database as it reasonably can. With the sheer amount of malware out there today, these tools are good at stopping existing infections and their mutant versions. However, brand new attacks require reverse engineering and being run in a simulated environment to be identified. This is how Flame and Gauss went undetected for years, and they were most likely not even spread via the web but with infected flash drives, meaning that efforts to stop them with packet inspection would’ve been absolutely useless.
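To see why signature matching stops known infections but not brand new ones, here’s a minimal sketch in Python. The hash database and samples are invented for illustration, and real engines match byte patterns and heuristics rather than whole-file hashes, but the principle is the same: you can only flag what you’ve already catalogued.

```python
import hashlib

KNOWN_BAD_HASHES = {
    # SHA-256 of an empty file, standing in for a catalogued sample;
    # a real engine stores millions of byte-pattern signatures instead.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def looks_malicious(payload: bytes) -> bool:
    """Flag a payload only if its fingerprint is already on file."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

print(looks_malicious(b""))                        # True: a known sample
print(looks_malicious(b"brand new worm payload"))  # False: the zero-day sails through
```

A brand new worm whose fingerprint has never been seen produces no match at all, which is exactly why novel attacks need reverse engineering and sandboxed execution before they can be identified.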

A deep packet inspector sitting at the MAE-East or MAE-West exchange points (or IXPs) would have to work like an anti-virus suite to do what the author is proposing. It could stop someone from downloading an obvious virus or bit of spyware from a server in another nation, or deny an odd stream of packets from China or Iran thought to be malicious, but it’s not a choke point in any conventional sense. IXPs are not in the business of being traffic cops, so having them take on that role could have serious diplomatic repercussions, and aggressive filtering could have all sorts of nasty downstream effects on the ISPs connected to them. Considering that trying to flag traffic by country can be foiled by proxies and IP spoofing, and that complex new attacks would easily slip by an IXP-based anti-virus system, all the effort may not be worth it in the long run; it would simply cause glitches for users trying to watch Netflix or read the news on foreign websites, all while trying to prevent threats users can easily manage on their own.
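To make the proxy problem concrete, here’s a toy sketch of header-based filtering. The blocklist is a reserved documentation range, not a real attacker’s network, and real inspectors look at far more than source addresses, but the source address is precisely the part a proxy defeats.

```python
import ipaddress

# A hypothetical IXP-level filter that drops traffic from "suspicious"
# source ranges. 203.0.113.0/24 is a reserved documentation block.
BLOCKED_NETS = [ipaddress.ip_network("203.0.113.0/24")]

def allow(src_ip: str) -> bool:
    """Pass a packet unless its source address sits in a blocked range."""
    addr = ipaddress.ip_address(src_ip)
    return not any(addr in net for net in BLOCKED_NETS)

print(allow("203.0.113.7"))   # False: flagged when sent directly
print(allow("198.51.100.9"))  # True: the same attacker behind a proxy gets through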

So if creating IXP choke points would do little to stop the kind of complex attacks for which they’d be needed, why has there been so much talk about the Pentagon treating the internet as a top national security concern and trying to secure networks across America, or at the very least, be on call should anything go wrong? Why is the Secretary of Defense telling businesspeople that he views cybersecurity as the country’s biggest new challenge and has the Air Force on the job? My guess would be that some organizations and businesses simply haven’t been investing the time and attention they should have in security, and now see the DOD as the perfect, cost-effective way to secure their networks, even though they could thwart attacks and counter-hack on their own without getting the military on the case, perhaps not even realizing that they’re handing it a Sisyphean task. If they know they’re targets, the best thing for them to do is to secure their networks and be aggressive about hiring infosec experts, not call in the cavalry and expect it to stop a real threat from materializing, since it simply can’t perform such miracles…

Nowadays, if you hack into a company’s servers, the company might hack you right back. No, it won’t wipe your hard drive or infect you with a virus, of course. The goal is to figure out who you are and what you’re after, primarily because some of the most advanced hacks over the past few years have been cases of industrial and military espionage. And this is where legal wonks argue that the government should step in, lest a company launch a retaliatory cyberattack only to find that its target is actually a foreign intelligence agency. Case in point: Google. After a very sophisticated attack on its servers coming from China, and a messy international incident which saw a heated back-and-forth between the Chinese Communist Party and the company, the tech titan hacked back and found that its attackers were targeting defense and other tech companies with menacingly complex scripts, and the group, dubbed the Elderwood Gang, is still at it.

Their easy access to zero-day exploits and the coordination required to pull off their favorite type of attack point to backing from someone who can afford to employ highly skilled programmers and wants to spy on foreign defense and tech contractors, trying to steal blueprints, e-mail, and source code. Basically, what I’m trying to say is that the prevailing rumor paints the Elderwood Gang as part of the Chinese cyber-army long suspected of stealing classified documents from the U.N. and a lot of First World military contractors and government agencies via spyware. As the vast majority of the wired world knows, the United States isn’t exactly a hacking lightweight, and it more than likely deploys some very sophisticated spyware and malware of its own. So, say the legal wonks mentioned above, have the Air Force and the NSA tackle sophisticated hackers, not the companies that find themselves riddled with foreign spyware. The infection could’ve come from a Facebook game someone was playing at work that’s trying to steal PayPal logins, or it might be a worm from another government, and hacking back would provoke an international incident which would have to escalate all the way up to the military. But is that a workable approach?

No, not really. The fact is that the vast majority of infections are trying to steal financial information and/or turn your computer into a bot for DDOS attacks. Not only that, but the malware kits used to make viruses and worms are exploitable too. Only a tiny sliver of all the nasty stuff you might catch surfing random sites without some very heavy-duty firewalls and strict privacy and browser settings is actually complex malware from a nation state, and even then you’d have to be a very highly visible defense or tech company, since these attacks tend to come from whaling (which is like spear-phishing but targeted at high-level executives) and compromised industry message boards, blogs, and forums. Little fries don’t interest the spies much, so it’s really the Lockheed Martins, EADS’, and Northrop Grummans of the world that should be worried, but considering their cozy relationship with the militaries of their home states, they can always escalate things when they need to. And since all this is being done in secret, I highly doubt that a foreign intelligence agency hacked in retaliation will cry foul. That would just be an admission of guilt and the start of a major diplomatic clusterscrew.

Were we to start reporting hack attempt after hack attempt and infection after infection, we’d swamp the cybersecurity experts at the NSA and the Air Force so quickly that they’d be buried under a massive backlog of things to investigate within weeks while the torrents of reports kept coming. Antivirus makers already have vast databases running 24/7/365 that can identify who was infected with what kind of virus and how to remove it, and can keep up with 99.9% of infections out in the wild. Considering that they’re the primary discoverers of cyber weapons in use, they’re more than up to the job and can do it without defense establishments getting involved in their daily work. And when we take into account the sheer number of random trojans and worms out there, a hacked company has a 99.9% chance of pinging random hacker crews rather than something as threatening as the Elderwood Gang or as sophisticated as Flame or Stuxnet, and even then, no one on the other end will make a peep, because doing so would be a lot worse than keeping quiet and letting the retaliating businesses get away with it. Treaties and tens of billions in trade may be at stake, so it’s best to just let the accusations die down and resume the spying later. So if you get hacked, go ahead and hack back. You’re not going to start any wars by doing it.

Having established why antivirus software can’t really deal with cyberweapon-grade malware, let’s take a look at the really big news in the world of both information security and politics: an official reveal of Stuxnet’s origin, as excerpted in the New York Times, which at this point wasn’t much of a surprise to anyone. The entire web was certain that it was created by the NSA and that the process somehow involved Israel, because some of the malware’s critical flow controls were peppered with references to Jewish history and myth. But as the world now acts shocked that what it very vocally and unambiguously suspected actually happened, the contingent of Americans convinced that a cyberattack could cripple the nation’s infrastructure is waiting for the other shoe to drop. After all, while nations like Iran wouldn’t be able to offer a conventional response to a worm that crippled some of their centrifuges, isn’t creating malware much simpler and just as effective as a couple of bombs, and aren’t there thousands of network and software vulnerabilities to exploit as payback?

Well, if you recall one of my earlier posts on the subject, the second part of that statement is true but the first comes from a massive overestimation of what computers can and can’t do. As noted before, yes there are an amazing number of potential vulnerabilities, or infection vectors if we want to get fancy, but the vectors expose different functionality and far from all of the exposed functionality will actually let you do real damage. There’s a reason why it took a while to write Stuxnet; it had to use several different hacks to get into the right machine, it required expertise not only in how the centrifuges worked, but in how Siemens Step 7 operated and the OB35 data block structure, and finally, needed fake digital certificates to mask its true payload and convince humans to let it out of its sandbox and gain the access it needed to unpack and get to work. In other words, this wasn’t an easy task and by the nature of the beast, the software has to be extremely specialized. Drop any old worm into a control center of a power plant and it’s going to error out and be discovered when a system admin goes over the event log which would more than likely record the errors thrown by the worm during a crash.

Again, I’m sure that Live Free or Die Hard was a fascinating movie, but were it based in the real world instead of a technophobe’s nightmare, the hackers would’ve taken months to gain control of a small local power grid and would’ve spent tens of thousands of dollars, at least, to test their worms on the real equipment they thought was being used by the grid they were targeting. Spyware is a different matter altogether, and software a lot like Flame is nothing new. In fact, over the last five to seven years, hardly a few months have gone by without articles mentioning some mysterious spyware attributed to China found on the computers of officials in big international organizations, or in a U.S. lab working on national security matters. Does anyone really think that the U.S. isn’t going to spy back or try to gather intelligence on regimes with which it has an antagonistic relationship? True, it is so far the only country known to have used malware as a weapon, but it did so for a subtle act of industrial sabotage rather than a conventional military attack, and acts like this very, very rarely result in war, since spying and sabotage are facts of life for nations. In a high-profile case there’s a lot of tough talk and a lot of threats, but as soon as the press coverage fades, the saber rattling fades with it and things more or less return to normal.

Contrary to the gripes of many security types, your antivirus software is not useless. Were you to turn it off, many routine infections from contaminated websites, which nowadays are more likely to ask you to give to the poor than to pay for a live nude webcam show, would quickly turn your computer into a gold mine for a lazy identity thief armed with simple viruses. Really advanced and powerful malware using zero-day exploits, however, will always elude it, because that’s the nature of the arms race between virus writers and antivirus makers. Those with the means and motive attack systems and applications; the companies and researchers who discover a security breach either patch the vulnerability if possible, or add a new algorithm to look for the threat signature in the future, such as self-modifying files or local services suddenly trying to open an internet connection. And a piece of malware that slips by the antivirus and doesn’t get reported can work in silence for years, just like the widely reported cyberweapons Stuxnet and Flame did. To explain how these worms went unnoticed, both Ars Technica and Wired published a self-defensive missive by an antivirus company executive which basically boils down to an admission of defeat when it comes to proactively recognizing sophisticated malware.
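Those behavioral signatures can be sketched as a simple scoring rule. Here’s a toy version in Python; the event names and the threshold are invented for illustration, and a real engine weighs hundreds of behaviors, but the idea is the same: alert on combinations of suspicious actions rather than known byte patterns.

```python
# Behaviors of the kind mentioned above that a heuristic engine might watch for.
SUSPICIOUS = {"modifies_own_binary", "opens_unexpected_connection", "hooks_other_process"}

def should_flag(observed: set) -> bool:
    """Alert when two or more suspicious behaviors occur together."""
    return len(observed & SUSPICIOUS) >= 2

print(should_flag({"reads_own_config", "opens_unexpected_connection"}))     # False: one oddity alone
print(should_flag({"modifies_own_binary", "opens_unexpected_connection"}))  # True: a suspicious combination
```

The trade-off is visible even in this toy: set the threshold too low and legitimate software gets flagged constantly, too high and careful malware stays under it.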

Slightly longer version? Some of the most advanced cyberweapons work a lot like typical software and use a lot of the same tools, or use legitimate frameworks and packages included in most legitimate software as a launching pad for deploying hidden code designed to act in the sort of malicious ways an antivirus would flag as an attack, but executed in a way that circumvents the channels through which it would scan. So when Flame is installed, the antivirus checks its components, probably saying to itself "all right, we got what looks like a valid certificate, SQL, SSH, some files encrypted using a standard algorithm… yeah, it all checks out, that’s probably a network monitoring tool of some sort." And herein lies the problem. Start blocking all these tools or preventing their installation and you’re going to cripple perfectly valid applications or make them very difficult to install, because every bit of them will have to be approved by the user. How does the user know which piece of software or which DLL is legitimate and which one is not? For the antivirus to help there, it would need to read the decompiled code and make judgments about which behaviors are safe to execute on your machine.

But having an antivirus suite decompile and check the code of every application you run for possible threats is not much of a solution, because the decisions it makes are only as good as the judgment of the programmers who wrote it, and because a lot of perfectly legitimate applications have potentially exploitable code in them; a rather unfortunate but very real fact of life. Remember when your antivirus asked you if a program you installed just a couple of minutes ago could access the internet or modify a registry key? Just imagine being faced with a dialog asking you to decide whether some potentially exploitable function call in one of your programs should be allowed to run or not, faced with the following disassembly snippet to help you make a decision…
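Something along these lines, a fragment invented purely for illustration of the kind of x86 dump such a dialog would present:

```
00401a24  mov   eax, dword ptr [ebp-0x4]
00401a27  push  eax
00401a28  call  0x00401b30        ; WriteProcessMemory?
00401a2d  test  eax, eax
00401a2f  jz    0x00401b7f
00401a35  mov   dword ptr [esi+0x18], eax
```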

Certainly you can see why an antivirus suite that tries to predict malicious behavior, rather than simply watching for something suspicious happening on your system, simply wouldn’t be practical. No user, no matter how advanced, wants to view computer-generated flowcharts and disassembly dumps before being able to run a piece of software, and nontechnical users confronted with something like the scary mess above may just turn their computers off and sob quietly as they imagine their machines crawling with viruses, worms, back doors for identity thieves looking for their banking information, and other nightmarish scenarios. Conspiracy theorist after conspiracy theorist would start posting such disassembly dumps to Prison Planet, Rense, and ATS, portraying them as proof that the Illuminati are spying on them through their computers. Unless we want to parse every function call and variable assignment, look into every nook and cranny of every bit of software we’ve ever installed, or write our own operating systems, browsers, and applications, never use the web, and shut off and physically disconnect all our modems, we’ll just have to accept that there will always be malware and spyware, and the best we can do is keep our systems patched and basic defenses running.

A while ago, I wrote about the overhyped dangers of cyberattacks and the problems with using them to ruin an opponent’s infrastructure as imagined by doomsayers on Capitol Hill. And while the media is slowly but surely coming down to earth about the threat, a proper scholarly rebuke has now been published to give the press even more guidance on what cyberwarfare is really like and why it’s not the long-anticipated holy grail of asymmetric engagement for rogue nations. Short version? Since there’s a limit on how much an attack can do to a militarily superior enemy, an attacker would have to back up actions in cyberspace with planes, ships, missiles, and even good, old-fashioned boots on the ground when a conventional response comes, and the militarily powerful nation states that may be targeted are far from helpless against hackers and malware, and will launch their own cyberattacks on enemy infrastructure when provoked. So if you really think you could zap a major regional or global player into submission with a virus, you’ll need to rethink your strategy…

As mentioned in previous posts, an advanced energy and transportation infrastructure is huge, and though it’ll have its vulnerabilities, the sheer number of thoroughly researched and tested exploits it would take to impact even a small part of it would be daunting even for a fully fledged hacker army working around the clock. Salvos in a cyber war would rely on the assumptions that the discovered vulnerabilities haven’t been patched, that many of the targets are exactly what the hackers think they are, that the exploits will go undetected long enough for all the viruses to open back doors to critical systems, and that the IP addresses won’t change before the green light for the attack is given. Oh, and once this massive effort is discovered, expect most of the exploits used to get a quick patch, which means that new exploits will have to be found to mount a new attack. Disguising an attack is also getting progressively harder as militaries and intelligence agencies find new ways around obfuscation tools, or ways to hijack them to trace previously untraceable attacks. And that raises the possibility of a cyber war triggering a conventional one if the attack is severe enough or physically hurts the target nation’s civilians.

All that said, there may be an important caveat to consider. Both the academic rebuke and the objections to a lot of popular cyberwarfare gloom and doom address the idea of malware being used as a weapon, just like the Stuxnet virus was thought to have been used. In reality, cyberwars may actually employ spyware like the newly found Flame suite, which has silently been infecting computers in the Middle East and North Africa for a few years at least. Rather than trying to crudely bludgeon each other’s infrastructure, nation states seem to be focused on gathering intelligence to better aim diplomatic brawls and conventional strikes. And that makes a great deal of sense. Why huff and puff to shut off a power plant or two after months if not years of painstaking effort when you can precisely identify where and how to carry out an attack, or sneak a peek at what your enemy might be planning behind closed doors? It’s much easier and more effective anyway, since you have fairly well known and difficult-to-patch attack vectors to exploit, vectors like social media, e-mail, or servers which haven’t been properly updated and store easily accessible and weak passwords. Infiltrations can be subtle and last much longer without requiring esoteric knowledge. Contrary to what we’ve been told so many times, cyberwar won’t arrive with a bang but with an insidious whisper, and its main goal won’t be to destroy, but to quietly steal.

The foreign policy wonk blog Best Defense is making the case that we need to turn the inevitable wind-down of cybersecurity hysteria in the media, after the news splash made by the revelations about the Stuxnet virus, into our permanent attitude. Basically, the media and politicians are really good at overreacting and then forgetting an important issue once it's cleansed out of the news cycle, but we need a balance of both. We have to be aware, but not overly paranoid that we're going to get hit with another malicious horror that turns our machines against us. Sounds great, but it's kind of vague and cryptic. On a scale of one to ten, with one being completely calm and ten being tear-your-hair-out paranoid, how freaked out should we be? A five? A three? A six? While I'm not an expert at guesstimating the appropriate panic levels for security issues, what I can add is that making the kind of malware that can strike real-world targets is very, very hard, and we shouldn't be terrified of a viral infestation of our power plants and grids, because it takes a lot of time and effort to execute an attack.

One of the things that really set Stuxnet apart from other viruses was the fact that it targeted specific SCADA machines and showed great familiarity with Siemens Step 7 software at a very low level. And while that made it very scary, it also gave it very limited potency. This malware is less like a cluster bomb and more like a surgeon's knife, and like any surgical tool, it's designed with a very specific purpose in mind. Gaining highly intimate familiarity with another set of software tools designed to control other SCADA machines may require such exhaustive rewrites of something Stuxnet-like that we're not even dealing with the same worm anymore, and over the time it will take to develop it, who knows what new security patches will be applied to operating systems and the targeted software? Getting a worm onto a machine isn't such an easy task anymore, either. With a lot of users very aware, if not paranoid, about leaving strange files in their junk mail filters, and with operating systems popping up warnings every time something potentially compromising happens, you have to rely on escalation-of-privileges attacks and the users' own bad judgment, hence the prevalence of phishing and the somewhat more elaborate spear phishing attacks to circumvent passwords and user permission managers.

Now, there are obviously other ways of getting into systems, which rely on lax security around a wi-fi connection, or just physically spying on what people are doing to get a password or plug in a USB drive with a viral payload, but the point is that hacking into systems today is like trying to hit a moving target. It's not a trivial task if you encounter even a modicum of what's considered basic security nowadays and the users faithfully keep their machines updated. That said, industrial machinery is actually updated very infrequently, because if it's working, applying an update carries what seems like an unnecessary risk. Even the most reliable vendor will eventually stumble and something will go wrong. So why take a chance, right? That leaves SCADA machines which haven't been patched exposed for a very long time, giving potential attackers a long window during which they can get very familiar with how the machines work, how they communicate with the software, and at what points communication can be seamlessly altered and sent back to the machinery, triggering it to miss a crucial cycle or exceed some acceptable bound. So essentially, another Stuxnet is possible and big industrial machinery is a likely target, but the next worm will take a while to develop, will target a specific system, and we can thwart it with regular updates, redundant systems, and good security protocols.

While there’s a lot of talk about cyber-warfare, most examples of it are pretty transparent attempts to censor criticism via relatively crude denial of service attacks, read e-mails of political dissidents, and scare those who own sites and forums trying to recruit new terrorists. Despite the sometimes hysterical fears of assaults through an ethernet cable, some of them so extreme they result in ridiculous legislation, there haven’t been any known attempts to take down critical infrastructure nodes by electronic means. Until now. Tech blogs are abuzz about a worm known as Stuxnet, a terrifying piece of malware that targets computers and software that monitor and control day to day activities of industrial complexes and spreads through infected USB drives, or vulnerabilities in Windows-based networks. Once it finds its intended target, it can override alarms critical for real-time monitoring and insert malicious commands of its own. If the age of cyber-warfare is finally here, this worm is its opening salvo and a sign of some very, very menacing things to come in the foreseeable future.

Stuxnet is a fairly complex worm, about half a megabyte in size and written in at least three languages to do its work: an assembly language used in industrial SCADA machines; C, which maps closely to assembly and allows for fast command execution; and C++, an object-oriented extension of C. The worm goes to work by taking advantage of how generously Windows systems parse autorun files and, at first, presents a perfectly legitimate certificate from Realtek, which makes audio device drivers. Once the system thinks it's really dealing with a perfectly legitimate file, Stuxnet pulls a switch and installs a malicious library instead. When it's on a SCADA machine, it listens to the database queries being executed by a very particular software package, Siemens Step 7, used in power plants, pipelines, and nuclear complexes. Stuxnet seems to be primarily interested in mission-critical OB35 data blocks, required to manage processes that run at very fast cycles, things like air compressors, centrifuges, and turbines. And at this point, I'm sure you already see where this is going. This is the kind of worm that could cripple real-world targets.

The complexity of Stuxnet, the fact that it had a valid certificate, and its use of as many as four vulnerabilities which hadn't yet been patched, so-called zero-day vulnerabilities, raised a lot of speculation, and several security experts went on the record to say that Stuxnet may be the work of a government-funded lab. After this notion was first floated, some bloggers tried to connect the dots with a WikiLeaks report to come up with one hell of a conspiracy theory which sounds like a Hollywood blockbuster in the making. According to Threat Level, a rare tip from WikiLeaks' repository mentions a huge accident at the Natanz nuclear facility in Iran, where most of the Stuxnet infections were detected. After that accident, Iran lost some 800 uranium-enriching centrifuges, and the head of the nation's nuclear program suddenly left his post. The alleged culprit? Israel. The proof? An obscure quote about the possibility of a cyber-attack against Iranian nuclear plants, attributed to a former cabinet minister in an Israeli newspaper. All this is at best circumstantial, especially since we don't really know all the facts of the matter, but it's plenty for conspiracy theorists. In fact, they would probably consider this proof airtight, since they've built elaborate world-domination plots on much, much less.

But is Stuxnet really the first salvo in a real world cyber-war? It was distributed in a scattershot pattern, drifting from infected USB drives, through vulnerable networks, and trying to make its way to a SCADA machine. This would be a great strategy to conceal the source of the attack, but it’s also very messy and doesn’t guarantee a successful infection of the intended target. And while Iran did bear the brunt of the outbreak with some 34,000 cases, Indonesia and India had roughly 10,000 and 5,000 infections respectively, suggesting that whoever or whatever spread Stuxnet had some ties to these nations. Israel would have little of value to gain by infecting a swarm of SCADA machines anywhere but Iran, and could be far more precise in delivering a worm. As far as we know, neither India nor Indonesia have any nuclear deals with Iran or ship it any vital components. Plus, a tip published on the web about a top secret accident at Natanz isn’t proof that a worm was responsible. All we know is that Stuxnet could potentially be used for industrial sabotage, or trigger an accident, not that it actually did. Without looking at its source code, I couldn’t offer a qualified opinion on its full capabilities, and I would really rather not speculate without examining the worm firsthand.
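For a quick sense of proportion, here’s the back-of-the-envelope math on those three nations’ shares of the outbreak, keeping in mind that the counts are rough and other countries saw infections too.

```python
# Approximate infection counts for the three nations cited above; the
# worldwide total was larger, so these shares are only illustrative.
infections = {"Iran": 34_000, "Indonesia": 10_000, "India": 5_000}
total = sum(infections.values())  # 49,000 across these three alone

for country, count in infections.items():
    print(f"{country}: {count / total:.0%}")
```

Iran ends up with roughly 69 percent of these cases, but Indonesia and India still account for nearly a third between them, which is a lot of stray shrapnel for a supposedly surgical strike.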

Finally, we need to consider the charge that making Stuxnet work would require a nation state’s resources. It is suspicious that it used four zero-day vulnerabilities and bore Realtek’s signature, something that would require its makers to have access to the company’s private key. It would also require that at least one person behind the worm really understood Step 7 and what calls it made to its database. But none of this points to a government entity per se. Private keys can be stolen, there are forums on dark web networks trading newly discovered zero-day vulnerabilities, and if you want a manual on how to program SCADA machines, you can find in-depth tutorials with a quick search. In fact, many developers rely on searches to find the right syntax for system-specific and esoteric commands when they get stuck, so I would be very surprised if these tutorials were hard to come by. The same goes for actual SCADA machines. You could certainly buy one or two and run a number of tests to make sure your worm is stable and behaves as it should. Yes, you would run up a bill of a few thousand dollars, but it’s certainly not a prohibitive investment for a small group of people.
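And to show why a stolen private key is all it takes, here’s a simplified sketch. Real code signing uses asymmetric crypto rather than the HMAC stand-in below, and the vendor key is obviously made up, but the point holds: the verifier only checks the math, not who was actually holding the key when the file was signed.

```python
# Simplified stand-in for code signing. HMAC is symmetric, unlike real
# Authenticode signatures, but the failure mode is the same: anyone who
# holds the key can produce signatures the verifier will accept.
import hmac
import hashlib

vendor_key = b"hypothetical-vendor-private-key"  # what the attackers steal

def sign(payload, key):
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify(payload, signature, key):
    return hmac.compare_digest(sign(payload, key), signature)

legit = b"audio driver"
malicious = b"malicious payload"

# Signed with the stolen key, the malicious file verifies just as well
# as the legitimate one.
print(verify(legit, sign(legit, vendor_key), vendor_key))          # True
print(verify(malicious, sign(malicious, vendor_key), vendor_key))  # True
```

The certificate check never asks whether the signer had any business signing that file, which is exactly why a stolen key is worth so much on the black market.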

So in other words, nothing about this screams government involvement. That said, Stuxnet does indicate that whoever built it knew a good deal about SCADA systems, understood low-level vulnerabilities in the targeted Windows systems, and had a decent budget. The very fact that it exists should be disconcerting, and points to potential acts of industrial sabotage, espionage, or both, on a level that hasn’t been seen so far. And I could certainly imagine a corporation in control of a Stuxnet strain sabotaging a competitor’s plants, or holding them hostage if a particularly reviled executive decides to play rough. But why Iran had such a huge rate of infections is going to bug security experts for a very long time to come, and I’m afraid that until a few intelligence agencies decide to come clean about Iran’s nuclear program, we’ll have little evidence to say anything decisive about it. Meanwhile, we should be looking for, and worrying about, Stuxnet 2.0…