Posted
by
timothy
on Thursday May 19, 2011 @02:59PM
from the but-motel-6-keeps-the-lights-on dept.

alphadogg writes "A planned presentation on security vulnerabilities in Siemens industrial control systems was pulled Wednesday over worries that the information in the talk was too dangerous to be released. Independent security researcher Brian Meixell and Dillon Beresford, with NSS Labs, had been planning to talk Wednesday at a Dallas security conference about problems in Siemens PLC systems, the industrial computers widely used to open and shut valves on factory floors and power plants, control centrifuges, and even operate systems on warships. But the researchers decided to pull the talk at the last minute after Siemens and the US Department of Homeland Security pointed out the possible scope of the problem."

Perfect example of security through obscurity. Yeah, everyday script kiddies won't be messing around in the systems, but those dedicated to doing damage or spying have the time and means to get to know the systems. And it's even easier for them because the systems aren't properly secured.

The Iranians didn't run afoul of the obscure nature of PLCs; they found out it isn't a good idea to buy your infrastructure from foreign countries... See, in the U.S. we are careful to only use... oh, never mind.

The high-ticket projects all attract multinational corporations. Those corporations aren't shy about buying smaller-scale operations with technology they want. Even if you do use technology developed only in your own country, is it not sold elsewhere? Are there vulnerable systems anywhere within the local technology entity? Even if they've got 10 vulnerabilities instead of 100,000+, they're still vulnerable.

I've yet to see one region that didn't have any kind of electronic or software vulnerability whatsoever.

At this point it doesn't really matter so much who developed it. Regardless, we're still potential collateral damage, or potential targets of a fully disassembled/reverse-engineered/built-fresh-with-a-new-twist version, as well as of whatever the original authors might unleash. Whoever made it was shortsighted if they felt that even versions attempting to be very specific wouldn't be analyzed and modified, or cause some collateral damage as-is. Pruning the target-filtering code seems like it would be a relatively trivial task.

Iran was just a test case so that we could demonstrate just how vulnerable these things are and secure proper funding to lock ours down.

Security dude: "Some hostile country could sneak somebody in and sabotage our power plants with nothing more than malicious software!"

Congress critter: "OK, I don't really get what you're saying, but let's assume that you're right in theory. We've never seen an attack like that anywhere in the world, so why worry? Besides, I want to put in a giant duck pond and name it after myself."

That's not the bit that scares me the most. The bit that scares me the most is that anyone with an ounce of skill in reverse engineering can identify the security flaws used, and anyone with an ounce of skill in assembly can disassemble Stuxnet, alter what it targets, and launch the new variant.

By banning the talk, the DHS is preventing US industries from protecting themselves against economic warfare. Plenty of nations (China and Russia especially) are investing in cyber-warfare. There are plenty of amateurs out there with axes to grind (albeit often as delusional as the DHS's). It is simply not excusable for the US to be placed in this kind of danger.

Which is exactly why I really hope these researchers will present their findings to Siemens engineers so that the problems can be patched, and then give a talk about it. The stakes are pretty high with these systems, so hopefully a real fix will augment security via obscurity in this case.

Because if these researchers acting in a more or less intellectual manner found them, it is safe to assume that individuals without such a noble goal in mind will find and possibly exploit them. Releasing the information to Siemens first would hopefully prolong the search for the "bad guys", by getting rid of some potential vulnerabilities.

The whole industry is riddled with massive holes because we're all tied to legacy OPC, which relies on that massive dog's breakfast called DCOM. The slow adoption of OPC UA and even OPC WCF keeps the whole industry in a situation where it is easier to disable all security than to deal with DCOM, which makes the Siemens issue too easy to exploit. Every single bloody version of Windows has a different way of being configured, so no one bothers to do it right...

Did you RTFA? That's exactly why they decided not to give the talk: because Siemens hasn't fixed the problems.
As NSS CEO Rick Moy points out:

"The vendor had proposed a fix that turned out not to work, and we felt it would be potentially very negative to the public if information was put out without mitigation being available."... In the past, technology companies have threatened legal action against researchers, but Moy said that in this case the lawyers were not involved. "It's a temporary hold on the information; it's not that it's being buried," he said. "We just don't want to release it without mitigation being out there for the owners and operators of the SCADA equipment."

At my workplace, all our PLCs are on a process control network. It is isolated from the business network and internet completely. We assume that the PLCs are not secure and they are business critical. We can't take any chance a malware outbreak or hacker causes actual physical things to happen.

It makes doing work more difficult, and there are still some attack vectors.

This was perfectly viable 10-15 years ago. Nowadays, the requirements for data archiving, a process data historian, plant floor management, etc. make it almost impossible to have a truly, completely isolated process network. You always end up with a dual-homed computer or firewall somewhere on that network, and therefore a potential hole.

Depends on the design. Properly designed setups will have an air gap, with data transfer only via sneakernet, in the form of a hard disk or similar going from the SCADA side to the corporate systems. Real-time is desirable, but for some networks having the hole is too much of a risk, especially if you've got a Windows-based HMI system or similar in the mix. Seriously.

I work on a PLC system that has a single Ethernet TX pair to the rest of our network. It transmits stats blindly (with the help of a static entry in its ARP table) to a PC on the outside, where a small program listens and collates data. I've heard of similar things done with serial, fiber, and radio modems, etc.
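A minimal sketch of that blind-transmit pattern, assuming UDP and made-up field names and addresses (the comment doesn't specify the protocol): the sender fires datagrams at a fixed address and never listens for replies, so nothing can talk back into the control network, while the outside PC just folds samples into per-tag latest values.

```python
import json
import socket

# Assumed collector IP/port (example values only, not from the original post).
STATS_ADDR = ("192.0.2.10", 9999)

def send_stats(sock: socket.socket, stats: dict) -> None:
    """Fire-and-forget: one UDP datagram per sample, no ACK expected."""
    sock.sendto(json.dumps(stats).encode(), STATS_ADDR)

def collate(datagrams):
    """Collector side: fold incoming samples into per-tag latest values."""
    latest = {}
    for payload in datagrams:
        sample = json.loads(payload)
        latest[sample["tag"]] = sample["value"]
    return latest

# Example: three samples arrive; the collector keeps the newest per tag.
samples = [
    json.dumps({"tag": "widget_count", "value": 41}).encode(),
    json.dumps({"tag": "sprocket_temp", "value": 88.5}).encode(),
    json.dumps({"tag": "widget_count", "value": 42}).encode(),
]
print(collate(samples))  # {'widget_count': 42, 'sprocket_temp': 88.5}
```

The one-way property comes from the physical/network setup (a single TX pair, a static ARP entry), not from the code; the code just has to tolerate never getting an answer.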

Do you audit it often to make sure it's still air-gapped like you think it is? In many audits at power utilities where they had the same thinking, prosumer routers or switches were found tying the networks together, installed in a pinch for some ease-of-deployment or ease-of-use reason and then forgotten.

Certainly audits are a good thing, but we mustn't forget that we're talking about something that gets in and hides itself well, even deleting itself from some hardware along the way. An audit of hardware still only gives a snapshot in time. That laptop that was briefly plugged in, or machine that briefly had a USB key plugged in, may be long gone. Intrusion detection can help, but with things like traffic to a PLC using the normal ports, it may take deep inspection of every packet to see what's going on.
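For illustration, a hedged sketch of what that deep inspection might look like: classifying frames on the normal PLC port by function code and flagging writes. It uses Modbus/TCP as a stand-in, since Modbus is a common, openly documented PLC protocol; the Siemens PLCs in TFA actually speak their own S7 protocol, so this is illustrative, not a Siemens-specific detector.

```python
# Modbus function codes that modify PLC state (per the Modbus spec):
# 5/6 write single coil/register, 15/16 write multiple, 22 mask write,
# 23 read/write multiple registers.
WRITE_CODES = {5, 6, 15, 16, 22, 23}

def is_write_frame(frame: bytes) -> bool:
    """Modbus/TCP: 7-byte MBAP header, then the function code byte."""
    if len(frame) < 8:
        return False
    return frame[7] in WRITE_CODES

# Read Holding Registers (0x03) vs Write Single Register (0x06):
read_req  = bytes([0, 1, 0, 0, 0, 6, 1, 0x03, 0, 0, 0, 1])
write_req = bytes([0, 1, 0, 0, 0, 6, 1, 0x06, 0, 0, 0, 42])
print(is_write_frame(read_req), is_write_frame(write_req))  # False True
```

Even this trivial filter illustrates the commenter's point: the malicious traffic uses the same port and the same well-formed protocol as legitimate traffic, so only the payload semantics distinguish it.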

As it should be. But isolation does not require a complete elimination of remote monitoring. Our process control network has a server on it which, via a hardware firewall, pumps data one way to another machine outside that emulates the view of the process network. This basically gives us complete remote monitoring without the ability to send data back to the network.

It makes it easy, and there are few if any attack vectors; when malware spreads around the business network (which is frequent), it has so far never made it onto the process network.

At my workplace, all our PLCs are on a process control network. It is isolated from the business network and internet completely.

You are utterly kidding yourself if you think that your PLC network is "isolated".
Does anyone ever request data from it? How do you transfer the data...with a USB key maybe?
How are the controllers programmed? With a workstation that is plugged into which network...and never the internet?
I would strongly suggest that you read up a bit on Stuxnet. The details may blow your mind...

At my workplace, all our PLCs are on a process control network. It is isolated from the business network and internet completely.

You are utterly kidding yourself if you think that your PLC network is "isolated".

It's not difficult.

1. decide what parameters are going to be reported from the secure system to the outside world and how frequently. Say "widget-count" and "sprocket-temperature", three sprocket temperatures per widget count.

There is a notion in security engineering of responsible disclosure, which is letting a company know about a vulnerability long enough before you present it to allow the company to fix it and deploy the fix. I believe that what happened here was that the company complained that they did not have enough time to fix the problem and deploy the fix, and that DHS and the researchers agreed with that conclusion. I do not think this is terribly far-fetched, and I doubt that there is a conspiracy to leave vulnerabilities in industrial equipment used here in America, not when the Iranians want to get back at the US and Israel for Stuxnet.

Perfect example of security through obscurity. Yeah, everyday script kiddies won't be messing around in the systems, but those dedicated to doing damage or spying have the time and means to get to know the systems. And it's even easier for them because the systems aren't properly secured.

I'll be at work for a few more hours. In my living room at home there is a suitcase with a lot of cash in it. I didn't lock my front door, I didn't even close it. I won't tell you where I live. Security through obscurity.

1) it doesn't matter what secret *I* don't know about the location of the suitcase, what matters is whether *you've* been under surveillance for the last couple of weeks. If so, your suitcase is already gone by the time you go back home.

Not everybody gets hacked, but if it's a juicy target the attack is going to be properly organized and when a vulnerability window appears for a few hours, it will get used.

1) it doesn't matter what secret *I* don't know about the location of the suitcase...

Yes it does. You need information in order to actually pull it off.

Not everybody gets hacked, but if it's a juicy target the attack is going to be properly organized and when a vulnerability window appears for a few hours, it will get used.

All you are really saying here is that there is no such thing as security because nothing can be protected against an attack by an entity with infinite energy and resources.

2) even if I don't know the location of your suitcase full of money, I can break into all my neighbours' places and steal their suitcases full of money

Right, my obscured info is protecting me.

If a vulnerability exists on one person's computer, then that vulnerability exists on all the computers throughout the world which use the same OS and relevant settings. The bad guys don't need to hack *your* computer, they only need to hack *some* computer.

Sure. However, here's another way of saying it: If they know you have a vulnerability, they can get in. You are right, though, in that they are saved the trip over there to find out about it.

All you are really saying here is that there is no such thing as security
because nothing can be protected against an attack by an entity with infinite
energy and resources.

Correct in a sense. The analogy I'd use is the lottery: pick any one person you like, and their chance of winning is effectively zero. But the chance that someone will win is close to 1. It's correct for a single person to assume they will not win, but it's incorrect for the lottery organisers to assume that they will not have to pay out the jackpot.

Did you RTFA? They're waiting for Siemens to fix the issues first, a common practice in security research. Siemens and DHS didn't force them to pull the talk and didn't even get lawyers involved. So please stop with your accusations. You clearly lack an understanding of the situation at hand.

What is being argued is that Siemens did not have enough time to patch this vulnerability and deploy that patch in major installations of these systems. I do not doubt it; the real question is whether or not they are busy deploying a fix, and I would not doubt that they are. Stuxnet is out there being studied by people who would use it to attack US factories, if they could, and I would bet that the US government is putting pressure on Siemens to fix the problem. If, within a year, the talk is still being withheld, then there will be real cause for concern.

> The argument that some knowledge is too dangerous to know is specious and flawed.

That's not the reasoning given. The knowledge IS known. Some knowledge is dangerous to disseminate. This is a sad fact of humanity, but a fact. Given opportunity and knowledge of vulnerability, you will get attempts to use and abuse knowledge with similar results. People are eager to exercise their imagination and reluctant to exercise restraint or critical thought. I can understand their position.

> The argument that some knowledge is too dangerous to know is specious and flawed.

That's not the reasoning given. The knowledge IS known. Some knowledge is dangerous to disseminate. This is a sad fact of humanity, but a fact. Given opportunity and knowledge of vulnerability, you will get attempts to use and abuse knowledge with similar results. People are eager to exercise their imagination and reluctant to exercise restraint or critical thought. I can understand their position.

Thank you for replying instead of simply down-modding an argument you don't agree with. Others seem to prefer retaliation to debate.

Let's look at this from another perspective. Everyone knows there are problems with Siemens' PLCs; that's been known since Stuxnet got reverse engineered. While there's no problem whatsoever with sharing the information about specific vulnerabilities with Siemens (indeed, making sure they're among the first to know), what additional danger would be presented by sharing that information more widely?

I used to work in provisioning in a telco and it entirely depends on who's managing the plant. We'd install circuits in some power plants that were so strict that they insisted on fiber use only. We'd run copper to an access point outside their security perimeter then have a mux convert it to fiber to run across the perimeter into the facility where it would terminate in an outer building. Their security plan did not allow ANY outside network connections to the plant itself. They had networked equipment but it was all housed in an outer building with no connection to the main plant or control systems. They refused to allow copper on the premises because it's relatively easy to splice into and carry elsewhere. Fiber would be much more difficult to splice and bring in.

Other facilities were less secure. I remember getting a panicked call from someone shouting "The dam's gonna bust!!!" They had a single "circuit" they paid about $20 a month for that was nothing more than a single copper pair that ran from some building to the local dam. They'd apply +5 volts to the line to open the dam, and -5 volts would close it. They'd reacted too slowly to rising waters, and the flood had swamped the copper pair they used to control the dam. They wanted us to send a phone tech into their overflowing dam to repair the circuit so they could open it from the safety of their administrative building. They had a hard time understanding my near-hysterical laughter.

Heh... All it takes is a bit more effort, though it'd be a bit more obvious to pop a passive tap on a fiber run, since they're not small. Sadly, it's not sound thinking all the same. The attackers are as likely to attack the end nodes of the system, where the security is much, much weaker and there's copper to be compromised before it gets to the fiber loops. You can do as much or more damage by dinking with a substation's setup as with the generation plant itself.

You guys are way overthinking this. There is no connection to the outside world from the control equipment. The fiber that came in terminated in buildings outside what would be considered the power plant. I'm not sure what they used it for... likely they could measure data there or something. What the fiber was supposed to prevent was local staff getting bored and running their own bootleg connection into the building so they could watch porn on their critical workstations inside. Anyone on Slashdot could picture how that would end.

At our workplace an attacker would need to get through a firewall... another firewall... and another firewall as they work their way through the business network, the information network, and down to the process control network. That last firewall is a doozy too: one-way communication between two computers only.

But the researchers decided to pull the talk at the last minute after Siemens and the US Department of Homeland Security pointed out the possible scope of the problem."

Don't you mean the DHS told them not to do it or they would get a thorough anal probing in the airport security check on their way out of town. I'm pretty sure they understood the "scope of the problem" before they started doing the research (which was also probably the motivation for the research).

First of all, don't you realize every time you make a joke about "anal probes" at the airport, you're being not-so-subtly homophobic? Same thing with prison-rape jokes. I'm about as much a fan of those jokes as I am of the acts.

Didn't you read the part where the DHS CERT (a part of US-CERT, which falls under DHS but has nothing to do with the TSA...) told NSS something like, "Um, guys, the patch Siemens released doesn't work, and there are thousands of these devices deployed all over the place, including critical infrastructure"?

...doesn't the existence of a virus that can attack such devices make this a zero-day flaw? The hack is public, since anyone can disassemble the virus that's in the wild and see how it works.

And, frankly, I don't see it being awfully difficult for any black hat with a mind to it to rip out the prior payload and install one that can attack a wider range of devices. Surely it is in the interests of security for corporations to understand what they can do to mitigate the risk of this.

A lot of people seem to want to scream about censorship, but they're missing the point. This is one of the best case scenarios I've seen in relations between companies and security researchers.

For those who can't be bothered to RTFA, here's a summary.

Researchers found a serious flaw. The company developed a fix. It turned out that the fix was flawed. The company told the researchers about the potential impact of giving the talk before the flaw was fixed, and the researchers voluntarily postponed the talk while a better fix is built.

That's it, and it looks like everybody did the best thing they could. Isn't this what we'd want Siemens to do? "You've got a right to give your talk, but we'd like you to postpone it. Here's why. Your call."

The info exists. The info is valuable to people who want to do something bad. Valuable information will find a supplier, provided the demand (and pay) is high enough.

People who want information for nefarious reasons don't care about legal troubles connected with the acquisition of said information. People who want information to prevent said nefarious actions usually cannot ignore the law when trying to get it.

Question for 100: Now that this talk is not being held, who will have the information, and who will not?

The same (good and bad) people have it without the talk, but the rest of the world does not. Although the risk level is still there, it's not increased. If TFA is correct and Siemens is working on a fix, then what's wrong with giving them the time they need and/or working with them?

I have a hard time believing that it took Siemens this long to develop a fix. The fact that Stuxnet was designed to compromise Siemens PLCs, and how it accomplished this, has been known for several months now. There's no excuse not to push out a (working) patch within a few months of a huge 0-day being discovered. To have not fixed this by now, especially given the critical applications some PLCs are used in, suggests negligence. Responsible disclosure says that you should give the responsible party a reasonable amount of time to fix the issue, and they have had it.

So it would decrease security to give that information to people who pay for a sec talk, people who are most likely sent there by companies, companies possibly that use the technology in question?

Let's think for a while: someone who wants to blow up a dam or nuke a power plant probably doesn't really care too much about "virtual trespassing", aka hacking, and the legal implications thereof, and neither would he bother to second-guess spending some 1000 bucks on someone who would provide this information, which someone surely would.

By not conducting the talk, the risk level is not increased, at least. It's still there, obviously. The companies that would have attended this talk should already be working on isolating SCADA and similar systems as much as possible, with or without the specifics of the talk. I doubt companies are going to be patching SCADA systems themselves without help from Siemens or their vendor. If Siemens is indeed honestly working on a solution, then _delaying_ the talk is entirely reasonable.