Cyber security experiment reveals threats to industrial systems

10/04/2013


How practical is it for individual companies to reduce their visibility? How do you do that?

Assante: If you’re web accessible, there are things you can’t do. You can’t hide that fact, but you can reduce the likelihood that somebody is going to correlate what’s there. As a hacker, I can see A, B, C, and D in your system, which leads me to believe that you are this kind of operation and I should use this tool on you.

The first thing you should be doing is looking at yourself and saying, “What am I telling people?” That’s the first thing to understand. Is there a reason I need to make that information available? Is there an operational benefit? If there isn’t, figure out how you can deny that information. Once you do that, stand back and say, “I did the best I could here. Now, what’s the next thing I can do to mitigate the risk?”

It seems that one of the toughest things for asset owners to determine is if they have experienced intrusions. Most companies aren’t going to set up a honey pot or honey net to determine if hackers have broken in or are trying to break in. But aren’t there easier methods? What about canaries?

Assante: A canary is anything that can send up an observable alert if anything happens to it. It can be as simple as putting a computer on a subnet that no other computer should ever access. If something touches it, you know the activity came from outside your normal behavior.
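The tripwire idea above can be sketched in a few lines. This is a minimal illustration, not a hardened monitoring tool: the host and port are hypothetical, and a real deployment would forward alerts to a logging or SIEM system rather than print them.

```python
# Minimal canary "tripwire" sketch: a listener on a machine that nothing
# legitimate should ever talk to. Any connection at all is an alert.
import socket
import datetime

def run_canary(host="0.0.0.0", port=2404, max_alerts=None):
    """Accept connections and record each one as an alert, because no
    legitimate system should ever address this machine at all."""
    alerts = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    try:
        while max_alerts is None or len(alerts) < max_alerts:
            conn, addr = srv.accept()
            alerts.append((datetime.datetime.now().isoformat(), addr[0], addr[1]))
            print("ALERT: unexpected contact from %s:%d" % (addr[0], addr[1]))
            conn.close()
    finally:
        srv.close()
    return alerts
```

Anything that trips this listener is, by construction, outside normal behavior, which is exactly the property Assante describes.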

Conway: If you have a network that uses all IPv4 or all Modbus for normal communication, you can put in a canary with listeners for all other protocols. If anybody talks to it using a different protocol, you know something's configured wrong or something worse is happening. Another possibility: most medium-to-large utilities have test networks, and attackers don't necessarily know that they are in a test network. So many companies are already running a honey net for all practical purposes, and they can install some of these canary devices there. If somebody is trolling around, he won't know it's a test network, and the test network doesn't actually have connectivity to real devices. To an attacker, it looks exactly like a real system. You should be looking for activity in the test networks all the time. Use the honey pots that you already have.
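Conway's wrong-protocol canary reduces to an allow-list check on observed traffic. The sketch below assumes flow records are available as simple (source, destination, destination-port) tuples, as a flow collector might export them; the addresses and the Modbus-only policy are illustrative.

```python
# Sketch of the "wrong protocol" canary idea: on a segment where all
# legitimate traffic is Modbus/TCP (port 502), flag any observed flow
# that uses anything else. Flow tuples are (src_ip, dst_ip, dst_port).

ALLOWED_PORTS = {502}  # Modbus/TCP only on this segment (assumed policy)

def flag_unexpected(flows, allowed_ports=ALLOWED_PORTS):
    """Return the flows whose destination port is not on the allow list.
    Each hit means a misconfiguration, or something worse."""
    return [f for f in flows if f[2] not in allowed_ports]

flows = [
    ("10.0.0.5", "10.0.0.9", 502),  # normal Modbus poll
    ("10.0.0.7", "10.0.0.9", 23),   # Telnet: misconfigured or hostile
]
print(flag_unexpected(flows))  # -> only the Telnet flow
```

The same check inverts naturally for a test network: there, the allow list is empty and every flow is worth investigating.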

Assante: You can find canaries that align with your skill set that you can set up and then watch and listen. You might not be able to do the forensic investigation afterwards, but at least you have a trip wire that says you might have a bigger problem. You can go to your supplier and ask, “Is our system supposed to do that?” That’s a very important capability.

Luallen: When you look at what you've got and the resources you have available, there's a strong incentive to avoid deploying additional equipment. This isn't a skill that you can just throw onto all your existing personnel without additional investments of training and time. When you look at the range of tools you might put in place, it's important to recognize what you already have. What skills and tools are already there, so you don't have to put in more systems and then have to manage them? The canary model is great for finding traffic that shouldn't be there, but to know what shouldn't be there, you need to know what should be there. That means knowing what you already have and how it communicates. Go down to the grass roots: What do I have, and how do those things talk to each other? And if you do put in a canary, what are you going to do when it detects something?

Assante: When you’re getting a new control system or you have come to a new situation with an existing control system, you have to establish your baselines. How does this work? What is required for it to work? What is spurious or unnecessary? You should be able to get this from your supplier, particularly during the procurement phase. There are tools available, like the SOPHIA tool from Idaho National Labs, that are designed to passively baseline your communications at the port and channel level. You have to build a profile of the system, and then you can tell when there’s a deviation. Most deviations are misconfigurations or somebody making a change in settings, but you still need to do something about it. You have to run it down and find out why it changed. That requires an investment in time and resources.
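The learn-then-detect workflow Assante describes can be sketched very simply. The real SOPHIA tool is far more capable; this is only a toy model of the idea, with hypothetical host names, showing a learning window that records known channels followed by a detection phase that reports anything new.

```python
# Toy passive-baselining sketch: record the set of (src, dst, port)
# channels seen during a learning window, then report any channel
# that falls outside that profile as a deviation.
class ChannelBaseline:
    def __init__(self):
        self.known = set()
        self.learning = True

    def observe(self, src, dst, port):
        """During learning, record the channel and return None.
        Afterwards, return the channel if it deviates from the
        baseline, or None if it matches the profile."""
        chan = (src, dst, port)
        if self.learning:
            self.known.add(chan)
            return None
        return chan if chan not in self.known else None

bl = ChannelBaseline()
bl.observe("hmi1", "plc1", 502)      # learning: normal HMI poll
bl.observe("eng1", "plc1", 502)      # learning: engineering station
bl.learning = False                  # learning window over
print(bl.observe("hmi1", "plc1", 502))    # None: matches the profile
print(bl.observe("laptop", "plc1", 502))  # deviation: new source on the PLC
```

As the text notes, most hits will be misconfigurations or legitimate changes, but each one still has to be run down.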

Luallen: You have to know what you don’t need. When somebody buys a new control system, during procurement they list all the functionality they need. By the time it gets on site, it has all sorts of other functionality. You have to ask your supplier what’s in there that you don’t need, because anything that’s in there, even if you don’t use it, has to be secured and maintained. There’s a major supplier of panel-based HMIs that is now including Adobe Reader in all its products. That is a horizontal application with a history of vulnerabilities, and in this situation the user may not even know it’s there, so there is little chance it will be patched. Unless you have a very good reason why you need it, take it off.
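Luallen's audit amounts to diffing what is actually installed against what procurement specified. A minimal sketch, with illustrative package names (the inventory would really come from the device or the supplier, not a hard-coded set):

```python
# Sketch of the "know what you don't need" audit: diff the software
# installed on a device against what procurement actually required.
# Everything left over is attack surface that must be secured,
# patched, or removed.
def unneeded_software(installed, required):
    """Return installed packages that were never required, sorted."""
    return sorted(set(installed) - set(required))

installed = {"hmi-runtime", "modbus-driver", "adobe-reader", "java-runtime"}
required = {"hmi-runtime", "modbus-driver"}
print(unneeded_software(installed, required))  # ['adobe-reader', 'java-runtime']
```

Anything this turns up is a candidate for removal unless there is a documented operational reason to keep it.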

So, ultimately, was this test a good idea?

Assante: I applaud the project in that we have very few learning opportunities in the industrial control system space. We have to learn what’s going on and then use that to determine how we defend these systems. Honey pots are good because the people owning the system don’t mind sharing what happened. We have to share it in enough detail that we can extract some lessons learned.

I did find it to be very informative. It will affect our future control system specifications.

Anonymous, 11/13/13 05:55 PM:

It is interesting that Time Magazine has a cover story in its Nov. 11, 2013 issue informing readers of the "Dark Web" (TOR network), whereas your Nov. issue details the hacking tools that have evolved some four years later. Good reporting on what is happening NOW.