Archives for September 2009

For the past few years there has been a rise in cyber criminals attacking systems for profit. Many of the financially motivated attacks, like the TJX breach, have been well publicized. It appears that as attackers learn how to profit from their exploits, their illegal activities tend to increase as well. An attacker may start simply by compromising a system, then move to stealing personal information from the user, which can be sold. Once the attacker has compromised enough systems to create a botnet, he can rent them out.

We have not yet seen, as far as I am aware, any financially motivated attacks on control systems. An interesting example of a financially motivated cyber attack on energy companies would use data from the control network without actually affecting the control network. Traders within a company may use data from a historian, such as PI, to help decide when to buy energy from other energy producers and when to sell excess energy. A historian that is shared with the trading group is typically set up in a DMZ between the corporate network and the control network. If an attacker gained access to the historian, he could view the same data the traders use to make their trade decisions.

A patient attacker could watch what occurs on the historian and look for trades occurring. The attacker could then use the information to make his own purchases, or he could begin manipulating the data the traders see in order to alter the choices the traders will make. The attacker could manipulate the data so traders purchase power at the wrong time or purchase from a particular energy producer. Two motivations for an attacker to target energy trading are corporate sabotage and profit. A competing corporation could influence the target company’s decision making, forcing bad choices and hurting the company financially. The second, and more likely, reason would be to profit off of the company’s purchases and sales, either by buying stock in certain companies or by shorting a company.

This attack requires a great deal of knowledge of a company’s trading habits. An attacker would have to spend a lot of time on the network or possibly have insider knowledge. While I think it would be an interesting attack, the amount of time necessary to learn the system would prevent it from being common. It may be a while before we see anything like this in the SCADA sector, as there are so many other places for an attacker to look in order to make quick money.

Normally we would not comment on a marketing press release, but this is Cisco and even a marketing effort from a giant like that can have a big impact.

Recently Cisco’s services group announced grid security services (hat tip: Matt Franz, @frednecksec). These services include cyber and physical security services and even mention compliance services. In my admittedly dated Cisco experience, their services were primarily focused on selling more products, either directly or by making it easier for customers to understand and integrate their products.

For example, SCADA, Supervisory Control And Data Acquisition, requires its own firewall service for intrusion protection. Cisco can offer firewalls specific to those protocols, allowing the utility to better manage access to different information, Lasser-Raab said.

The bigger win for Cisco is not security products or services, but infrastructure products. One of the interesting things to learn when we perform assessments at field sites is whether the asset owners believe Cisco equipment is ruggedized enough to survive in that environment. Many say it is and have deployed systems without problems. Others say it is not, and they go for a more industrial solution like those from RuggedCom. If Cisco can get more people speaking the language and understanding the concerns of the control system market, perhaps they can make a more convincing argument.

I was out at EnergySec in Seattle last week, and tweeted on it @digitalbond.

An INL presentation showed that they have found about 325 vulns in the control system assessments they have performed over the last four years. This revived my long-held and oft-stated frustration about who gets this information. When INL does a vendor assessment, it is frequently paid for in part or in full by the US Government. Your tax dollars at work.

INL signs an agreement with the vendor being tested that the results will only be shared with the vendor and the sponsoring USG agency. So the vendor has sole authority on what is done with discovered vulnerabilities. Some have chosen to address the vulns, and then provide the full report along with the fixes or corrective actions to their customers under NDA. We know this because they have INL provide the full report directly. Bravo.

However, some have chosen to provide only the positive excerpts or highlights of the report and remove any vulns or problems they do not intend to fix. Even worse, they can say their system has undergone INL testing, giving it some implied certification.

Other vendors have chosen to fix problems in their systems but not tell the customers about the security problems or corrections: the dreaded silent fix. Owner/operators using the system often choose not to upgrade to a new version absent a compelling reason, so key security fixes are not factored into the upgrade decision.

Allowing the vendor sole authority over how the results are shared may have been necessary when the program started, although even this is debatable. Now, however, the INL test program carries so much weight with potential and existing customers that INL and the USG have more negotiating clout. They could require more information sharing with the affected customers. For example, the vendor could be given six months to address all findings or develop compensating controls before the vendor or USG must share the information with affected users.

Is an owner/operator better off knowing about an unfixed vuln, or is it better to keep this information within a small sphere inside the vendor? After all, the more people who know something, the more likely the information will leak out to bad actors. I would argue that an owner/operator needs to know the information. If INL could find a vuln, then others with access to the system could as well. As an owner/operator, I want to know the vulns so I can make risk decisions on compensating controls and my use of the system. I have yet to see a vuln that does not have some compensating control, so don’t tell me you are keeping information from me for my own good. Owner/operators are not children.

So why is the information not being shared with affected owner/operators? I really don’t know. From discussions over the years, I’m convinced that the INL researchers want it shared. After all, who wants to keep their results from those they could help? I’m also convinced the government wants to share it. The vendors don’t want to cede control over the results, but again, the program has so much clout now that some delayed disclosure is possible.

I don’t know why we are stuck in this vendor-discretion model, but my best guess is that the financial and legal people who run INL are the impediment. Changing the status quo has no legal or financial benefit to the lab, and they can argue that it exposes them to increased legal and financial risk. With no benefit and even a small risk, why rock the boat? Absent some pressure from the USG, I see no chance for change. Maybe at the next Congressional hearing some panelist can ask why a National Lab is not required to share known vulnerability information with affected critical asset owners when the work was paid for by the USG. As much as I grouse about Congress getting involved in control system security, a call from the right Senator or House member may be the only thing that can change this model.

I’m out at EnergySec in Seattle and gave a one-hour presentation yesterday on our Bandolier, Portaledge and Quickdraw projects.

Our approach to control system security research is to extend existing tools and applications in two ways.

1. Add control system intelligence to existing IT security tools.

Bandolier extends the popular Nessus security scanner to audit the hundreds of security settings in a control system component against an optimal security profile. Quickdraw extends the Snort network IDS to understand and decode control system protocols such as EtherNet/IP, DNP3, ECOM and Modbus TCP. We developed Snort preprocessors and plugins that can be used in Quickdraw for IDS/IPS signatures, and also for application intelligence in a field firewall.
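To make “decode control system protocols” concrete, here is a minimal sketch in Python, not Quickdraw itself, of parsing the Modbus TCP MBAP header that a preprocessor would inspect. The field layout follows the Modbus TCP specification; the example frame and function names are illustrative only:

```python
import struct

def parse_mbap(packet: bytes) -> dict:
    """Parse the 7-byte Modbus TCP MBAP header plus the function code.

    Layout (big-endian): transaction id (2 bytes), protocol id (2),
    length (2), unit id (1), then the PDU beginning with the function code.
    """
    if len(packet) < 8:
        raise ValueError("packet too short for MBAP header + function code")
    tid, pid, length, unit = struct.unpack(">HHHB", packet[:7])
    if pid != 0:
        # Protocol id is always 0 for Modbus; anything else is suspect.
        raise ValueError("protocol id must be 0 for Modbus TCP")
    function = packet[7]
    return {
        "transaction_id": tid,
        "length": length,
        "unit_id": unit,
        "function": function,
        # Function codes >= 0x80 indicate exception responses.
        "exception": function >= 0x80,
    }

# Example: a Read Holding Registers (0x03) request for 2 registers at address 0.
frame = struct.pack(">HHHBBHH", 1, 0, 6, 1, 3, 0, 2)
print(parse_mbap(frame))
```

Once the header is decoded like this, an IDS rule can match on fields such as the function code (e.g., alert on write functions from unexpected hosts) rather than on raw byte offsets.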

We make all of our research available via subscriber access that costs $100/year. For that price your company can download and use all of the tools wherever you want. The fee is not a money maker; it is actually designed to limit support costs to serious industry players. You would be amazed at how this small subscription fee reduces support costs and whittles down requests.

Recently I was called by a reporter from a major news organization who, I understand, has been calling many in the control system community for a potential story. He was hunting for an unreported, clear and vivid example of a successful cyber attack on a critical infrastructure control system that had serious consequences, to build the story around. All they needed was one titillating attack to make the story, and evidently this had proven to be difficult.

My question to him and to loyal blog readers is: what if you find this example? What are we supposed to draw from it? One of a large number of complex systems highly reliant on IT hardware and software was compromised. I’m shocked! This would likely be true even if we did a great job of cyber security in control systems.

There is a story in how vulnerable control systems are, but until actual threats and compromises take place it does not have the required sizzle, and it requires too much technical detail for the average reader.

Now if the news organization showed a significant and growing number of attacks targeting critical infrastructure control systems, that would be interesting and valuable. Or investigations showing bad actors studying control systems. Or … The community is in desperate need of credible threat information, and then the stories to drive it home to the decision makers.

Being one of the people who tends to be more interested in the pointy end of the security stick, I’ve been looking forward to this training material being released since I first heard of it several months ago. The good folks over at Offensive Security have put together a great training course, and the base material (everything except the videos and a pdf with all the info) is free, though donations to HFC’s food program are encouraged.

So that means a great tutorial on how to use exploits, right? A little, but that isn’t really where the strength of the course, or of Metasploit, lies. The collection of exploits is great and lets you quickly demonstrate the real risk that unmitigated vulnerabilities could have, but lots of tools and scripts do that. It’s the framework itself: the libraries, meterpreter, the scripts, the community, and the way it lets a researcher or penetration tester leverage all of those to move through the phases of discovery, exploitation and post-exploitation much more quickly, with fewer ad-hoc, one-off methods, and with a huge amount of flexibility and extensibility, as evidenced by the number of tools built on and with it. But if you haven’t been using the framework for a while, it can feel a little like jumping into the deep end. That’s where this material comes in, taking you through example after example of how to use the framework for everything from fuzzing to exploit development.

That said, this isn’t material for a beginner. If you aren’t familiar with the basic principles and techniques of exploitation already, you should probably skim it over to whet your appetite, bookmark it, and come back later when you’ve got some more knowledge and experience. To get that you have a few options to choose from, including our own Advanced Control System Testing class, offered at S4 and on client sites throughout the year, which focuses on fuzz/negative testing and exploit development in control system protocols and components; various other classes (pick one with lots of lab work and practical demos or you’re wasting your time); and a huge amount of material online.

A nearly year-old Chinese academic paper got a bunch of publicity in recent weeks as New Scientist spotlighted the paper, “Cascade-based attack vulnerability on the US power grid.” With a title like that it was bound to cause a stir. A pair of researchers at the Dalian University of Technology were able to garner sufficient information from publicly available data (can you say Google?) to model how to create cascading power failures for the west coast of the United States, failures similar to the big NE blackout of 2003.

As one who has searched for and found reams of similar data, my initial reaction was “that is no surprise.” Many voices in our industry have noted that by digging deep into the internet one can find just about every piece of information imaginable about our power grid: information running the gamut in granularity from the general topology and interconnects of a regional ISO to the topology, IP addresses, makes, model numbers and names of the field devices at a generation facility. It is all out there.

Granted, many asset owners have performed a large amount of “sanitization and cleanup” on the information they have released in the past, some even going so far as to seek redress in the courts to remove their proprietary data from the eyes of the public. Many (but not all) asset owners are now very good about minimizing data released to the public about critical assets. Sadly, once the genie is out of the bottle it is impossible to put back in. Old and new data about the power grid is still plentiful for those willing to dig a bit.

As the dissemination of data is impossible to undo, mitigating the impact of open source analysis becomes a twofold challenge, both physical and cyber in nature. Common practice countermeasures in the cyber realm, such as defense in depth, perimeter hardening, and even isolating plant and distribution networks, go a long way in reducing cyber risks. Barriers, locks, and monitoring, the old “guns, gates and guards” approach, bolster physical security.

As these mitigations are deployed, it seems best not to broadcast the changes to every possible outlet. A little discretion in what we reveal about critical assets makes the job of creating an attack plan for cascading failures that much more difficult.

Someone needs to tell me where the downside is with products like CoreTrace Bouncer. I’ve tried to be skeptical of application whitelisting, but the more I see, the more I like it. Recently I had the opportunity to see Bouncer demonstrated on a Yokogawa Centum DCS. I’ve seen lab demos before, but this was the first time I had seen it in the context of control system servers. My overall impression: this is an elegant and effective solution to some of the security challenges we face with Windows servers and workstations in control systems.

Perhaps the simplest definition for application whitelisting is that you allow the known good programs to run and nothing else. Blacklisting, on the other hand, is the traditional approach where you allow everything to run and then identify known bad things using anti-malware software. The trouble with blacklisting is that the list is moving and ever-expanding. For some additional perspective on this, see #2 in Marcus Ranum’s “The Six Dumbest Ideas in Computer Security”. One of my favorite quotes from that article:

Why is “Enumerating Badness” a dumb idea? It’s a dumb idea because sometime around 1992 the amount of Badness in the Internet began to vastly outweigh the amount of Goodness.

Whitelisting skepticism has always centered around management and performance. How do I allow people to do their jobs, and is having to check the whitelist every time I click on myprogram.exe going to slow things down? With Bouncer, there are a number of ways to allow people or programs to make changes by designating them as trusted entities. This is critical in enterprise networks, but because of the deterministic nature of most control systems, I think it’s less of an issue. As for performance, the magic is that the core whitelisting part of Bouncer loads in the kernel. It is not, for example, modifying the actual ACLs of hundreds of binaries in the system; it controls their execution from a lower level.
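The allow-known-good idea itself can be sketched in a few lines. This is a user-space Python illustration with hypothetical names, not how Bouncer works internally; Bouncer enforces its policy in the kernel, which is what keeps the per-execution check cheap:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the binary's contents so the whitelist matches content, not filename."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

class Whitelist:
    """Default-deny execution policy: only pre-approved hashes may run."""

    def __init__(self) -> None:
        self.approved: set[str] = set()

    def approve(self, path: Path) -> None:
        self.approved.add(sha256_of(path))

    def may_execute(self, path: Path) -> bool:
        # Anything not on the list is blocked, including malware
        # that no blacklist signature has ever been written for.
        return sha256_of(path) in self.approved

# Demo with two stand-in "binaries" in a temp directory.
tmp = Path(tempfile.mkdtemp())
good = tmp / "hmi.exe"
good.write_bytes(b"trusted operator console")
bad = tmp / "dropper.exe"
bad.write_bytes(b"never-before-seen malware")

wl = Whitelist()
wl.approve(good)
print(wl.may_execute(good))  # True
print(wl.may_execute(bad))   # False
```

Note that hashing content rather than trusting filenames means a modified or replaced binary fails the check even if it keeps the same name, which is exactly why whitelisting sidesteps the ever-expanding blacklist problem.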

So let’s break it down: we have Windows machines in control environments that are difficult to patch. Delivery of AV signatures, not to mention the overhead of running AV to begin with, is also painful. Finally, the long life cycle of these systems means we’re dealing with old OS versions in many cases. Introduce Bouncer, a solution that defeats malware to the point that patching may be irrelevant, and it works with Windows versions back to NT 4.0. CoreTrace admits they didn’t go out targeting our market, but it’s easy to see why over half their new clients this year are utilities and other organizations trying to solve control system security issues.

If you have NERC CIP responsibility, some light bulbs are probably going off about now. Can I deploy a product like Bouncer and not have to do AV updates and patches? The CEO of Encari (Matthew Luallen) and the Midwest-ISO chairman (Paul Feldman) make a case for meeting “both the spirit and letter of the law” in this whitepaper: Malicious Software Prevention for NERC CIP-007 Compliance. The case is pretty clear for anti-malware. For patching it may at least buy you some time as a compensating control. The Luallen/Feldman paper says this regarding CIP-007 R3:

By preventing the execution of malware — including those that are deposited via vulnerabilities that haven’t been patched or via memory-based attacks like DLL injections — application whitelisting is a compensating control until the PCS vendor approved security patches are installed during regular maintenance windows.

Incidentally, the Emerson Process Management group put their vote in for whitelisting by including Bouncer as the anti-malware component of their Ovation Security Center product.

Between doubts about the effectiveness of anti-virus and the current security and compliance challenges faced in control systems, there are some compelling reasons to have a look at application whitelisting.

(Full Disclosure: CoreTrace has been an advertiser on the Digital Bond site)

I’m out at the OSIsoft T&D Users Group in Portland this week. Transpara, one of the OSIsoft partners, is showing PI displays sent to BlackBerries, iPhones and other mobile devices. People were walking up and getting demos right on their phones. Essentially you navigate to a web page on a web server and download your display, after authenticating and having the required authorization.

Transpara is taking the position that the connectivity and security of access to the web server is the end user’s decision. You can require VPNs, strong authentication, … You can leverage integration with the BlackBerry Enterprise Server and Active Directory. All Transpara cares about is that you can get to their web server somewhere on the asset owner’s network, which gets process data and KPIs from PI.

Now I know there will be a tendency for many to say this is terrible: process data sent over the Internet and mobile networks. Others will say this is cool and they need it now.

The real answer is it depends.

Is there a true business need for this information to be available anytime/anywhere on a mobile device?

What would be the impact if the data is disclosed? Does it have short term value? Long term value? Is it business sensitive? Would it aid an attacker if known?

What is the likelihood of compromise? Where is the web server located? What is the security between the mobile device and the web server? What is the security of the mobile device?

We would focus on the true need for this info and the impact if this info is lost. We then would see if security controls could be put in place to reduce the risk to an acceptable level.


About Us

Digital Bond was founded in 1998 and performed our first control system security assessment in the year 2000. Over the last sixteen years we have helped many asset owners and vendors improve the security and reliability of their ICS, and our S4 events are an opportunity for technical experts and thought leaders to connect and move the ICS community forward.