Patching for Industrial Cybersecurity Is a Broken Model

It’s probably not a big surprise, if you stop and think about it, but recent research from Belden Tofino Security confirms that patching often fails to protect critical infrastructure systems from the multitude of vulnerability disclosures and malware targeting them today.

Not patching is obviously not an option either, but it’s easy to understand why companies, especially with the emergence of Ethernet as a single network within manufacturing plants, are moving to revamp their industrial automation network infrastructures to make them more bulletproof.

Eric Byres, CTO and vice president of engineering at Tofino Security, investigated the effectiveness of patching for protecting control systems from vulnerability exploits and malware. In a recent press release, he summarized the results of this work and revealed that:

The number of vulnerabilities in SCADA/ICS applications is high, with as many as 1,805 yet-to-be-discovered vulnerabilities estimated to exist on some control system computers.

The frequency of patching to address future SCADA/ICS vulnerabilities exceeds the tolerance of most operators for system shutdowns. Most industrial processes operate 24/7 and demand high uptime, and weekly shutdowns for patching are unacceptable.

Even when patches can be installed, they can be problematic. There is a one in 12 chance that any patch will affect the safety or reliability of a control system, and there is a 60 percent failure rate in patches fixing the reported vulnerability in control system products. In addition, patches often require staff with special skills to be present, and such experts are often not certified for access to safety-regulated industrial sites.

Patches are available for less than 50 percent of publicly disclosed vulnerabilities.

Many critical infrastructure operators are reluctant to patch as it may degrade service and increase downtime.

In June 2010, the Stuxnet worm triggered a worldwide sensation as the first publicly known root-kit attack targeted at industrial plants. According to Innominate Security Technologies, it infected tens of thousands of PCs, abusing and manipulating Windows-based automation software for its own purposes to ultimately infiltrate malicious code into the controllers of specific real-world industrial installations.

A white paper titled “Post-Stuxnet Industrial Security” from Innominate concludes that after Stuxnet, the threats in automation networks can no longer be ignored. But the real danger is not from Stuxnet itself, but from mutations created by imitators who could circulate other arbitrary, malicious code utilizing the same basic techniques.

Apart from the fact that PCs in industrial use often are not (and cannot be) equipped with antivirus software, Stuxnet has also made it clear that conventional virus scanners do not provide protection against attacks of this caliber. The retrospective analysis of Stuxnet has shown that the worm had been out in the wild unnoticed for at least 12 months before its discovery and had not been detected by antivirus programs during that period for lack of any known signatures for the malware.

I'm surprised there hasn't been more traffic on this post. Perhaps that's an indication of the problem. As the post suggests, security must be structure, not veneer. The notion of a security patch is akin to the notion that you can fix a leak in the basement with a bit of caulking.

As the old saying goes, 'it's all fun until someone loses an eye'. Even when a control system performs as intended, there's some chance of safety failures, but a system cannot be considered safe unless it is rendered immune to external influences. Yet most integrators and users feel comfortable with a poorly thought-out Maginot line of defence. In general, integrators and, worse, control system component suppliers (hardware and software) prefer to be agnostic to security issues, expecting someone else to somehow provide an adequate defence. This has got to change.

Many are the times I've sat through a presentation for an object/tag-oriented controls package where the entire emphasis is on how easy it is to 'see' data and how easy it is to implement changes. So often, object manipulation is devoid of any semblance of change management or even basic validation of parameters: can you configure an unstable condition on a servo axis (what's stopping you)? Can you do it while the equipment is running? How much basic authentication is required?
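The questions above suggest the kind of gate that parameter changes ought to pass through. Here's a minimal sketch; the parameter names, limits, and state flags are illustrative assumptions, not any vendor's API.

```python
class ValidationError(Exception):
    pass

# Assumed safe envelope for a servo axis (illustrative values).
SERVO_LIMITS = {
    "proportional_gain": (0.0, 50.0),
    "max_velocity_rpm": (0, 3000),
}

def set_servo_parameter(axis, name, value, *, equipment_running, authenticated):
    """Refuse changes that are unauthenticated, attempted while the
    equipment is running, or outside the validated safe range."""
    if not authenticated:
        raise ValidationError("change rejected: no authenticated user")
    if equipment_running:
        raise ValidationError("change rejected: equipment is running")
    lo, hi = SERVO_LIMITS[name]
    if not (lo <= value <= hi):
        raise ValidationError(f"{name}={value} outside safe range [{lo}, {hi}]")
    axis[name] = value  # commit only after every check passes
```

The point is less the specific checks than their ordering: nothing is written to the axis until authentication, machine state, and range validation have all passed.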

Very informative Al--great post. I'm not a programmer by any stretch of the imagination, but I have wondered why programmers don't incorporate virus protection as an embed in the programs they write. (NOTE: Maybe they do, but I'm not aware of it.) We depend upon external programs, e.g., Norton, AVG, Symantec, McAfee, etc., to provide protection, but these are not always effective and must be updated frequently, sometimes weekly. Also, are there any programs that will interrogate the IP address of the hacker or sender? Again, very informative.

The first step in preventing damage of any kind is to physically prevent access to the code that is to be protected. A physical, not software, key switch in the write-enable circuits makes any changes a lot more difficult, and is a good starting point. If all the adjustable parameters are located in a separate file, on an isolated system, that is fairly secure. Storing the parameters file on another write-protected drive is another step toward security. It used to be possible to physically break the write control line on a hard drive, but I don't think that the serial (SATA) interface has that option. So one may be out of luck there.
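The key-switch idea can be expressed in software terms as a hard precondition on every write. A minimal sketch, where `read_keyswitch` stands in for whatever driver call actually samples the physical write-enable input (an assumption, since that interface is hardware-specific):

```python
def write_parameters(path, params, read_keyswitch):
    """Write the parameter file only while the physical
    write-enable key switch is closed."""
    if not read_keyswitch():
        raise PermissionError("write-enable key switch is open; change refused")
    with open(path, "w") as f:
        for name, value in sorted(params.items()):
            f.write(f"{name}={value}\n")
```

The design choice here is that the switch is checked at the moment of the write, not once at login, so leaving the switch open makes the parameters effectively read-only no matter what software is running.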

Of course it is less convenient to have to physically operate a keyswitch to make changes, but compare that with the inconvenience of having a production line or machine destroyed. Remember that security, just like freedom, does not "simply happen". Neither freedom nor security is free.

The developer back-doors are often put in because of customer action. Every software developer has been asked by a customer at one point or another how to get into a system where authorized high-level user authentication (I'll whisper the word password) has been forgotten.

The customer asks, "I need to change a parameter in our process, but we haven't done it in so long we've forgotten the access codes. Can you reset it for us?"

The systems integrator can either say "sorry, good security procedures prevent us from doing that because we have no means to grant administrative rights". OR, the integrator can use the back door they've created for exactly this reason.
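There is a middle path between "sorry" and a static back door: a challenge-response recovery scheme, where the device shows a one-time challenge and only the vendor, holding a per-device secret, can compute the matching unlock code. This is a hypothetical sketch, not any vendor's actual mechanism; the key handling and code length are illustrative.

```python
import hmac, hashlib, os

def new_challenge() -> bytes:
    # the device displays this one-time value to the customer
    return os.urandom(16)

def vendor_response(device_secret: bytes, challenge: bytes) -> str:
    # the vendor, after verifying the caller's identity, computes the code
    return hmac.new(device_secret, challenge, hashlib.sha256).hexdigest()[:8]

def unlock(device_secret: bytes, challenge: bytes, code: str) -> bool:
    expected = vendor_response(device_secret, challenge)
    return hmac.compare_digest(expected, code)
```

Unlike a fixed back-door code, the unlock code is useless once the challenge expires, so it can't circulate on forums or be reused by a former employee.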

Machinery/systems security is hugely complicated. There is no silver bullet, and complicated is going to mean expensive.

Good points, GeorgeG. I would imagine those "service hatches" drive the IT department crazy. Also, the plant engineers like their uptime and don't want to sacrifice it just to let Microsoft load a patch and reboot.

Banning thumb drives makes sense, Rrietz. There are a bunch of different types of potential attacks. A number of them -- such as worms -- are not necessarily aimed specifically at a plant network, but hit the plant anyway. Others, perhaps more dangerous, are attacks that deliberately aim at the plant. Of these, the greatest threat, according to plant engineers I've spoken with, comes from disgruntled former employees.

Integration with factory automation of one sort or another, combined with more and more sophisticated use of networks, creates vulnerability. It's not uncommon for the local network to carry real-time control functions, including servo motor control and process sensor traffic such as image streams. Network congestion is also a threat. Control systems are also trending toward data-driven/object-oriented approaches, which move control from logic to data management and create a whole new class of vulnerabilities (while being extraordinarily advantageous in terms of ease of integration and implementing process diagnostics).

But there's also a kind of philosophical disconnect when it comes to security. There are two approaches: 1) rigorously validated, well-documented and published security methods, and 2) secret, unpublished methods. The latter is espoused by many large software providers and provides a commercial revenue stream for a number of commercial concerns. The problem with the latter approach is that system integrators end up installing security systems whose workings they don't understand. But, getting back to Rob's comments, you then end up with interoperating systems on a factory network that use different, often incompatible and proprietary security schemes. Make no mistake, creating the semblance of security is lucrative business; this is a risk in its own right: it breeds complacency, yet it's nearly impossible to get a guarantee from an OS provider or a security software provider that you won't be hacked.

Certainly, one valuable commodity to any hacker is data. The more 'routine' traffic that travels far and wide on a factory network, the more data anyone with a sniffer has to work on. Innocuous things like hourly batch status updates can be a gold mine.

User authentication is usually the flimsiest part of a system's defence. MIS and FICS systems in particular generally allow individuals who are not qualified process engineers to intrude into system controls. What factory operators generally need is a secure, factory-wide user authentication scheme, not an equipment-specific scheme that is inconsistent and invisible to corporate HR and corporate security systems. (The next time someone chirps the phrase 'password protected', give them one upside the head). Many's the time that a machine 'problem' has been fixed by reinstalling authorized software or validated recipe data, or simply removing 'forces' from PLC code where 'nobody changed anything'.
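A factory-wide scheme means each machine asks a central directory "may this person do this?" rather than keeping its own passwords. A minimal sketch; the directory contents, role names, and lookup are illustrative stand-ins for whatever central service (e.g. the corporate directory) a site actually runs.

```python
# Would live on the central server, not on the machine, so that HR
# deactivating an account immediately revokes plant-floor access.
DIRECTORY = {
    "alice": {"active": True,  "roles": {"process_engineer"}},
    "bob":   {"active": False, "roles": {"process_engineer"}},  # left the company
}

def may_change_setpoints(user: str) -> bool:
    """Central check: the user must exist, still be active,
    and hold the qualifying role."""
    entry = DIRECTORY.get(user)
    return bool(entry and entry["active"] and "process_engineer" in entry["roles"])
```

The payoff is exactly the disgruntled-ex-employee case raised above: an account disabled centrally is disabled on every machine at once, instead of lingering in a dozen equipment-specific password lists.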

Factory IT is often the security breaker. While it would be ideal for all critical traffic to be secure, authenticated, point-to-point communication, it imposes an administrative burden that no one, not even corporate IT, wants to take on. Generally, it's a tough job but nobody's going to do it.

You might also be surprised to know how many industrial controllers have break-in codes, i.e., 'secret' codes that permit access to the very core of the software (typically, some programmer left themselves a service hatch).

1) Most software developers and IT geeks, and even quite a few integrators, are unaware of any of the basic principles of machine safety; as a consequence, a lot of software is developed which isn't just insecure but which violates basic safety standards and regulations. It's not uncommon to find software with 'features' that violate basic 'prevention of unexpected startup' and 'single point of control' precepts.

2) Software developers are worst-case offenders, with unvalidated code releases (including the caustically named version 6.6.6, released Friday at 5:30 pm), a cavalier attitude to version control, and no respect for change authorization procedures. It's very common to find remote control capabilities that can be asserted without local authorization and without local indication as required by standards. Software guys will surreptitiously install code changes on a machine, altering its basic behavior and not alerting machine operators.

3) Everyone only suspects the bogeyman when often the threat is quite unintended. Egregious hobbling of a distributed control system can be the result of simple network traffic - something as simple as a FICS system capturing batch stats. In one instance, the miscreant was just a central server advertising its presence on the network by polling every node it could find until they all responded (97% of them had no need of the services offered) - every 200 milliseconds.

4) Almost every FICS I've seen relies on its ability to perform unsecured communication with control systems.

5) Many front ends and HMIs on industrial systems use insecure and breakable OSs and, worse, either don't incorporate security defenses or never update them. On older systems, the primary security feature may be that the OS is unexpectedly old, hence obscure.

6) Many system integrators are too cheap and/or inept to even implement properly configured managed switches in their distributed control network (a very basic defence against congestion and a means of bounding latency).

7) Many PLC vendors are security atheists and rely way too much on obscurity as a defence.
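The congestion threat in point 3 is easy to quantify with back-of-the-envelope arithmetic. The 200 ms poll period comes from the anecdote; the node count and frame size below are illustrative assumptions.

```python
def polling_load_bps(nodes: int, frame_bytes: int, period_s: float) -> float:
    """Aggregate bandwidth consumed by polling every node once per period."""
    return nodes * frame_bytes * 8 / period_s

# e.g. 100 nodes polled with 64-byte minimum Ethernet frames every 200 ms:
# 100 * 64 * 8 / 0.2 = 256,000 bit/s of pure overhead - before any replies,
# retries, or broadcast amplification, and all of it contending with
# real-time servo and sensor traffic.
```

Even modest numbers add up, which is why unsolicited periodic polling on a control network deserves scrutiny regardless of anyone's malicious intent.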

For many years I have been on a free mailing list from Secunia.com. During the last week, I counted 11 vulnerability reports on wireless routers hitting my inbox. I did not count industrial cybersecurity vulnerabilities, but the numbers are similar, if not higher. It takes just a few seconds to scan the daily emails and zero in on products you are involved in. Just being aware of the vulnerabilities is a great benefit to me in my job.

If I were in charge, I would totally ban thumb drives on an ICS system and move all data, patches, firmware updates, etc. through the network, where it can at least be scanned for viruses and other 'scumware'.
