Sorry to run with the Triangle theme, but Jason's talk really illustrates the third point of control systems insecurity.

First, we had Stuxnet, which showed that PLCs have no process control integrity. That is, a PLC can modify process control data as represented to the HMI. Without using a tool like Langner's Control Integrity Checker, you can't really know whether the logic your PLC is running is the logic you're expecting (thus, you can't know if you've had this attack performed against you). The downsides to using the CIC are that a PLC rootkit could hide malicious ladder logic from the checker, and of course CIC has to be implemented for your PLC if your PLC is not made by Siemens.
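The core idea behind a logic-integrity check can be sketched in a few lines of Python. To be clear, this is not Langner's CIC, just the concept: pull the logic image back from the controller and compare it against a known-good copy. The caveat above applies in full: a rootkitted PLC can lie during the read-back, so a clean fingerprint is necessary but not sufficient.

```python
import hashlib

def logic_fingerprint(logic_blob: bytes) -> str:
    """Hash a ladder-logic image read back from the controller."""
    return hashlib.sha256(logic_blob).hexdigest()

def verify_logic(readback: bytes, known_good: bytes) -> bool:
    # A compromised PLC could serve up the clean logic during read-back
    # while running something else entirely, so treat a match as
    # "no evidence of tampering," not "no tampering."
    return logic_fingerprint(readback) == logic_fingerprint(known_good)
```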

I gave a talk two months or so ago at the Embedded Device Security Conference entitled 'Hacking Your Control System at Level 2'. I released a silly little tool as part of the talk, one that probably hasn't gotten enough attention, even from me. That's a shame, because the tool is kind of interesting, and covers ground that we as security people "all know," but that we tend to forget about when we are doing security engagements for industrial customers.

The tool is the Modbus VCR. It is a plugin for the Ettercap framework that records Modbus/TCP traffic (or really any cyclic traffic) between a client and a server for a period of time, and later replays the recorded protocol state. The purpose of the tool is to show a really old, really dumb problem with control systems protocols: the lack of data integrity isn't just about control, it's about status, too.

This week's 'security news that fell through the cracks' is a vulnerability in GnuPG: CVE-2013-4402 is a curious little bug that allows a maliciously-formatted PGP message to consume unbounded resources on the system parsing it.

The idea that there are parsing bugs in OpenPGP messages and keys shouldn't be a terrible surprise. The specification defining the format for an OpenPGP message (RFC 4880) is a touch complex, and plenty of implementations get things wrong.

Take the PGPDump utility, which has the sole purpose of parsing OpenPGP messages. It makes the unfortunate decision to use signed integers throughout its packet parsing, to ill effect. To see some problems for yourself, run pgpdump on Manual.gpg, a maliciously-crafted document (don't worry, it isn't terribly malicious -- it simply contains a large size in Field 1, which results in pgpdump reporting a negative size for the file stream). It is meant mostly as an example of why defining complex file formats and implementing the parsing and generation engines in the C language can be a daunting task.
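The pitfall is easy to reproduce. The exact packet inside Manual.gpg isn't shown here, but the class of bug is just a signed/unsigned mismatch on a four-byte length field, which Python's struct module can demonstrate directly:

```python
import struct

# A 4-byte big-endian length field with the high bit set, as an
# OpenPGP packet is free to contain.
raw = bytes([0xFF, 0xFF, 0xFF, 0xF0])

# Parsed as the spec intends (unsigned), the length is merely huge:
unsigned_len = struct.unpack(">I", raw)[0]   # 4294967280

# Parsed into a signed 32-bit int, as C code using a plain 'int' does,
# the same bytes come out negative -- hence pgpdump's negative sizes:
signed_len = struct.unpack(">i", raw)[0]     # -16
```

A negative length that later feeds a loop bound or an allocation size is where "amusing output" turns into a real vulnerability.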

The GnuPG programmers are crazy smart, plenty paranoid, and way better C coders than I could ever hope to be. But they are human, and I'm sure they do their best to secure the tool. I remain hopeful that this little bug will turn a lot of bug-hunters' eyes to GnuPG, to squash any other bugs that might be lingering around the codebase.

A few months ago, I purchased a fun toy: an MSR606 magstripe writer. As 'tech' goes, it doesn't rank very high: it simply reads and writes magnetic cards of the sort that typical US credit cards, debit cards, and hotel room keys use. I purchased it to explore a potential vulnerability in a sales kiosk that, fortunately, proved to be unexploitable.

The amusement has come from using my own duplicated credit and debit cards. My wallet currently has a few of these blank, white cards that contain the data from my own credit cards. I have been using them constantly over the last two months.

The depressing part of this is that I've gotten quite good at using them. I make it a point to use one in every transaction that requires handing the card over to a human being. I sometimes get quizzical looks: "What is this?" "A credit card." "Uh..." "It's a Mastercard, the Carte Blanche," I reply, "Just swipe it and it will work."

I was rather happy to see a news headline touting that the FDA would begin regulating safety and security in medical software. Upon reading the full story, however, I have to say that I'm rather disappointed.

Why disappointed? Because mobile platforms are not designed to do anything critical. Take for example this excerpt from the End User License Agreement of iOS 7:

"""

7.5 YOU FURTHER ACKNOWLEDGE THAT THE iOS SOFTWARE AND SERVICES ARE NOT INTENDED OR SUITABLE FOR USE IN SITUATIONS OR ENVIRONMENTS WHERE THE FAILURE OR TIME DELAYS OF, OR ERRORS OR INACCURACIES IN, THE CONTENT, DATA OR INFORMATION PROVIDED BY THE iOS SOFTWARE OR SERVICES COULD LEAD TO DEATH, PERSONAL INJURY, OR SEVERE PHYSICAL OR ENVIRONMENTAL DAMAGE, INCLUDING WITHOUT LIMITATION THE OPERATION OF NUCLEAR FACILITIES, AIRCRAFT NAVIGATION OR COMMUNICATION SYSTEMS, AIR TRAFFIC CONTROL, LIFE SUPPORT OR WEAPONS SYSTEMS."""

Yesterday ICS-CERT released an updated advisory about the Schneider Modicon Quantum Ethernet boards. The advisory is vaguely-written and hides the fact that Schneider's firmware update breaks important functionality.

From the advisory:

"""This upgrade includes a new feature that allows the user to enable or
disable both the FTP and HTTP services on the modules. Disabling these
services will mitigate the vulnerability mentioned above. The following
products support the HTTP and FTP service enable and disable feature:
"""

The Tofino blog has a post by Bob Radvanovsky that raises an important issue for many ICS owners and operators. I've seen internet-connected water control systems with my own eyes, and have reported everything from building management systems to electric substations directly connected to the internet to ICS-CERT, vendors, owners, and anybody who will listen.

It's a great irony to me that the blog post appears on Tofino's website. I have a great deal of respect for Tofino's hardware and software, but their overarching message concerning field device security is muddled. Eric Byres and other SCADA Apologists say that removing insecure-by-design field devices in favor of secure-by-design ones is simply impossible for asset owners.

People often ask how I write PLC hacking tools while on the road. The answer used to be VPN access -- I would run a small OpenVPN server on my home network with a bunch of PLCs connected to it. The setup included an Arduino with an Ethernet shield, which controlled a relay to turn power on and off for the various PLCs, and a Linux server to run attack tools from (some attacks, like ARP poisoning, just don't work over a VPN tunnel). The great folks at Tenable and Rapid7 loved the setup when we worked together on Nessus and Metasploit module development.

When I described the setup to a friend in the business, he said, "I would pay for access to that."

A major problem with finding security issues in industrial controllers is cost. You can find backdoor accounts and service stupidity easily enough with firmware analysis, but sometimes the more fun stuff (such as ladder logic upload over the normal 'SCADA' protocol) is a bit more tricky to find via straight binary analysis. If you spend just a few minutes with a live device, a lot of these issues fall out quite quickly.

So I am giving it some thought: would anyone be interested in access to a PLC VPN? It would have a few industrial Ethernet switches (one currently has some 0-day that needs to be coordinated), multiple PLCs, and a small server running a Windows Terminal Server and a Linux VM for configuration software and hacking tools. It would also (of course) have a relay board for rebooting PLCs and RTUs when they inevitably crash. I could even wire it up to some real-world stuff: PLCs connected to lights and stepper motors, plus a webcam, so that you could watch what happens to a PLC in its various failure modes.

Is anyone interested in such a thing, to be used as a hacking playground? Would you be willing to kick in a few dollars to make it happen?

Halvar Flake gave a thought-provoking keynote at SOURCE Dublin this year. His premise is thus: in the past, shipping by sea was woefully insecure. Nations decided to create formal navies, recognizing that safe shipping was good for commerce.

Cue analogies to the new NSA Data Center in Utah, as well as projects like Perfect Citizen. Of course all physical analogies break down a bit once the term 'cyber' rears its ugly head, but in a way this all makes sense. Sure, utilities, banks, and other 'critical infrastructure' can never be physically moved to a handful of highly secure ports, but logically perhaps they could be.

An article last week got the Cyber Pacifists group in a tizzy. A congressional committee reported that utilities experienced over 10,000 'cyber attacks' per month.

My immediate and admittedly snarky reaction was to look through my log files. Last week I experienced 34,000 SSH bruteforce attempts originating from 52 hosts in 18 countries. I also experienced numerous web server attacks. This is on my single colocated server.

I could use the term 'cyber attack' to describe what I face every day, but it's a bit counterproductive. I don't call my wife a criminal even if she does get parking tickets from time to time (she pays them right away, don't worry). Similarly, 'cyber' fatigue will cause both companies and our representatives to make stupid decisions: to not focus security dollars on the areas where they are needed most.

Indeed, most of the hacking going on right now -- spear phishing campaigns combined with intellectual property theft -- isn't really attacking either. It's espionage. It's weird espionage, to be sure: when in history have governments been so interested in stealing the research of commerce? The 'who is doing it to whom' is interesting in its own way, because it says something about globalization and how important governments really are anymore, but that's beside the point.

The only real 'cyber attacks' to date have been the well-known Stuxnet example, and the still little-talked-about Syrian radar example (and how would we know if the latter is even true?).