
Thursday, March 31, 2011

Yesterday the RegInfo.gov web site announced that the Office of Management and Budget approved the EPA’s NPRM (Docket #: EPA-HQ-OAR-2008-0321) for the ‘2011 Critical Use Exemption from the Phaseout of Methyl Bromide’. According to the Administration’s Fall 2010 Unified Agenda, this NPRM was scheduled for publication in December of last year with a final rule publication date in May of this year.

Schedule for Rule

Since OMB approved this NPRM ‘Consistent with Change’, it will be at least next week before EPA can get it published in the Federal Register. Even with an emergency 30-day comment period, there is no way that a final rule can be published until June or July. The importation or manufacture of MeBr for the 2011 season cannot begin until the final rule is published. There should be ‘adequate’ stocks of the material on hand for the pre-plant fumigation season.

Methyl Bromide and Appendix A

MeBr Rant Warning: Once again I want to point out that DHS did not include methyl bromide, a dangerous toxic inhalation hazard (TIH) chemical, on its list of DHS Chemicals of Interest (Appendix A to 6 CFR part 27) because it was being ‘phased out’ of use by law and international convention. It is produced, stored, transported and used throughout the country and it should be included in Appendix A.

Because MeBr is an environmental hazard (an ozone-depleting chemical) as well as a human hazard, it could be expected to draw the ire of radical environmental activists. And since it is mainly migrant farm workers who are exposed to the chemical in the workplace, radical immigrant-rights activists object to it as well. All of which raises the risk of attack on facilities that make, store, transport or use MeBr by wacko-fringe elements of those activist organizations; not to mention the garden-variety terrorist who just wants to kill people with poison gas.

Until MeBr is actually phased out (and environmental activists in California are slowing that phase out by their objection to the use of methyl iodide, a not-quite-so-toxic nasty TIH chemical) and the existing stocks disposed of or used, MeBr should remain on the Appendix A list of DHS chemicals of interest.

I was hoping to watch Under Secretary Beers and Deputy Under Secretary Reitinger appear before the Homeland Security Subcommittee of the House Appropriations Committee this afternoon to discuss the National Protection and Programs Directorate FY 2012 budget request. Unfortunately, sometime between my blog post on Saturday announcing this hearing and 11:30 this morning, the decision was made to close this hearing to allow the discussion of classified information.

I am sure that NPPD has plenty of classified/sensitive data that should only be presented to Congress behind closed doors, but I am kind of surprised that the initial budget appearance for NPPD is a closed hearing. This would be more typical of an intelligence agency budget hearing.

Actually, the more I think about it the odder this becomes. There are no ‘black’ programs (one would hope) being run out of NPPD, so there are no classified programs to be addressed in the budget. There may be classified intelligence related to some programs, but that should not have a major impact on the budgetary process. Much of the information provided by covered facilities to the programs managed under the Directorate is unclassified but sensitive (SSI or CVI, for example), but again, that should not have a major effect on the budgetary process.

It is not unusual for the answer or portion of an answer to a direct question by a committee member to touch on classified information. Typically the witness demurs from answering that question in open session and offers to provide that information in writing, in a private session, or in a separate closed session for the whole subcommittee depending on who is actually interested in the answer. To have the entire initial session closed is very strange and very suspicious.

Even if Beers had live intelligence information about an imminent attack, that would not be presented to this Subcommittee, and certainly not in a budget hearing for the next fiscal year. This is just too strange. My friends worried about black helicopters might have some ideas….

Last week Matt Franz, a reader, took me to task in a Tweet about a blog entry for mentioning ‘terrorism’ in a discussion about cyber security. He apparently felt that including terrorism sensationalized the discussion and made it less likely that it would be taken seriously. I understand his point; there are not really that many people in the cyber security community who consider terrorism to be much of a threat in their realm.

Not a Terror Threat?

I think that there are generally three reasons for this point of view in the cyber security community. First, I know of no instance of a cyber attack that was related to terrorism. Many people with technical backgrounds are more comfortable basing predictions about future events on past history. The probabilistic tools that they routinely use in their technical lives rely on a history of past occurrences to predict the likelihood of future activity.

The second reason is that I think many people in the cyber security community equate terrorism with a certain lack of technical sophistication, assuming that terrorists would not have the technical expertise to effect a cyber attack. Part of this is due to equating terrorists with countries like Afghanistan and Somalia where there is not a strong history of technical development.

Finally, I think that there is the cultural assumption that anyone with the technical expertise necessary to execute a cyber attack is part of the community and thus has a vested interest in maintaining the current political/social structure that terrorists are trying to tear down.

Technical Background

These points all rely on a misunderstanding of the historical reality of revolutionaries and terrorists. First, everyone must remember that most large terrorist organizations are at heart revolutionary movements. As such, they draw their leadership from the political and economic elites. While the revolutionary (or terrorist) foot soldiers may be little more than cannon fodder, frequently poorly educated and economically disadvantaged, the leadership of these movements is almost always college educated and comes from the societal ruling classes.

Al Qaeda is no exception. Bin Laden was trained as an engineer and came from one of the politically elite Saudi families. Petrochemical engineering and medicine are two very common backgrounds in the leadership of the organization.

Anyone who has spent any time on American university campuses is well aware that a large number of people from countries all over the world come to this country for a wide variety of technical training. That training certainly includes software engineering and programming. While many of these people stay in this country for their subsequent employment, many return home. It would be the height of arrogance to assume that none of these people join radical organizations.

Finally, we have seen numerous reports on the use of the internet as an organizing and recruitment tool by al Qaeda and its affiliates. While the production of web sites and the use of the internet do not require the same skills as hacking, it would be silly to assume that there are no members of these terrorist organizations with the skills necessary to become hackers.

Tools for Sale

Finally, we must remember that the recent history of cyber security has been marked by the development and commercialization of exploit tools. While many of these tools are used primarily by security researchers and security vendors, many are generally available and can be used by personnel with much less technical expertise than is required to develop the tools.

This further lowers the bar that protects chemical facilities against cyber attack and makes it easier for less organized terrorist groups and even potential lone wolf attackers to execute a relatively sophisticated attack. As more and more vulnerabilities are found, publicized and weaponized it will soon become apparent that a cyber attack will be easier to successfully execute than trying to get past the physical defenses to place an IED or VBIED where it will do the most good.

Crooks and Competitors

Most cyber security experts expect that personal-glory hackers, crooks or even commercial competitors will be the more likely attackers exploiting control system vulnerabilities. And I certainly agree that these will be the more common exploiters of cyber security shortcomings. They don’t require spectacular public successes to gain from their attacks, whereas a terrorist gains nothing from subtlety. And subtle attacks are much easier to execute.

Interestingly, industry and government are much less concerned about the non-ideological attackers than they are about terrorists. Part of that is because, in our society, the private sector is responsible for protecting itself against crooks. The government will investigate, arrest and prosecute after a crime has been committed, but prevention is mainly an individual or corporate responsibility.

Industry, of course, loses regardless of who attacks them. So why aren’t they concerned about crooks and competitors attacking them? They just don’t understand, at the corporate gut level, how someone could gain from such attacks. Small business owners in high-crime areas understand, from personal experience, protection rackets and the dangers of paying danegeld. Large corporations, on the other hand, in the United States at least, have not been widely exposed to this problem.

Since one of the aims of this blog is to influence both corporate and government movers and shakers to take cyber security seriously, I will continue to emphasize the terrorist aspects of the threat; since I think that is a real threat, I have no moral qualms about that. I will also try to educate those same people about the more probable threat of cyber attack based upon crime or competition.

Wednesday, March 30, 2011

I know that for the last week or so it seems that this blog has become the Cyber Security News, but that’s because of a slew of news reports addressing vulnerabilities in control systems that might be used at high-risk chemical facilities. So I thought that this might be a good time to look at how the ability to remotely execute commands on a control system might be used for a physical attack on a chemical facility. The specific target was suggested by the key words used by an unidentified reader yesterday in a search that brought him/her to the site: ‘cyber security storage tanks’.

Storage Tanks

Most storage tanks are really nothing more than large metal drums holding some sort of chemical. Since everything is a chemical, the only sure thing that we can say about a storage tank is that it is used to store chemicals. Those chemicals can be something as innocuous as water or air or as dangerous as chlorine gas or methyl isocyanate.

Probably the majority of storage tanks in the United States have no connection to industrial control systems (ICS), but some large number of tanks are connected in one way or another to some sort of control system. An ICS can be used to monitor conditions (level, temperature, pressure, etc.) within the tank. It can also be used to manipulate the contents of the tank through control of mixing devices, loading or unloading valves, heating and cooling, or pressure manipulation.

Cyber Attack Vectors – Measurement Devices

The ability to manipulate the input or output signal of just about any monitoring or actuation device on a storage tank can be utilized to execute an attack on a high-risk chemical facility where a vulnerable control system is connected to storage tanks. Let’s start with the manipulation of signals from measurement devices:

Level Measurement: There are a wide variety of level measurement devices, but one of the most common is the delta-pressure (DP) device. Pressure measurement sensors are placed at the bottom of the tank and the top of the tank. Using the difference (delta) between those measurements and the programmed density of the material, the DP device calculates the height of the liquid column. Increasing the programmed density would lead to an under-reporting of the liquid level, allowing the tank to be overfilled.
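The hydrostatic arithmetic behind that manipulation is simple enough to sketch. This is a minimal, generic model (delta-P = density × g × height) with made-up tank numbers; it is not based on any particular vendor’s transmitter firmware:

```python
# Generic delta-pressure (DP) level calculation: the transmitter converts
# the measured pressure difference into a liquid height using a density
# value programmed into its configuration. All numbers are hypothetical.

G = 9.81  # gravitational acceleration, m/s^2

def level_from_dp(delta_p_pa: float, programmed_density: float) -> float:
    """Liquid column height (m) implied by the pressure difference (Pa)."""
    return delta_p_pa / (programmed_density * G)

# Actual conditions: 8 m of a liquid with density 1000 kg/m^3
actual_density, actual_height = 1000.0, 8.0
delta_p = actual_density * G * actual_height  # what the sensors measure

print(level_from_dp(delta_p, 1000.0))  # correct configuration: 8.0 m
print(level_from_dp(delta_p, 1250.0))  # density set too high: 6.4 m (under-reports)
print(level_from_dp(delta_p, 800.0))   # density set too low: 10.0 m (over-reports)
```

The point is that a single configuration constant, changed remotely, silently skews every level reading the operators see.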

Temperature Measurement: Most temperature measurement devices in storage tanks are used to make sure that the material is in the proper temperature range to be moved from the tank. For some chemicals, however, the temperature is monitored for safety reasons; too high a temperature and chemical reactions start that can lead to uncontrollable runaway reactions, producing toxic gases or overpressure situations that could explosively destroy the storage tank. Manipulation of the output of temperature measurement devices could allow the temperature to rise above the critical value for the material while the readings appear to remain in safe ranges.
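In the abstract, that kind of sensor-output manipulation could look like the following sketch; the class, clamp value and safety limit are all invented for illustration, not taken from any real temperature transmitter:

```python
# A compromised reporting layer that caps the temperature it passes to the
# HMI, so the display stays 'safe' while the real value climbs.
# All names and limits are hypothetical.

SAFE_LIMIT_C = 60.0  # illustrative critical temperature

class SpoofedTemperatureSensor:
    def __init__(self, clamp_at: float):
        self.clamp_at = clamp_at

    def report(self, actual_c: float) -> float:
        # Malicious clamp: never report anything above clamp_at.
        return min(actual_c, self.clamp_at)

sensor = SpoofedTemperatureSensor(clamp_at=55.0)
for actual in (40.0, 55.0, 70.0, 85.0):
    reported = sensor.report(actual)
    print(f"actual={actual}  reported={reported}  looks_safe={reported < SAFE_LIMIT_C}")
# Every reported value stays below the 60 C limit, even at an actual 85 C.
```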

Just about any measurement device could be used as a means of physical attack on the facility. If the device monitors a physical parameter that could lead to an unsafe condition, manipulation of the output of that device could lead to that condition.

Cyber Attack Vectors – Physical Control Devices

Almost certainly the most common control device at a chemical facility is the valve. Opening and closing valves can be used to physically move material into or out of storage tanks, manipulate the temperature by allowing for the flow of heating or cooling fluid through heat transfer devices, or manipulate storage tank pressures by opening vents or adding gases to the ‘empty’ headspace in the tank.

The simplest sort of attack is to open the bottom valve on a storage tank when there is no hose or other transfer device attached to that valve. If the level reporting devices are manipulated at the same time, many storage tanks could be completely emptied before anyone becomes aware of the problem.
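A toy simulation of that combined attack, with hypothetical volumes and rates, shows why the spoofed level reading matters: the physical state of the tank and the state shown to the operator diverge completely.

```python
# Toy model: the bottom valve is opened while the level signal is frozen,
# so the tank drains while the display never moves. Numbers are invented.

def simulate_drain(volume: float, drain_rate: float, steps: int,
                   frozen_reading: float):
    """Drain the tank step by step while reporting a constant (spoofed) level."""
    displayed = []
    for _ in range(steps):
        volume = max(0.0, volume - drain_rate)  # physical reality
        displayed.append(frozen_reading)        # what the operator sees
    return volume, displayed

remaining, readings = simulate_drain(
    volume=10_000.0, drain_rate=1_500.0, steps=8, frozen_reading=10_000.0)
print(remaining)      # 0.0 -- the tank is physically empty
print(set(readings))  # {10000.0} -- the display never changed
```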

Catastrophic results could be obtained even if there are transfer lines in place to move the material to locations other than the local environment. Moving material at the wrong time or in the wrong amount could lead to dangerous chemical reactions resulting in toxic gas releases, fires or explosions; results that are undesirable from every perspective but the terrorist’s or criminal’s.

Again, just about any type of physical control can, under the proper circumstances, be used to effect a catastrophic result at a high-risk chemical facility. While manipulation of devices in the production process would require a detailed understanding of that process, much less sophisticated knowledge would be needed for a catastrophic attack on storage tanks.

Storage Tank Cyber Vulnerabilities

These are just a few examples of the avenues by which manipulation of industrial control systems could be used to effect an attack on high-risk chemical facilities. Storage tanks may actually be the simplest targets for cyber attack. Since much less process knowledge, and thus less intelligence collection or insider knowledge, is needed to effect an attack on storage tanks, they are easier to attack. Furthermore, operators spend less time looking at storage tanks than they do process vessels, so they would be less likely to detect an attack in progress.

Tuesday, March 29, 2011

Yesterday Eric and Joel published the second white paper in their series of publications dealing with the multiple-system vulnerabilities discovered/publicized by Luigi last week. This document, published on TofinoSecurity.com, deals with the vulnerabilities identified in the 7-Technologies IGSS platform.

There will be some who will point out the similarities between this white paper and the initial publication on the ICONICS Genesis vulnerabilities. This was to be expected on a couple of levels: there are common vulnerabilities in the two systems, and many of the security responses would be the same for a variety of vulnerabilities. A closer look at this new publication shows the work done on identifying the differences between the vulnerabilities in the two systems.

One of the main differences here is that the vulnerable system is not just an HMI program, but an actual Supervisory Control and Data Acquisition (SCADA) system. Additionally, the vulnerabilities affect two different executable programs within the system communicating on two different ports.

This new white paper also includes six ‘compensating controls’ that owners/users should take to protect their systems pending the publication of patches by 7-Technologies. Five of these controls are the same as those found in the initial white paper, which is not surprising since they should already be in place in any ICS security program.

The one new control replaces the recommendation to change the default port used in the Genesis system. The new control recommends the installation of an intrusion detection system to help the user/owner detect someone trying to exploit these (and any other unreported) vulnerabilities. This recommendation was made possible by the recent release of IDS signatures for the IGSS platform by the two IDS systems identified in this white paper (and no, neither is produced by Byres Security).

Another good piece of work by Joel and Eric. I look forward to seeing the two white papers on the remaining systems identified by the Luigi vulnerability release.

Yesterday the Senate Commerce, Science and Transportation Committee announced that the cyber security hearing previously scheduled for today has been postponed. No reason was given and a new date/time has not yet been announced.

Dale Peterson from DigitalBond.com (FULL DISCLOSURE: I sometimes write legislative issues blog posts for DigitalBond.com) left a comment on yesterday’s blog post on software bundling issues. Dale writes that: “Owner/operators need to demand that their vendors provide a list of all software on each component for the security patch management program.” I absolutely agree and believe that if enough users start to demand this that vendors will make this listing part of the standard documentation included with the software.

User Knowledge Base

The problem is that most owner/users don’t realize how common this bundling issue is. It takes a relatively sophisticated user to understand how these components go together and interact. Dale notes that:

“This is more than the OS and major third party apps like databases, web servers, etc. It also includes components that are not visible like JRE or a SISCO ICCP stack or TMW DNP3 stack or in this case Iconics OEM code.”

Now I’m a fairly sophisticated user (not a software engineer or programmer but just a chemist that worked process issues) and I’m not sure I know exactly what Dale is describing with terms like ‘JRE’ or ‘SISCO ICCP stack’. I’m pretty sure that a very large number of owners/users are at my level of software sophistication. I’d bet that this is part of the reason that vendors don’t push more information on bundling.

This is one of the reasons that I’m pushing this issue in this blog. The more people who become aware of the bundling issue and its impact, the more likely they will be to request this information in system documentation, and maybe it will become common enough that vendors will include the information automatically.

Patch Management

Dale makes another important point in his comment; “Of course the real challenge is for the vendors to include patch compatibility testing for more than the OS.” One would expect that companies like ICONICS, who have an active development program with OEM customers, would be working with those customers to adapt the component patches into the OEM patch management.

From the debate we keep hearing about the vulnerability disclosure issue, however, I would be surprised if there was much of this sharing going on. If there were more active vulnerability identification and patching going on voluntarily then there would not be as vociferous a debate going on in the security research community on whether or not vendors should be notified about newly discovered vulnerabilities.

I have heard too many researchers cite horror stories about being ignored by vendors to believe that the bulk of bundled vulnerabilities will be dealt with unless there is outside pressure brought to bear. Again, blog posts like this will be part of the necessary pressure, but I certainly don’t believe that a few blog posts in this particular venue will make all of the vendors see the light. I’m afraid that, lacking a major move on the part of the ICS software industry, the only way this is going to be effectively addressed is if it is identified as a security requirement in comprehensive ICS security legislation.

Of course, Congress is even less likely to address this issue in legislation than most vendors are to voluntarily fix the problem without legislation. Given human nature as expressed in corporate policy or legislative action, nothing significant is going to be done until there is a major security incident involving unpatched bundled software. Then we can count on Congress to over-react and address the issue with the finesse of a bulldozer.

Oh well, I’ll be there to say ‘I told you so’. In the meantime I’ll be watching to see how many systems are identified in subsequent ICS-CERT advisories after ICONICS announces the availability of their patch for the ‘Luigi vulnerabilities’.

Monday, March 28, 2011

Friday the DHS Industrial Control System Cyber Emergency Response Team (ICS-CERT) issued an advisory on the effects that may be seen on industrial control systems when an energetic solar magnetic storm produces secondary electrical effects in Earth’s atmosphere. While not specifically addressed in the advisory, it is clear that similar effects can be expected in the electronic systems associated with many security systems employed at high-risk chemical facilities.

Solar Weather

The advisory provides a brief description of the events on the Sun’s surface that provide the energy for these physical effects found in our atmosphere. The advisory describes three different types of solar events that are associated with sunspot activity:

• Coronal Mass Ejections (CME) of large clouds of charged plasma containing embedded magnetic fields that can arrive 18 hours or more after the solar flare eruption.

The National Oceanic and Atmospheric Administration’s (NOAA) Space Weather Prediction Center is responsible for providing daily solar weather forecasts as well as alerts and advisories for solar flare and CME events that could have Earth-based effects.

Control System Interference

The advisory describes three different ways that these solar disturbances can interfere with the operations of industrial control systems. Those types of interference are:

General mitigation measures are outlined for these different ways that solar weather can interfere with ICS operations.

Facilities need to look at their potential exposure to these types of effects on their individual control systems. This would be done as part of the general vulnerability assessment of their control systems. Those with the highest potential exposure to solar weather effects need to ensure that they monitor the daily solar weather forecasts.

Sunday, March 27, 2011

Readers of this blog will probably remember my discussion of the problems that can arise with bundling software components when one of the components has a vulnerability. As part of a discussion with Joel Langill about the white paper he and Eric Byres did on the ICONICS Genesis vulnerabilities identified by Luigi, I asked if there might be an issue with the Genesis32 and/or Genesis64 HMI systems being bundled with control systems sold by other vendors. Joel noted that I had “really opened up a sort of Pandora's box” with that question and pointed me at the ICONICS web page discussion of teaming with other OEM vendors.

Bundling Genesis32 and Genesis64

The ICONICS web site explains that “OEM partnerships are integral to ICONICS' core business”. While some OEM vendors may choose to identify the ICONICS components they sell, other partners “prefer to market their solutions using our software under their own brand name”. So there may be other control systems that have the same vulnerabilities because they include the Genesis components.

The ICONICS web site includes a listing of some of their OEM partners, but does not say which ones use which of their multiple products. The names include some well known names in the chemical industry:

Again, there is no way of telling from the information provided which of these partners might be using Genesis32 or Genesis64, either under the ICONICS name or a partner rebrand. The only way a facility would know is if their vendor notified them.

So if anyone has a control system supplied by one of the OEM partners listed on the ICONICS web page (and I have listed less than half), they should probably contact their technical representative and ask if the system on their site contains the Genesis32 or Genesis64 systems. If it does, then the facility security management team needs to take a look at the Tofino white paper.

If the control system at your facility includes the Genesis components, this does not necessarily mean that the ICONICS patch, when it is made available, will be applicable to the OEM implementation of those components. Again, the ICONICS web site explains that their team works closely with the “OEM development teams [to] ensure tight integration of ICONICS technology with our partner’s solutions offerings”. That integration process may require adaptations of the patch to make it compatible with the OEM-supplied system; again, contact your tech rep for clarification.

ICS-CERT and Bundling

Now, when ICS-CERT issued their ICONICS Alert last week, there was no way that they knew about the OEM issue. Their information was based upon the information provided by Luigi on Bugtraq. But when ICS-CERT notified ICONICS one would like to think that ICONICS would have informed them that their affected systems were also bundled with other vendor systems. That would have allowed ICS-CERT to work with those suppliers to get the vulnerabilities addressed.

Alternatively, ICONICS could have decided to address the issue with each of the vendors that used the Genesis systems. There would be some justification for not notifying end users until a patch was made available for those systems; the Luigi exploits do not specifically address the OEM variants, so no one knows that those systems are vulnerable. This is fine as long as those variants are patched in a timely manner.

Unfortunately, with no formal requirement to address the issue of bundled software, there is no way for the user community to know if they have affected systems or if their systems are being adequately protected against the exploits developed for the base programs.

Saturday, March 26, 2011

It’s going to be a busy week for Congressional hearings of interest to the chemical security community; CFATS, cyber security and weapons of mass destruction will all get their hearings in Congress this week.

FY 2012 Budget Hearings

While Congress has yet to fully fund FY 2011, they are still working on the FY 2012 budget process. This week DHS’s National Protection and Programs Directorate (NPPD) gets to explain and justify the President’s budget request before the Homeland Security Subcommittee of the House Appropriations Committee; Under Secretary Beers and Deputy Under Secretary Reitinger will testify. Both CFATS and CERT show up in this area of the budget, so this will be an important hearing for the community. This hearing will be on Thursday at 2:00 p.m. EDT.

CFATS

While CFATS may be discussed in the budget hearing, it will be the main topic of conversation before the House Energy and Commerce Committee. On Thursday at 9:00 a.m. EDT they will hold a hearing on HR 908, Rep. Murphy’s (R, PA) Full Implementation of the Chemical Facility Anti-Terrorism Standards Act. Chairman Upton (R, MI) has already publicly endorsed this bill, which is why its first appearance will be a full Committee hearing. The witness list has yet to be published.

This bill is the simplest of the four bills introduced to date to extend the CFATS authorization; it simply extends the current expiration until October 4, 2017. That doesn’t mean that this will necessarily be a simple hearing. In the 111th Congress this Committee was responsible for the addition of water treatment facilities to HR 2868, so there will be at least some interest in adding provisions to this bill to remove the water facility exemption from CFATS. We’ll watch to see if the American Water Works Association has a witness at this hearing.

Cyber Security

Again, while cyber security may be a topic at the budget hearing, it will be the main focus before the Senate Commerce, Science and Transportation Committee on Tuesday at 2:30 p.m. EDT. Sen. Rockefeller’s (D, WV) Committee will look at the consequences of cyber attacks with witnesses from the FBI, IBM, Verizon and American University.

There probably won’t be much in the way of specific mention of control systems. Still, since Sen. Rockefeller has yet to introduce cyber security legislation in this session, it would be prudent to watch this hearing to see if we can tell what he will include in his inevitable bill.

WMD

Finally, it must be time for me to go to the Dentist for a cleaning because the road show team from the 9/11 Commission is back before Congress. They are going to be appearing before the Senate Homeland Security and Governmental Affairs Committee on Wednesday at 10:00 a.m. EDT to update Senators Lieberman (I, CT) and Collins (R, ME) on the risk of weapons of mass destruction. One always hopes that someone will sooner or later mention that the cheapest and easiest WMD attack would be a conventional terrorist attack on a high-risk chemical facility, but I expect that they will once again harp on nuclear weapons and biological warfare, lest we forget.

To Be Announced

Sometime late this week a new bill will be introduced providing for the funding of the Federal Government for the rest of this fiscal year beyond April 8th, and there will be a House Rules Committee hearing on the bill. Hopefully that will take place this week, allowing enough time for subsequent action on another short-term spending measure next week before the current CR expires. It is too much to expect that the Appropriations Committee will ever meet on one of these FY 2011 spending bills.

Joel Langill, SCADAHacker.com, and Eric Byres, TofinoSecurity.com, have taken an in depth look at the vulnerabilities reported earlier this week in the Genesis32 and Genesis64 HMI software and have produced a white paper on the subject. They describe the vulnerabilities, the potential consequences of an attack using the vulnerability, and provide a short list of ‘compensating controls’ to put into place pending the expected patch from ICONICS.

The blog post on the Tofino Security website introducing the white paper does make a brief pitch for some of the Byres Security technology that would be used in the compensating controls (they are in ‘the business’ after all). The white paper only provides a brief mention of a Tofino product in one footnote [corrected misspelling of Tofino. Sorry Eric, et al, 17:41 3-26-11]. I think that this is a completely justifiable level of advertising in this type of information product.

Design Exploit

One of the interesting things that comes out clearly in this report is the fact that these vulnerabilities (13 identified by Luigi in these two ICONICS products) are exploitable because of the inherent design of the system. Because the purpose of a human machine interface is to facilitate communications between the operator and the various components of the SCADA system, communications ports on the system must be enabled. This provides a potential route for remote exploitation of any security flaws in the system.

This is the reason that Joel and Eric address two communications issues in their compensating controls. First, they recommend changing the default port used by the Genesis systems, which makes it harder for a hacker to find the system’s access point. Second, they recommend installing an industrial firewall on this port to limit the traffic that can enter the system.

I asked Eric if this necessarily open communications port, even with a firewall installed, could allow the type of peer-to-peer network communications utilized by Stuxnet to keep that worm updated. Eric noted that, with a firewall that auto-generates the rules for allowing transmissions, a diligent user approving those rules would probably notice the P2P traffic if it “was on a port or to a machine that wasn’t part of the regular ICS traffic patterns”. Unfortunately, auto-approving those auto-generated rules would leave the system very vulnerable.

This is quickly becoming an important part of ICS security. The facility must be aware of the routine communications to, from and between the various parts of their control systems. It is probably no longer practical (or perhaps even possible) to completely air-gap a complex control system (safety systems are a completely different story). This means that the cyber security manager must be aware of the routine required communications so that intrusion attempts can be identified. This requires communications logging and routine, frequent reviews of those logs.
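A rough sketch of what that kind of baseline comparison might look like (the log format, host names and baseline entries here are all hypothetical, not from any particular product):

```python
# Sketch: flag control-system connections that fall outside a known baseline.
# The log format and the baseline entries are hypothetical examples.
BASELINE = {
    # Expected (source, destination, port) tuples for routine ICS traffic.
    ("hmi-01", "plc-03", 38080),
    ("historian", "hmi-01", 1433),
}

def audit(log_lines):
    """Return log entries that do not match the expected traffic baseline."""
    anomalies = []
    for line in log_lines:
        src, dst, port = line.split(",")
        if (src, dst, int(port)) not in BASELINE:
            anomalies.append(line)
    return anomalies

log = ["hmi-01,plc-03,38080", "laptop-77,plc-03,38080"]
print(audit(log))  # only the unexpected laptop connection is flagged
```

The point is not the code itself but the discipline: a review like this only works if someone has already documented what routine traffic looks like.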

Patches

The six compensating controls described in this white paper will not fully protect the system against exploits of the 13 vulnerabilities identified by Luigi. They can reduce the threat and make it easier to detect an exploit, but to remove the vulnerabilities requires a system patch. There is no official word from ICONICS on how long it will take to get a patch in place. Eric believes that the fix will be relatively simple (I always worry when software people talk about relatively simple fixes). He did tell me that he knows that ICONICS is “working really hard on getting something out to their users”.

Eric also makes another interesting point about patches. He told me that: “The question after that is ‘how quickly will users deploy the patch’? Unfortunately many companies still are not that efficient at getting patches deployed in an ICS.” This is all too true; with a reported 250,000 copies of Genesis installed worldwide, it doesn’t take a very high percentage of unpatched or slowly patched systems to leave a large number vulnerable.
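To put some rough numbers on that (using the 250,000 installed-base figure mentioned above; the unpatched percentages are purely illustrative):

```python
# Even a small unpatched fraction of a large installed base leaves many
# systems exposed. 250,000 is the installed-base figure cited above; the
# unpatched rates below are illustrative, not measured.
installed = 250_000
for unpatched_rate in (0.01, 0.05, 0.10):
    exposed = int(installed * unpatched_rate)
    print(f"{unpatched_rate:.0%} unpatched -> {exposed:,} vulnerable systems")
```

Even at a 1% lag rate that is thousands of vulnerable installations; at 10% it is tens of thousands.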

Other Luigi Vulnerabilities

I also asked Eric if he and Joel plan on doing a similar white paper on the three other systems (Siemens Tecnomatix FactoryLink, 7-Technologies IGSS, and DATAC RealWin 2.1) that Luigi identified as having vulnerabilities. The simple answer is yes. They hope to have the next one out on Monday (it will be a long weekend for those two).

One other issue: you have to be a registered user on the TofinoSecurity.com web site to be able to download the white paper. I know that many people object to registering with a web site, figuring that it opens them to receiving SPAM. This is the way Eric runs his business, and he is still giving away the fruits of his labor for free, so he gets to set the rules. I will say this: I have been registered on this site for quite some time and have not received any communications from Byres Security, commercial or otherwise, on the email address that I used for that registration.

The vulnerability would have allowed the execution of an unauthenticated SQL statement, permitting a moderately skilled remote attacker to execute arbitrary code. There is no known publicly available exploit for this vulnerability.
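For readers unfamiliar with this class of flaw, here is a generic illustration (using Python’s sqlite3 module; this is not the actual vulnerable code) of the difference between building an SQL statement from raw input and binding that input as a parameter:

```python
# Illustrative only -- not the actual vulnerable product code. An
# unauthenticated, string-built SQL statement lets an attacker inject
# arbitrary SQL; a parameterized query treats the input as data instead.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (name TEXT, value TEXT)")
conn.execute("INSERT INTO tags VALUES ('temp', '72')")

user_input = "temp' OR '1'='1"

# Vulnerable pattern: the input becomes part of the SQL statement itself.
unsafe = conn.execute(
    f"SELECT value FROM tags WHERE name = '{user_input}'").fetchall()

# Safer pattern: the driver binds the input as a parameter.
safe = conn.execute(
    "SELECT value FROM tags WHERE name = ?", (user_input,)).fetchall()

print(unsafe)  # the injected OR clause matches every row
print(safe)    # no row is literally named "temp' OR '1'='1"
```

The real flaw was presumably more involved, but the underlying lesson (never assemble SQL from unauthenticated input) is the same.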

Disclosure Debate

This vulnerability highlights an ongoing debate in the cyber security community: how to deal with the announcement of the discovery of vulnerabilities. One side of the debate would have researchers who discover vulnerabilities report them only to the vendor, so that there would be no public notification until mitigation measures (a patch, for instance) were available. The other side feels that the user community should receive the initial notification so that they would know that they are vulnerable to attack.

Actually the debate is more complicated than that with a third view that feels that vendors either ignore researchers’ reports of vulnerabilities or fix the problem without giving the researchers adequate credit for discovering the vulnerability. Additionally there is the whole question about how independent researchers are compensated for the work that goes into discovering and documenting these vulnerabilities.

This is an important debate on a number of levels, but I would like to look at it from two points of view that don’t seem to me to get enough attention: the user point of view and the point of view of regulators and legislators.

User Point of View

Most control system users are never aware of any vulnerability in their control systems unless they are notified of the problem by the vendor. Frequently patches are installed without any realization of the underlying reason for the patch. Because of the problems associated with shutting down processes and with installing and testing patches, patch installation is delayed until scheduled shutdowns or simply ignored since there is no apparent problem with the system. This may leave the systems vulnerable to attack for prolonged periods of time.

The question then arises: how could these facilities respond to early notification of a vulnerability? If no patch was yet available, what would the facility do? The most that could really be done would be to pay closer attention to the system. If the other mitigation measures typically recommended by ICS-CERT were not already in place (isolation from the Internet, firewalls, etc.), then a facility might see the reason for their implementation if they knew that there was a particular vulnerability.

This is the main reason for the public discussion of these vulnerabilities; as the user community comes to realize that control systems are vulnerable to potential attack they might be more likely to take the relatively simple minimum defensive measures recommended by ICS-CERT. The use of those defensive measures might be more likely if the vulnerabilities were publicly released before patches were available. Advanced defensive measures are certainly unlikely to be used if the user is not aware of potential vulnerabilities to attack.

Regulatory Point of View

As we continue to see discussion of the possibility of cyber security legislation it is important that the people writing that legislation and the potential regulators are aware of this discussion. One of the things that has been missing in the discussion of cyber security in general, and ICS security specifically, has been a discussion of the legal liability of software producers for vulnerabilities in their products.

One of the things we have seen in the researcher debate is the question of who is responsible for the ‘deplorable state’ of the quality of the software. Some people say that the problem is with the vendors/developers taking shortcuts in the development process. Others say that the user community is responsible because of their demand for lower costs and lack of demand for security measures. One thing is certain; there is currently no financial incentive for an ICS developer to ensure that there are no security bugs in the systems that they sell.

Any comprehensive cyber security legislation needs to address the issue of software vulnerabilities. I am not going to suggest that there be a ‘zero defects’ requirement, but there does need to be some standard set for how vendors deal with reports of vulnerabilities. As a starting point for the discussion of how that might look, I would like to suggest this:

• ICS-CERT should be tasked with verifying reported vulnerabilities in control systems;

• Vendors should be required to share vulnerabilities reported to them with ICS-CERT;

• Researchers should be encouraged to report vulnerabilities to ICS-CERT instead of vendors;

• Vendors should be required to notify customers of all vulnerabilities verified by ICS-CERT within 30-days along with suggested interim security measures;

• ICS-CERT should be given authority to set a time limit for patch development; and

• Vendor compensation of researchers should be established for each verified vulnerability.

Thursday, March 24, 2011

I ran across an interesting article at HSToday.us about the latest incident of the use of a ‘cloned’ vehicle on the US southern border. In this case a privately owned vehicle, marked (and carrying stolen plates) as a US Marine Corps van, was used to successfully smuggle people across the border. Fortunately Border Patrol agents became suspicious of the vehicle and stopped it at a checkpoint on I-8.

Cloned vehicles are apparently relatively common tools used by smugglers and point out a common security problem. Too many people relax their vigilance when they see a vehicle that they assume represents a common visitor to a site. This can result in that vehicle receiving less than adequate security screening.

While it would not be expected that terrorists would use a cloned USMC vehicle to attack a high-risk chemical facility (most chemical facilities would not expect to see a USMC vehicle), this incident goes to show how common the technique is. Security managers will want to pay attention to how their security forces treat vehicles from common delivery services. If those vehicles only get a cursory review upon entry or exit, then there is the potential that terrorists could use them as a method of infiltrating or exfiltrating the facility security perimeter.

For this vulnerability Ruben Santamarta, the security researcher who had identified the vulnerability, had previously notified ICS-CERT of the problem, but BroadWin had not been able to validate the vulnerability. As a result Ruben publicly released details, including the exploit code, leading ICS-CERT to publish this alert.

Gleg, Ltd Update

On the 0-day vulnerabilities that I reported on yesterday, there is some question as to whether these are really new vulnerabilities or just a re-reporting of ones that have already been identified by ICS-CERT. Dale Peterson, DigitalBond.com, notified me yesterday that he believes that they are previously identified vulnerabilities. Joel Langill, who reported the Agora SCADA+ coverage of these vulnerabilities on his SCADAHacker blog, is not convinced and is conducting further research. Apparently ICS-CERT agrees with Dale, as they did not publish any alerts on the reported vulnerabilities. I’ll clarify the situation as more information becomes available.

Luigi Vulnerability Updates

Joel is also reporting on the TofinoSecurity.com blog that he and Eric Byres decided to take a closer look at the systems on which Luigi reported multiple vulnerabilities earlier this week, to see if there were additional vulnerabilities in those systems. Sure enough, they found another vulnerability in the first system they checked. Joel did not provide details on the vulnerability in the blog, as they have reported the issue to ICS-CERT to allow them to work with the as-yet-unidentified vendor to correct the problem.

Again, this just further emphasizes the point I made yesterday. As security researchers (and presumably less ethical hackers) begin to seriously look at ICS software, they are going to be finding lots of vulnerabilities. These newly identified vulnerabilities are going to increase the likelihood of actual attacks on control systems and will make it easier for terrorists to obtain the tools necessary to conduct remote attacks on high-risk chemical facilities.

Wednesday, March 23, 2011

Today the folks at the DHS Office of Infrastructure Protection updated the web site for the 2011 Chemical Sector Security Summit (2011 CSSS). Actually, it was updated from a web page to a web site. Two main changes were made: a registration page was added, as was an outline of the preliminary agenda. If readers had taken advantage of the sign-up link at the top of the landing page, they would have received an email this morning informing them of the change on the web site.

2011 CSSS Page

The landing page has been completely re-worked with only the 2011 CSSS Poster remaining on the page. It provides some overview information on the registration, a link to the Hilton Baltimore, and some additional information on the pre- and post-summit programs that have been added this year.

If you have any questions about the usefulness of the information that will be presented, just look under ‘Related Resources’ in the box on the right side of the page to find links to some of the presentations from the 2008, 2009 and 2010 Summits.

The Pre-Summit Demonstrations on July 5th provide attendees with exposure to three different on-line tools of use to the chemical security community. They are:

● Web-Based Chemical Security Awareness Training;

● Voluntary Chemical Assessment Tool Demonstration; and

● Navigating the Chemical Security Assessment Tool (CSAT) Help Desk and Web Site

The Post-Summit Workshops on July 8th will provide training on areas of particular interest to the chemical security community. The three workshops are:

● Chemical Sector Explosive Threat Awareness Training;

● Tabletop Exercise Workshop; and

● Control Systems Security Workshop

Registration for any of these extra-Summit presentations may be done on the CSSS Registration page.

Registration

There are two major changes to the registration process this year. First, DHS has farmed out responsibility for the registration process to CVent; they have the expertise, so DHS might as well let them do that job and free up DHS folks for regulatory work. Second, this year DHS is limiting registration to two people from “each organization, company, and agency” instead of the three people allowed last year.

Again DHS is trying to ensure that the widest possible audience is allowed to attend. I hope that this year they consider adding an on-line component providing the vast bulk of the chemical security community a chance to have complete access to the presentations. DHS will undoubtedly provide copies of the slide presentations after the Summit is over, but the slides do not contain all of the information provided in the live presentations.

Preliminary Agenda

I’m not going to waste your time with a listing of the complete preliminary agenda, just go to the web site if you’re interested in perusing the entire agenda. I would like to highlight some areas that I think make this Summit worth attending. They include:

● Inspections Process Lessons Learned (available in each ‘breakout’ period on July 6th)

● Update on Chemical Security in the Agriculture and Food Chain

● Personnel Surety Panel

Networking

As with any show like this it is the opportunity to meet others in the chemical security community that is probably the most valuable part of the CSSS. The chance to talk to chemical security professionals from other facilities provides a learning opportunity about what has worked and not worked for other facilities. A very smart man once told me that a smart man learns from his own mistakes, but a brilliant man learns from the mistakes of others.

One Nit to Pick: The date on the bottom of these pages would tend to make one think that this change took place on Monday. These web pages were updated after 8:00 am EDT today.

As if the 34 new SCADA exploits reported by Luigi were not going to be enough of a problem for the chemical security community, yesterday Joel Langill posted a copy of an advertisement from Gleg Ltd for the Agora SCADA+ exploit pack for CANVAS on his SCADAHacker blog. This ad claims to provide exploits for ICS software from ClearScada, DataRate and Indusoft. As of yesterday evening there was no ICS-CERT alert associated with these reported vulnerabilities.

Security Researcher Debate

The public availability of these 45 new SCADA exploits from security researchers before the software vendors were provided a chance to fix the problems has provoked some discussion within the cyber security research community. A number of researchers have made it clear that they would prefer to see the vendors given a chance to correct these problems before these exploits are made publicly available. This would reduce the risk to the user community as they would have a chance to upgrade their systems before these attack tools became generally available.

Other researchers have maintained that most of these recently released exploits deal with vulnerabilities that are very similar to those that have been reported in non-ICS systems for years now. They look at these vulnerabilities as problems that should have already been corrected by the vendors so they see no reason why the exploits should be held back.

This debate is of more than theoretical interest to the chemical security community. The public availability of these exploits potentially puts chemical facilities at risk for an attack on their control systems. These vulnerabilities already existed, but with these exploits publicly available, it has become easier for less talented hackers to utilize these vulnerabilities as entry points for attacks. These facilities will remain at higher-risk of potential attack until patches become available for these vulnerabilities.

New Era of ICS Cyber Security

Stuxnet made it clear to the whole world that attacks on industrial control systems were a very real possibility. A whole host of professional security researchers, black hat hackers, and interested amateurs are now directing their efforts at finding new vulnerabilities in ICS systems used in a whole host of critical infrastructure facilities. It seems inevitable that they will continue to find new vulnerabilities in more control systems.

As these exploits become more widely available, it is equally inevitable that they will be used in attempted attacks. Most of these attacks will have limited success, but even minor upsets at high-risk chemical facilities, for instance, can have terrible consequences. Similarly, other critical infrastructure facilities using these control systems will also see an increase in the attacks that will have adverse consequences.

Because of the interconnectedness of modern industrial production, attacks on any portion of the supply chain, including utilities, can have serious consequences for facilities with no direct exposure to the attack. Congress needs to recognize that the world has significantly changed. ICS security is no longer an academic issue; it will have real world consequences in the near future.

Legislation

It is essentially too late for Congress to take a leadership role in this area. I expect news of the consequences of attacks to make headlines long before a real ICS cyber security bill makes it through the legislative process (lacking a high-profile attack, any real ICS security legislation is probably years away; we’ll just have to make do with the makeshift stuff in current bills). That means that, if we do have an attack with catastrophic consequences (and in today’s economy any economic consequence could end up being catastrophic), then we can expect Congress to over-react in whatever post-attack legislation they write.

I suppose that it’s too late to worry about that now. The attack code that will precipitate the cyber security crisis is probably being written now. Maybe the cyber attackers (terrorists, criminals, international business rivals, whatever) will be as incompetent and stupid as the terrorists have been since 9/11. We can always hope...

Tuesday, March 22, 2011

Yesterday afternoon a Nuclear Regulatory Commission (NRC) meeting-notice was posted on the Federal Register ‘pre-view’ web site about a public briefing on the “NRC Response to Recent Nuclear Events in Japan”. While that subject is not of direct concern to the chemical security community (though everyone should have some concerns about the issue) I am mentioning this here because of some administrative problems this notice underlines in the communications tools available to the Federal Government.

The problem with this notice is that it was posted to the preview site at 4:15 pm EDT yesterday for actual publication in the Federal Register scheduled for March 23rd. Now that isn’t too unusual until you consider when the public meeting is (actually was) scheduled; March 21, 2011 at 9:00 a.m. EDT. In other words the public notice was ‘published’ (still not yet legally published, but actually published) seven hours after the start of the meeting.

Now, I don’t think that the NRC was attempting to hide anything, and I even think that I briefly saw a clip from this briefing on the late news last night (it was short, and it was late, and I wasn’t completely awake), but this isn’t what we really want to see when dealing with critical information like this. We want to see as much advance notice as is practical.

I do have two questions. The notice states that the NRC voted on 3-15-11 to hold the meeting without giving the ‘required’ one-week notice; I understand and approve of that. But why wasn’t the notice given to the Federal Register folks until Monday afternoon? And then why was publication delayed until Wednesday instead of Tuesday? I could have typed up the meeting notice provided on the 15th, and it could have been to the Federal Register folks on the 15th, 16th, 17th or 18th, still providing some public notice of the meeting.

I suspect that it was held up for bureaucratic reasons, waiting for release authority before it was sent out. If that’s the case, then someone needs to answer for that delay because this long a delay for the release of time sensitive information is inexcusable. If there was some other reason, legitimate or otherwise, that also needs to be shared.

The government has a responsibility to provide timely information to the public and notice of public meetings is one important type of timely information. When circumstances limit the time available for providing notice, extraordinary efforts must be taken to provide as much notice as possible. Finally, there is never any excuse for providing notice of a meeting after the meeting is over.

Yesterday the RegInfo.gov web site announced that the Office of Management and Budget had approved a DOJ interim final rule dealing with providing reimbursement of victims of international terrorism. As I mentioned in January when this rule was submitted to OMB, this is a ‘new’ DOJ initiative that was not included in the Administration’s Unified Agenda printed in December (75 FR 79449).

The OMB approval was ‘Consistent with Change’ which presumably means that OMB is requiring some relatively minor changes to the document before it is published in the Federal Register. I suppose that we can expect to see this in the next couple of weeks.

Yesterday evening the folks at the DHS Industrial Control System Cyber Emergency Response Team (ICS-CERT) took the unusual move of publishing four separate control system vulnerability alerts. Of potentially more interest, they took this action because a single security researcher, Luigi Auriemma, published 34 separate exploits for 0-day vulnerabilities in those four systems. Oh, yes, the other important piece of information: Luigi is not a SCADA expert, ‘just’ a prolific writer of 0-day exploits.

ICS-CERT was very prompt in issuing these four alerts. The exploits were published yesterday and less than 12 hours later the alerts were posted on their web site. Of course the fact that the cyber security community was actively discussing Luigi’s feat on-line probably made it easier to get the bureaucratic approval necessary to publish the alerts in a timely manner.

Underlying Issue

Luigi described the vulnerabilities this way:

“In technical terms the SCADA software is just the same as any other software used everyday, so with inputs (in this case they are servers so the input is the TCP/IP network) and vulnerabilities: stack and heap overflows, integer overflows, arbitrary commands execution, format strings, double and arbitrary memory frees, memory corruptions, directory traversals, design problems and various other bugs.”

This just goes to show that the same problems that the general software development community has been dealing with for years probably exist in the ICS software. As more security researchers (both ‘good’ and ‘bad’) turn their attention to control systems, it seems inevitable that more 0-day vulnerabilities, probably many more, are going to be found.
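As a generic illustration of one of the bug classes Luigi lists, here is a hypothetical directory-traversal example and its fix; the base directory and file paths are made up and come from no particular product:

```python
# Sketch of one bug class from Luigi's list -- directory traversal -- and a
# fix. The file-serving function and paths are hypothetical examples.
import os

BASE = "/var/scada/files"

def unsafe_path(requested):
    # Naive join: a request like "../../etc/passwd" escapes the intended
    # directory entirely.
    return os.path.normpath(os.path.join(BASE, requested))

def safe_path(requested):
    # Resolve the path, then verify the result still lives under BASE.
    full = os.path.normpath(os.path.join(BASE, requested))
    if not full.startswith(BASE + os.sep):
        raise ValueError("directory traversal attempt")
    return full

print(unsafe_path("../../etc/passwd"))  # resolves outside BASE
print(safe_path("reports/day1.csv"))    # a legitimate request is allowed
```

Bugs this well understood have had standard defenses in the general software world for a decade, which is exactly the point being made above about ICS software.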

That this problem exists is not a new idea. Dale Peterson over at DigitalBond.com put it this way yesterday:

“Realistically though, there is a huge amount of legacy code out there with latent vulnerabilities waiting for smart guys like Luigi to find. Vendors that are making their software available for download have to expect that someone in the security research community, and probably some bad guys, will download the product just to find vulnerabilties and build exploits. We mentioned this in previous blog entries, but hopefully 34 vulnerabilities will prove the point.”

For the user community this means that, if Stuxnet was not enough of a warning, Luigi demonstrated yesterday how easy it would be for even a moderately talented hacker (please, I am not saying Luigi is just ‘moderately talented’; that is obviously not true) to attack a system. With the exploits published yesterday, owners of systems that contain these programs don’t even have the minimal comfort that an attack would require a moderate skill level. The basic hacker now has the tools available to access those systems.

Mitigation

How long will it take to get patches for these vulnerabilities? We’ll have to wait and see. Remember, though, the software development cycle started yesterday. Don’t hold your breath; it takes time to fix these things.

In the meantime ICS-CERT provides this generic guidance in their alerts:

“Control system devices should not directly face the Internet. Locate control system networks and devices behind firewalls, and isolate them from the business network. If remote access is required, employ secure methods such as Virtual Private Networks (VPNs).”
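A minimal sketch of how a facility might spot-check that first recommendation from outside its own network (the host name and port below are placeholders, not real systems):

```python
# Sketch: a quick external-exposure check for a control-system port, run
# from outside the plant network. The host and port are placeholders.
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: port_reachable("plant.example.com", 38080) should come back
# False if the control network is properly isolated behind a firewall.
```

A check like this is no substitute for a real architecture review, but it can catch the embarrassing case of a control port accidentally left facing the Internet.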

Today I would like to start looking at the requirements in the NPRM for developing written operating procedures for the bulk transfer operations involving tank trucks.

General Requirements

The new §177.831(b) would require that anyone who is required to do a bulk transfer risk assessment under paragraph (a) “must develop, maintain, and adhere to an operating procedure for the specific loading or unloading operation based on the completed risk assessment”. According to this new paragraph of the HMR, those procedures must address seven general areas:

PHMSA recognizes that facilities may already be taking measures under other regulations that could be used to fulfill portions of the requirements for these newly required bulk transfer operating procedures. Those regulations include:

Section 177.831(b)(5) would essentially apply the current HMR design, maintenance, and testing requirements of part 178, subpart J (and §180.416 for compressed gases) to the facility side of these transfer operations. It clearly specifies that these requirements pertain to transfer equipment and systems “including pumps, piping, hoses, and connections”. This is in keeping with the hazard assessment requirements to address “any device in the loading and unloading system that is designed specifically to transfer product between the internal valve on the cargo tank and the first permanent valve on the supply or receiving equipment” in §177.831(a).

There is a potential hole in the requirements of this section. Sub-paragraph (b)(5) specifically states that each “person who conducts these operations [emphasis added] must develop and implement a periodic maintenance schedule”. This would seem to imply that facilities that just supply such equipment for the use of carriers who actually conduct the operations would not be required to maintain that equipment.

The discussion in the preamble specifically contradicts this apparent oversight stating that:

“PHMSA is proposing to require facilities that provide transfer equipment that is connected directly to CTMVs and used to load or unload product from the tank, to implement maintenance and inspection programs consistent with existing standards for hoses carried aboard CTMVs. At a minimum, the operational procedure must include a hose maintenance program.” (76 FR 13321)

Courts typically take cognizance of the explanatory comments in the preamble to these rules, but it sure makes it difficult for people to comply with regulations when these ‘minor’ details are left out of the regulation.

Record Keeping

Section 177.831(b)(7) sets forth the recordkeeping requirements for these operational procedures and the supporting hazard assessment. Actually, the requirement to keep a copy of the hazard assessment with the operating procedures is found in §177.831(a)(3). Facility-based procedures must be available at the facility where the loading and/or unloading is conducted.

Carrier based procedures must be available in the truck involved in the loading/unloading operation. In fact, PHMSA is modifying their information collection request (76 FR 13324) for shipping papers (OMB Control No. 2137-0034) to include the requirement for this document. This certainly implies that these procedures would be carried with the shipping papers in the driver’s door of the truck.

Pre-approval of the operating procedures is not required but these procedures must be produced upon demand to any “authorized official of a Federal, State, or local government agency at reasonable times and locations”.

Other Requirements

I’ll discuss some of the other requirements for these operating procedures in future blogs in this series.

Last week Rep. Young (R, AK) introduced HR 1143, the TWIC Delivery Act of 2011. This bill would require that Transportation Workers Identification Credentials (TWIC) be delivered by U.S. mail to any applicant who resides more than 100 miles from a transportation security card enrollment center. Currently all TWICs are required to be picked up in person at the enrollment center so that the recipient’s identity can be verified.

Sunday, March 20, 2011

In a blog posting earlier this week I discussed some of the considerations that would need to be included in the hazard assessment to be required by the newly proposed PHMSA rules for bulk loading and unloading of hazardous materials to and from tank trucks. Readers of this blog might have been surprised that I didn’t discuss the review of a potential terrorist threat as one of the hazards that would need to be included in the proposed hazard assessment. The reason is simple; PHMSA specifically excludes security issues from consideration in their NPRM.

In the preamble to this proposed rule PHMSA states that: “Security and incidental storage of bulk transport tanks are beyond the scope of this rulemaking action.” (76 FR 13317) While I personally object to this exclusion as being short-sighted, I understand the reason that PHMSA has taken this position. After all, Congress has given TSA responsibility for transportation-related security issues (while not providing the resources necessary to enforce those requirements, but that’s another issue). Furthermore, since the described hazmat transfers, by definition, take place at facilities, the highest-risk facilities are already supposed to consider security risks under the CFATS program.

Having said that, PHMSA does briefly address at least one security issue in their discussion of the types of things that the risk assessment should address. In the preamble, where they discuss the conditions that might affect the safety of the transfer operation, PHMSA lists the following items that should be addressed in the risk assessment: “access control [emphasis added], lighting, ignition sources, physical obstructions, and weather condition” (76 FR 13320). ‘Access control’ is certainly a security issue.

Access Control and Transfer Operations

While not specifically outlined in the NPRM, there are two different types of access control that will need to be addressed in the risk assessment and the subsequent operating procedures. The first deals with the access of the tank truck to the facility and the second deals with the access of the driver. While the two would seem to be intimately intertwined, they need to be considered separately.

The truck entering the facility for transfer operations needs to be confirmed as the truck and trailer that were supposed to be sent to the facility for that particular transfer. Since the carrier was supposed to have conducted a risk assessment on the trailer before it arrived, ensuring that it is safe for the load it is to carry, it is important to check that the arriving tank wagon is the one upon which that risk assessment was done.

This can only be done by having the carrier independently communicate to the facility the identity of the tank wagon destined for a particular load. The person who clears the vehicle to enter the facility needs to have a listing of vehicles that will be coming into the facility with appropriate identifying information. That information needs to be in the hands of the facility before the vehicle arrives.
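Sketched in code, the gate check described above is essentially a lookup against the carrier's advance notification. This is purely illustrative; neither the NPRM nor current practice prescribes a data format, so every field name here is an assumption.

```python
# Hypothetical sketch of the gate check: the carrier sends an advance
# notification for each load, and the guard verifies the arriving vehicle
# against it before clearing entry. All field names are illustrative.

def index_notifications(notifications):
    """Index the carrier's advance notifications by trailer ID."""
    return {n["trailer_id"]: n for n in notifications}

def clear_vehicle(expected, tractor_id, trailer_id):
    """Return the matching notification, or None if this is not the
    vehicle the carrier's risk assessment was performed on."""
    note = expected.get(trailer_id)
    if note is None or note["tractor_id"] != tractor_id:
        return None
    return note

notifications = [
    {"trailer_id": "TW-4471", "tractor_id": "TR-102", "product": "aqua ammonia"},
]
expected = index_notifications(notifications)
print(clear_vehicle(expected, "TR-102", "TW-4471") is not None)  # expected truck
print(clear_vehicle(expected, "TR-999", "TW-4471") is None)      # wrong tractor
```

The key design point is that the notification must arrive by an independent channel before the truck does; a paper carried by the driver proves nothing about the vehicle's identity.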

Before the transfer operations begin the vehicle needs to be checked to ensure that there is no new damage to the vehicle that would negate the previously done risk assessment. High-risk chemical facilities will also check to ensure that there are no improvised explosive devices placed on the vehicle. While this is essentially a requirement for high-risk facilities, all facilities should do at least a cursory inspection for this type of risk.

Access control for the driver of the vehicle is as important as checking the identity of the vehicle. Particularly where a driver is part of the transfer operations, the facility needs to ensure that the driver is one who is appropriately trained in those operations as required by this NPRM. This is most easily done by requiring that the carrier provide certification on their vehicle notification that the identified driver is trained in accordance with the provisions of §172.704(a)(2)(iii) outlined in this NPRM.

High-risk chemical facilities will have additional requirements under the personnel surety requirement of RBPS #12. Drivers transferring the hazmat loads described in this NPRM will be required to have a Hazmat endorsement on their CDL, which requires a background check conducted by the TSA. Verifying the driver's identity against that document and against the listed driver on the vehicle notification document should satisfy the RBPS #12 requirements.
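Putting the driver-side checks together, the verification might look like the following sketch. The notification fields (driver name, training certification flag) are hypothetical; the training reference is the §172.704(a)(2)(iii) requirement proposed in the NPRM.

```python
# Hypothetical driver check against the carrier's advance notification.
# Field names are illustrative; the NPRM prescribes no data format.

def verify_driver(notification, driver_name, has_hazmat_endorsement):
    """Return a list of reasons to refuse the driver (empty list = cleared)."""
    problems = []
    if driver_name != notification["driver"]:
        problems.append("driver does not match carrier notification")
    if not notification["training_certified"]:
        # carrier certification of training under proposed 172.704(a)(2)(iii)
        problems.append("carrier has not certified transfer training")
    if not has_hazmat_endorsement:
        # TSA-vetted Hazmat endorsement on the CDL supports RBPS #12 vetting
        problems.append("no Hazmat endorsement on CDL")
    return problems

note = {"driver": "J. Smith", "training_certified": True}
print(verify_driver(note, "J. Smith", True))   # [] -> cleared to proceed
print(verify_driver(note, "A. Jones", False))  # two reasons to refuse entry
```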

Carrier Transfer Operations

Where the carrier has sole responsibility for transfer operations at a facility, there is still a facility responsibility for controlling access to their property. In fact, since the facility is surrendering control of the safety of the transfer operation to the carrier, it is probably more important to put strong access controls in place to ensure that at least the right person and the right vehicle are involved in the process.

Saturday, March 19, 2011

On Thursday Rep. Langevin (D, RI) introduced HR 1136, the Executive Cyberspace Coordination Act of 2011. This bill, like most cybersecurity legislation introduced to date, deals principally with the security of Federal electronic information systems. It does, however, provide authority for the regulation of private sector information systems that support industrial control systems in critical infrastructure.

Federal Information System Security

This bill would provide a “comprehensive framework for ensuring the effectiveness of information security controls over information resources that support Federal operations and assets” {§3551(1)}. It would establish the Office for Cyberspace within the Executive Office of the President headed by a Director that would be appointed by the President with the consent of the Senate. The Director would serve as a member of the National Security Council. The bill would also require the President to appoint a Federal Chief Technology Officer (Federal CTO) in a separate Office of the Federal Chief Technology Officer within the Executive Office.

Within the Office for Cyberspace the legislation would also create the Federal Cybersecurity Practice Board. The Board would be chaired by the Director and would include representatives of various Federal agencies including OMB, DOD and the Federal law enforcement community. This Board would “be responsible for developing and periodically updating information security policies and procedures” {§3554(c)(1)} for protecting the Federal government’s information technology systems.

The Director is also given the responsibility to “review and offer a non-binding approval or disapproval of each agency’s annual budget to each such agency before the submission of such budget by the head of the agency to the Office of Management and Budget” {§3555(c)(2)}. Lacking actual budget control authority the Director would act more as an advisor than a controller of the security of Federal information technology systems.

This lack of real control authority is reflected in the specific requirement for each agency to “develop, document, and implement an agencywide [sic] information security program” {§3556(b)}. Additionally, the Secretary of Commerce (in consultation with the DHS Secretary) is given broad authority to “promulgate information security standards pertaining to Federal information systems” {§3557(a)(1)(A)}.

Critical Infrastructure

The last 2½ pages of this bill address cybersecurity for critical infrastructure. The entire Title III of this bill relies upon one of the most sweeping definitions of ‘critical information infrastructure’ that I have ever seen. Section 301(1) of the bill states:

“The term ‘critical information infrastructure’ means the electronic information and communications systems, software, and assets that control, protect, process, transmit, receive, program, or store information in any form, including data, voice, and video, relied upon by critical infrastructure, industrial control systems such as supervisory control and data acquisition systems, and programmable logic controllers. This shall also include such systems of the Federal Government.”

I hate to be an English Nerd, but the comma behind the words ‘critical infrastructure’ means that any industrial control system or programmable logic controllers residing on, or being supported by, an electronic network of any sort makes that network a piece of ‘critical information infrastructure’. Taken to its logical extreme, this definition would include the electronics system in every modern automobile.

Having established a very expansive scope of the potentially regulated community this Title then provides the Secretary of Homeland Security the primary authority “in creation, verification, and enforcement of measures with respect to the protection of critical information infrastructure, including promulgating risk-informed information security practices and standards applicable to critical information infrastructures that are not owned by or under the direct control of the Federal Government” {§302(a)}.

This broad authority is tempered only by the requirement to coordinate with ‘sector specific regulatory agencies’ “in establishing [those] enforcement mechanisms” {§302(b)(2)}. Of course, DHS is that regulatory agency for a number of sectors including the chemical sector.

The only saving grace is that the scope and authority is so wide and all encompassing as to be practically meaningless. Any attempt to establish cybersecurity regulations under this authority would be tied up in court so fast that thousands of lawyers would get rich on the billable hours on these cases alone. Besides, there are no provisions in this legislation for establishing an agency within DHS to exercise this authority, or giving an existing agency that authority. So practically speaking, there is no one to write the regulations for industry to object to.

I expect that, if this bill goes anywhere, there will be substantial revisions to Title III.

Friday, March 18, 2011

Thanks to John C.W. Bennett for keeping me on track with TWIC issues. He caught me out in a rookie error in today’s blog about HR 1105. He reminded me that; “Actually the TWIC Reader rule will be a Coast Guard rulemaking--they released the ANPRM a while back. TSA is only running the Pilot Study.”

TWIC is a strange program with shared responsibilities between TSA and the Coast Guard. Since the TWIC is part of the MTSA program it is administered by the Coast Guard and they write the regulations regarding how the TWIC is used. As Mr. Bennett reminded me, this includes the writing of the pending TWIC Reader rule.

The TSA, on the other hand, deals with many of the technical details of making the darn things work. This includes conducting the TWIC Reader evaluation.

Earlier this week Rep. Thompson (D, MS), along with four fellow Democrats on the House Homeland Security Committee, introduced HR 1105, the Transitioning With an Improved Credential (TWIC) Program Act. This bill would extend the approaching expiration of many Transportation Worker Identification Credentials (TWIC) until December 31st, 2014 or until DHS implements a TWIC Reader regulation, whichever comes first.

The TWIC Readers are devices that would allow for biometric verification of the identity of the person issued the card. Until those readers are approved by TSA and the regulations put into place regarding the use of those readers the full value of the TWIC program will not be realized. TSA recently completed a field study of the use of a variety of different TWIC Readers and the report on that study should be sent to Congress in the near future. This study was one of the prerequisites to being able to formulate the TWIC Reader regulations.

The first of the TWICs issued are set to begin expiring next year. In a press release dated yesterday, Rep. Thompson, the Ranking Member of the Committee, said:

“Hard working transportation workers shouldn’t have to go through the time and expense of renewing their TWICs if DHS doesn’t even have a finalized plan for deployment of the readers. My bill will address this unique problem by delaying the necessary renewal of these expensive cards until DHS issues the final reader rule or December 31, 2014, whichever is earlier.”

Readers of this blog should recall that TSA currently plans to issue their notice of proposed rule making (NPRM) for the TWIC Reader rule in November of this year. This should allow for the issuance of the final rule before the December 2014 date mentioned in this bill.

Thursday, March 17, 2011

The news reporting on the recent earthquake and resulting tsunami in Japan has focused on the nuclear facility, but there have been intermittent reports about the effects of the devastation on chemical facilities, particularly some refineries in the affected area. Because of the higher-profile radiation risk there has been essentially no discussion of the inevitable chemical releases that must have happened in the area.

Protective Measures

Incidents like this one provide an interesting starting point for the discussion of the necessity for planning for emergencies. While it is obvious that facilities in known earthquake zones, for instance, need to plan for earthquakes, the thing that needs to be discussed is how strong an earthquake actually needs to be planned for. Similar debates need to be undertaken for other natural disasters as well.

Presumably a facility could be hardened enough to provide protection against a 9.0 earthquake. I'm not sure how expensive it would be to protect a facility against that level of physical threat, but I'm sure that it would cost so much that the facility would never get built. While I'm sure that there are those who would argue against accepting this level of risk, I think that most people would at least theoretically accept that an event that has happened only 5 times in the last 100+ years around the entire world is one that can be ignored from a prevention planning point of view. But at some point a level of earthquake threat must be addressed in the planning process; facilities in earthquake-prone areas must be able to withstand some level of shaking without significant chemical release.

Having said that for natural disasters, wouldn't it be practical to say that man-made incidents like terrorist attacks should be addressed the same way? Shouldn't low-frequency events be ignored in the development of facility security? After all, we haven't had a terrorist attack on a chemical facility in the United States, so why should a facility spend a great deal of money on security measures to prevent such an attack?

There are a couple of fundamental differences between an earthquake and a terrorist attack. First off, an earthquake of this magnitude is essentially a random event with a long time lag between occurrences. The probability of one occurring at any particular place in any given time frame is vanishingly small and the probability cannot be influenced by actions taken by facility management.

Terrorist attacks, on the other hand, are human-directed and anything but random. They may be difficult to predict, but that is because the perpetrators take efforts to conceal their intentions. Classes of targets, however, can be identified through an analysis of past actions, public pronouncements and infiltration activities by intelligence agencies. More importantly, taking preventive measures makes a potential target less appealing because of a perception of a reduced probability of success.

Emergency Response Planning

While it is not practical to harden all high-risk facilities against a 9.0 earthquake, the same cannot be said for emergency planning for a response to the effects of such an earthquake. The military long ago learned that contingency planning is a relatively low cost exercise and can be written off as a training expense.

It seems that practicing the efforts that go into planning for an emergency response enhances the emergency response capability of those doing the practice. Just the act of identifying the various problems that could occur, and thinking about what to do if they do, makes it less likely that something will happen that hasn't been thought of.

For example, if the management team of this nuclear facility in Japan had thought about the consequences of a strong off-shore earthquake, they would have realized that a large tsunami would be an expected occurrence. A large tsunami would inevitably destroy power transmission over a wide area and would result in prolonged power outages. The consequences of such a long-term power outage should have been easy to predict. Some advance planning might have come up with alternative cooling techniques that could have prevented at least some of the problems that we are seeing today.

The same sort of process needs to be undertaken for high-risk chemical facilities in regards to potential terrorist attacks. First we have to admit that there is no such thing as perfect protection or security. If a terrorist group is determined enough and skillful enough, their attack will succeed; our security measures just reduce the probability of that success. Hopefully we will convince them to attack somewhere else; if not, we must be prepared to respond to that attack.

Once we admit, even just internally, that there is a possibility of an attack succeeding, we must take the next step and plan how we will deal with the consequences of that successful attack. If we think this through in advance, we may be able to come up with measures that will mitigate the effects of such an attack. Measures conceived of and planned for in advance of their need are always easier to implement and usually more likely to succeed than those conceived of in the heat of the moment.

Learn the Lesson

So, as a community, let’s learn the lesson of the Great Japanese Earthquake of 2011. Let us start to think about the unthinkable. What happens if a dedicated team of trained terrorists successfully attacks our high-risk chemical facility? Start to think about that today, while you have the luxury of taking your time in your considerations of what could happen and how you should respond. The luxury of time can rapidly disappear; just ask the management of the Fukushima Power Plant.

The Senate this afternoon voted 87 to 13 to pass HJ Res 48, the latest short-term extension of the current continuing resolution funding the Federal government. This resolution will extend the current CR until April 8th if it is signed, as expected, by President Obama. The resolution includes language that would extend the current authorization for the CFATS program (among others) until April 8th.

Nine of the 13 Nay votes came from Republican Senators, generally reflecting their displeasure at the continuing use of short-term funding measures instead of completing a funding bill for the remainder of the fiscal year.

Conventional wisdom would claim that this three week extension will provide additional time for congressional leaders to work out a compromise funding program for the remainder of this fiscal year so that Congress can start to work on the FY 2012 budget. Unfortunately the current work plan for the House and Senate includes a recess next week during which little work is expected on the spending plan.

There is an interesting blog posting over on the ChemicalProcessing.com web site dealing with the SSP inspection process. It describes a presentation made by Richard Cary, an ISCD Regional Commander (for chemical facility security inspectors), at the recent NPRA Security Conference in Houston, TX. That presentation provided an overview of what facilities can expect from the SSP inspection process, though it isn't clear from the blog whether this referred to the pre-authorization inspections (PAI), the authorization inspection, or (more likely, I expect) both.

Interestingly, the blog post provides a link to the CFATS web site page that deals with the inspection process. Unfortunately that page provides more of an overview of the CFATS process rather than any real information on the inspection process. It would seem to me that this web page would provide a better venue for ISCD to provide detailed inspection planning information to the regulated community.

I have seen similar information posted in other locations, but it is always valuable for facilities to hear what they can expect from the inspection process. It allows them to better prepare for their actual inspections.

One piece of information provided in this blog posting that I haven’t seen elsewhere deals with corporate issues. The blogger (I’m not sure if it’s Ryan Loughin or Steven Partridge, both from ADT) notes that: “One of the first things to know is that DHS will visit your company's headquarters prior to facility inspections. This will help companies with multiple facilities falling under CFATS. The three area commanders will coordinate headquarter visits for companies with sites in more than one region or jurisdiction.”

Not only would this make it easier for management with multiple CFATS sites, but it should make it easier for the CFATS inspectors to check those portions of the security program that are administered at the corporate level. Those could include items like the personnel surety program, customer vetting and order processing.

Wednesday, March 16, 2011

As I mentioned last Friday, PHMSA has finally (this was first addressed in January 2008 – 73 FR 916) published the NPRM for their bulk hazardous material loading and unloading regulations for cargo tank motor vehicles. One of the basic requirements for this new rule is that a risk assessment must be completed for all hazmat tank truck loading and unloading operations.

Who is Responsible

A new section, §177.831, is added to the Hazardous Materials Regulations (HMR) laying new requirements on each "person who loads, unloads, or provides transfer equipment to load or unload a hazardous material to or from a cargo tank motor vehicle". This statement is clear, but its application in the real world might be a tad more difficult. Hazmat loading operations are normally conducted by facility employees, but the person who actually unloads a tank truck at the receiving location may be a facility employee, an employee of the supplier providing the material, or a truck driver working for a third party.

Generally speaking, the hazmat employer responsible for the loading or unloading operations is responsible for conducting the risk assessment. There are, however, confounding factors that may spread some of that responsibility around. Where the hazmat employee conducting the transfer operations is not a facility employee and the facility requires "unique operational procedures", there is a dual responsibility for the assessment. If a facility provides transfer equipment (a hose, for example) for a transfer operation conducted by a separate hazmat employer, there is again a dual responsibility for the conduct of the risk assessment.

In fact, even where facility personnel are responsible for transfer operations, “the motor carrier must conduct a risk assessment and develop operating procedures that are specific to the cargo tank involved in the transfer operation” (76 FR 13319).

It seems that the only case where there is not some sort of dual responsibility for conducting the risk assessment is “where the motor carrier is primarily responsible for the safety of the transfer operation, such as at a business or residence”. The preamble explains that this is typically found at gasoline delivery to commercial gas stations or propane deliveries to homes.

Risk Assessment

Section 177.831(a) describes the newly required risk assessment as "a systematic analysis to identify and evaluate the hazards associated with the specific loading or unloading operation". It subsequently explains that the "analysis must be appropriate to the complexity of the process and the materials involved in the operation" {§177.831(a)(2)} and then lists three specific areas that should be addressed:

“(i) The characteristics and hazards of the material to be loaded or unloaded;

“(ii) Measures necessary to ensure safe handling of the material, such as temperature or pressure controls; and

“(iii) Conditions that could affect the safety of the loading or unloading operation, including access control, lighting, ignition sources, and physical obstructions.”
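The three required areas translate naturally into a checklist structure. The sketch below is one possible way a facility might organize such an assessment; the field names are mine, not PHMSA's, and the NPRM prescribes no particular format.

```python
# A minimal checklist mirroring the three areas proposed §177.831(a)(2)
# says the risk assessment must address. Structure is illustrative only.
REQUIRED_AREAS = {
    "material_hazards":     "(i) characteristics and hazards of the material",
    "safe_handling":        "(ii) measures such as temperature or pressure controls",
    "operation_conditions": "(iii) access control, lighting, ignition sources, obstructions",
}

def missing_areas(assessment):
    """Return the required areas a draft assessment has left unaddressed."""
    return [k for k in REQUIRED_AREAS if not assessment.get(k)]

draft = {"material_hazards": ["flammable", "toxic by inhalation"]}
print(missing_areas(draft))  # the two areas still to be completed
```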

For those unloading operations that are entirely the responsibility of the carrier, PHMSA does not intend for this rule to require a location-specific risk assessment where large numbers of such locations would make that requirement impractical. They specifically state in the preamble to the NPRM that such carriers would not need to "conduct a separate risk assessment of each residence or retail outlet (i.e., gas station) to which it delivers propane or gasoline, but may instead assess the overall risk of such operations and develop operating procedures that apply generally to such operations" (76 FR 13320).

Equipment to be Assessed

The wording of the description of the equipment that is intended to be included in the safety assessment is very important. The new regulatory language states "including [emphasis added] any device in the loading and unloading system that is designed specifically to transfer product between the internal valve on the cargo tank and the first permanent valve on the supply or receiving equipment (e.g., pumps, piping, hoses, connections, etc.)" {§177.831(a)}. [NOTE: there is a missing closing parenthesis at the end of this description in the NPRM.]

It is important to remember that the word 'including' in this context means that the description that follows is not the only definition of the term; it is just a common example being used for illustrative purposes. This is an important distinction. Other equipment could also be part of the required assessment; for example, vent lines allowing for pressure equalization between the cargo tank and the storage tank, gas lines used to pressurize the cargo tank during unloading operations, and loading scales used to gauge the amount of material added to the cargo tank.

Even so, I think the example provided in the current language is overly restrictive. At one facility where I worked we had permanent unloading lines for all of the raw materials that we received into bulk storage tanks. One end of each stainless steel flex line was attached to facility piping by a flanged connection; at the other end of the flex line was a manual valve. A strict interpretation of the exemplar provided in the NPRM would thus limit the area of concern for the assessment to the short section of on-truck piping between the internal valve and the manual valve on the end of the flex line.

Characteristics and Hazards

The term “characteristics and hazards of the material” is going to be very important to the implementation of this regulation. If PHMSA narrowly interprets this terminology to mean those ‘characteristics and hazards’ that are addressed in the HMR then the hazard assessment will be narrowly focused. The resulting assessments will be limited to looking at things like flammability, corrosivity and toxicity; all important characteristics and hazards to be sure.

PHMSA needs to expand the definition of this term to include reactivity, and to expand it beyond the narrow 'no water' definition of reactivity found in the HMR. Numerous unloading incidents occur every year when the wrong material is off-loaded into a storage tank. These reactions can range from the violently exothermic reactions of a strong acid and base, and the reactions that explosively release gases (like bleach and aqua ammonia), to the slower but just as catastrophic polymerization reactions that produce enough heat over time to trigger uncontrollable decomposition reactions.

These types of reactions are not covered in the HMR because they are a minuscule risk in transit. When these reactions happen in loading operations they are limited in scope because the incompatible material being loaded on top of is typically very small in volume relative to the material being loaded. If the catastrophic reaction does take place it seldom moves outside the front gate.

At fixed facilities where material is being added to a storage tank, the possibility of a sufficient quantity of the 'other chemical' being present is greatly increased. The problem is frequently compounded by the fact that the level in the 'wrong tank' has not been properly checked and the tank is overfilled. With hazardous chemicals where the storage tank is vented back to the tank wagon to avoid toxic releases, that overfilling can force the reacting products back into the tank wagon, expanding the hazard area significantly.
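A pre-transfer compatibility check of the kind argued for here can be as simple as a lookup table. The pairs below are just the examples from the text; a real facility would have to build its own matrix from its raw-material list.

```python
# Hypothetical incompatibility table for a pre-unloading check, using the
# reaction pairs mentioned above. Not a complete or authoritative matrix.
INCOMPATIBLE = {
    frozenset({"strong acid", "strong base"}),  # violently exothermic
    frozenset({"bleach", "aqua ammonia"}),      # explosive gas release
}

def transfer_allowed(incoming, tank_contents):
    """Block the transfer if the incoming material reacts with what is
    already in the receiving tank."""
    return frozenset({incoming, tank_contents}) not in INCOMPATIBLE

print(transfer_allowed("bleach", "aqua ammonia"))  # False: incompatible pair
print(transfer_allowed("bleach", "bleach"))        # True: same material
```

Using unordered pairs (frozensets) means the check works regardless of which material is incoming and which is already in the tank.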

Use of Assessment

The importance of this hazard assessment cannot be overstated. The results of this assessment will become the basis for the development of the operating procedures that will be used to ensure that these transfer operations are conducted in a safe manner. I'll look at the requirements for those procedures in a future blog post.

About Me

I spent 15 years in the US Army as an Infantry NCO. After getting out of the Army I started working in the chemical industry, getting my BSc Chemistry degree while working as a technician. I spent 12 years working as a process chemist in a specialty chemical company. Most recently I worked as a QA/R&D Manager in a specialty chemical manufacturing facility. Currently I am working as a freelance writer.