Irish/German co-operation in new technologies creating a paradigm shift in the planning of safety for current and future manufacturing systems.

Presence detection is a critical element in the basis of safety for many pharmaceutical and biopharmaceutical processes. Detecting the presence of workers prior to start-up and during operation of machinery and processes is an effective means of injury prevention. Likewise, product can be protected from human contamination by using collaborative robots allied with relevant 3D presence detection. The pharmaceutical sector has always had to deploy sophisticated processes and technology in its manufacturing environment while maintaining the highest safety standards.

This is an approach which responds positively to the need for worker safety while minimising production disruption. Process components such as centrifuges and barrel mixers pose a significant risk to workers because of their high-speed rotational action or agitation. Likewise, the transportation of storage units such as intermediate bulk containers, and the use of automated wrapping and palletising machinery, create the need for effective safeguarding. 3D sensing systems provide many advantages through the introduction of barrier-free safeguarding.

SafetyEYE, a 3D virtual detection system, provides a comprehensive protection zone around such machinery. Developed jointly by the Pilz Software Research and Development team in Cork (Ireland) and the Product Development division in Ostfildern (Germany), SafetyEYE is regarded by the company as an example of new technologies creating a paradigm shift in the planning of safety for current and future manufacturing systems.

Bob Seward, chair of the IOSH Desmond-South Munster Branch, said: “The development of this innovative SafetyEYE technology will make a significant difference in terms of protecting people at work while they operate around machinery danger zones. Our members were very impressed with SafetyEYE and what it can achieve in terms of accident prevention and safeguarding workers.”

SafetyEYE, the world’s first 3D zone monitoring system, comprises a three-camera sensing device, an analysis unit and programmable control capability.

The sensing unit creates the image data of the zone to be protected, and its stereoscopic cameras allow for precise distance and depth perception. Adjusting the height of the camera device allows for varying zone dimensions and areas of coverage. The image data is processed by the analysis unit to detect any intrusion into the defined 3D protection zone and is relayed to the programmable safety and control system (PSS), which activates the appropriate safety response.

The avoidance of an obstacle course of physical guards has obvious advantages, allowing freer and more ergonomic interaction between machinery and humans without compromising the safety of either. Because the software is highly configurable, a wide range of detection zones can be designed using either pre-defined geometric forms or bespoke shapes. These zones can then be assigned various safety-related actuations graded according to the risk, from an audio-visual warning to a full shutdown.

SafetyEYE can be used to prevent start-up of machinery when persons are in a danger zone, or to provide warnings and, if necessary, activate a shutdown if an operator enters a danger zone while the plant is running. The system can be configured to signal a warning as a worker enters the perimeter of the defined safety zone, and to initiate further safety actions as they continue deeper into the zone. The machine can remain in this suspended state while the worker completes the task, and its activities can resume in step with the worker’s egress from the safety zone. This incremental reactive capability minimises downtime, so optimal productivity is maintained. For workers who only encroach on the outer points of the safety zone, the triggered warning upholds the safety integrity of the workspace without limiting operation. Likewise, the system can be configured to allow pre-defined spaces within the protection zone to be breached without a shutdown. This is especially useful for supervisory personnel who need to access control components which lie within the safety zone; again, they may complete their task safely without the need to disrupt the manufacturing process.

To achieve the same level of safety in such a scenario, a whole range of other safety measures might have to be deployed, such as guard doors, with the physical and visual restrictions these solutions impose. Safety for workers venturing beyond these guards would then require optical sensors which operate two-dimensionally along a plane, and a multiplicity of sensors might be needed to provide comprehensive monitoring. This mix of solutions can carry significant cost implications, and their static single-plane positioning raises costly design challenges. As SafetyEYE is positioned above the manufacturing area, it presents no physical or visual obstruction. It is also far less likely to be interfered with than ground-level safety measures, which are always more vulnerable to intentional or accidental interference. The 3D zonal capability means that one sensor unit can provide far more safety coverage than planar sensors. Such imaging-based devices also have a recording function, so safety-zone breaches can be recorded, or production activity monitored to feed into productivity metrics.

These attributes were acknowledged by Bob Seward of the IOSH when presenting Pilz with the award. “With the introduction of this certified technology, safety can no longer be seen as a barrier to work, slowing work down or stopping work. It can be truly integrated in the work system.”

Pilz Ireland managing director John McAuliffe said: “Pilz were honoured to receive this award. The area of safety in which we work is constantly changing and Pilz need to be innovative in order to provide our customers with solutions that achieve safety in lean manufacturing environments.” Providing services from risk assessment, safety design and safety training to customers all over the world, the company views continuous development of processes and products, such as SafetyEYE, as vital in meeting the constantly evolving demands of the modern manufacturing environment.

The Association for Packaging and Processing Technologies (PMMI) estimates that, by 2018, 34% of primary pharmaceutical operations in North America will be carried out by robots, compared with 21% in 2013. This increasing automation, along with the rapid growth of collaborative robots across all sectors, is heralding a new era of human-robot interaction in manufacturing.

SafetyEYE is especially effective in ensuring the safe deployment of collaborative robots, which are ideal for handling materials and ingredients in a decontaminated environment but which require some level of interaction with operators who need to approach to carry out supervisory, control or intervention tasks.

Such are the potential production efficiencies brought about by collaborative robotics in the bulk pharmaceutical manufacturing sector that health and safety managers, engineers and suppliers will need to align their safety strategy with this new industrial environment.

As with all new technologies, care and due process must be exercised in integration with other plant and machinery. Structured risk assessments that consider the specific hazards, leading to intelligent safety concepts, are the key to the successful adoption of such new technologies. Pilz is pioneering safe automation with the continuous development of its services and products, such as SafetyEYE, ensuring that its customers can anticipate the safety challenges presented by industry developments such as collaborative robots.

Decisions concerning the acquisition of occupational safety monitoring instrumentation are often made by operational staff who may not have visibility of the full financial implications of their choices. This article, by James Carlyle of Ashtead Technology, examines the factors affecting these decisions and explains why a strategic decision to hire instrumentation can deliver substantial and wide-ranging advantages.

Background
The Management of Health and Safety at Work Regulations 1999 (originally introduced in Britain in 1993 in response to an EU Directive) require employers and self-employed people ‘to carry out a suitable and sufficient assessment of the risks for all work activities for the purpose of deciding what measures are necessary for safety.’ However, the risks arising from toxic gases, dust, explosive mixtures and oxygen depletion can be complex and constantly changing. So, in addition to an initial risk assessment, ongoing monitoring is often necessary to ensure the protection of staff and others.

Employers may choose to conduct their own testing and monitoring, or they may prefer to employ the services of professional consultants to conduct the risk assessments. Either way, the employer or the consultant has to decide whether to purchase the instrumentation or to rent it.

The risks
Before examining the ways in which testing and monitoring should be undertaken, it is first necessary to consider the risks that need to be assessed.

Fire and/or explosion can result from an excess of oxygen in the atmosphere (for example, from an oxygen cylinder leak), or an explosion may occur from the ignition of airborne flammable contaminants arising from a leak or spillage from nearby processes.

Toxic gas detection

Toxic gases, fumes or vapours may also arise from leaks and spills, or from disturbed deposits or cleaning processes. Gases and fumes can accumulate in confined spaces such as sewers, manholes and contaminated ground. They can also build up in confined workspaces during welding, flame cutting, lead lining, brush and spray painting, moulding using glass-reinforced plastics, or the use of adhesives or solvents. Carbon monoxide, particulates and hydrocarbons may also become a problem in situations where the products of combustion are not exhausted adequately. Plant failure can also create gaseous hazards: for example, ammonia levels may increase if refrigeration plant fails, or carbon dioxide may accumulate in some pub cellars following leaks from compressed gas cylinders.

Oxygen depletion in workplace air can cause headaches, breathlessness, confusion, fainting and even death. There are many situations in which this can occur; for example:

Workers breathing in confined spaces where replacement air is inadequate

Displacement of air during pipe freezing, for example, with liquid nitrogen

Purging of a confined space with an inert gas to remove flammable or toxic gas, fume, vapour or aerosols


The COSHH definition of a substance hazardous to health includes dust of any kind when present at a concentration in air equal to or greater than 10 mg/m3 8-hour TWA of inhalable dust or 4 mg/m3 8-hour TWA of respirable dust. This means that any dust will be subject to COSHH if people are exposed above these levels. Some dusts have been assigned specific Workplace Exposure Limits (WELs) and exposure to these must comply with the appropriate limit.

Most industrial dusts contain particles with a wide range of size, mass and chemical composition. As a result, their effects on human health vary greatly. However, the Health & Safety Executive (HSE) distinguishes two size fractions for limit-setting purposes termed ‘inhalable’ and ‘respirable’.

Inhalable dust approximates to the fraction of airborne material that enters the nose and mouth during breathing and is therefore available for deposition in the respiratory tract. Respirable dust approximates to the fraction that penetrates to the gaseous exchange region of the lungs. Where dusts contain components that have their own assigned WEL, all the relevant limits should be complied with.
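The 8-hour TWA comparison described above can be sketched as a simple calculation. The following is a minimal illustration, assuming the standard approach of averaging measured concentrations over the 8-hour reference period; the sample concentrations and durations are hypothetical.

```python
def twa_8hr(samples):
    """8-hour time-weighted average from (concentration mg/m3, duration h) pairs.

    Unsampled time within the 8-hour reference period is assumed to
    contribute zero exposure (a simplifying assumption for illustration).
    """
    return sum(conc * hours for conc, hours in samples) / 8.0

# Hypothetical shift: 3 h at 12 mg/m3, 2 h at 6 mg/m3, 3 h unexposed
inhalable_twa = twa_8hr([(12.0, 3), (6.0, 2)])
print(inhalable_twa)           # 6.0 mg/m3
print(inhalable_twa >= 10.0)   # False: below the 10 mg/m3 COSHH trigger for inhalable dust
```

Note that even though the peak concentration (12 mg/m3) exceeds the trigger level, the 8-hour average does not; the limits quoted above are averages, not ceilings, unless a specific WEL says otherwise.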

The financial justification for instrument hire
For most of us, when we need something and funds are available, we buy it. At Ashtead Technology, we challenge that assumption: unless the required instrument is either very low cost or likely to be deployed frequently, it rarely makes sense to purchase the equipment. There are many reasons for this, but the most important is, of course, financial. However, operational staff are not always aware of the full cost of purchase, because the detail is hidden in the company’s accounts.

Capital purchases are generally written off in the company accounts over a three-, four- or five-year period. This means that the cost of ownership is at least 20% of the capital cost per year, and possibly over 33%. There are, of course, other costs of ownership: most instruments require regular maintenance and calibration, which involves further costs in both materials and labour. A gas analyser, for example, requires calibration gases and associated valves and safety equipment; trained staff are needed to ensure that the instrument is calibrated correctly, and consumables such as filters and replacement gases must be bought. The same issues arise with other types of instrumentation, all of which require maintenance by suitably trained and qualified staff. Consequently, the annual cost of instrument ownership can easily exceed 50% of the purchase cost.

Another significant financial cost is the ‘opportunity cost’ of the money that is tied up in a purchase; capital expenditure on equipment represents money that could have been used for other purposes, such as investing in raw materials, staff, training, marketing or new premises. Alternatively, that money could have been invested to deliver a return.
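The rules of thumb above can be put into a back-of-the-envelope comparison. The sketch below uses the article's figures (20-33% annual depreciation plus maintenance pushing total ownership cost past 50% of capital); the capital cost, upkeep fraction, hire rate and usage below are all hypothetical.

```python
def annual_ownership_cost(capital, write_off_years=4, upkeep_fraction=0.25):
    """Annual cost of owning an instrument: straight-line depreciation
    plus an assumed maintenance/calibration/consumables fraction."""
    depreciation = capital / write_off_years   # e.g. 25% per year over 4 years
    upkeep = capital * upkeep_fraction         # calibration gases, filters, labour
    return depreciation + upkeep

def annual_hire_cost(weekly_rate, weeks_used_per_year):
    """Annual cost of hiring only when the instrument is actually needed."""
    return weekly_rate * weeks_used_per_year

capital = 10_000.0
own = annual_ownership_cost(capital)                               # 2500 + 2500 = 5000/yr
hire = annual_hire_cost(weekly_rate=300.0, weeks_used_per_year=6)  # 1800/yr
print(own, hire, hire < own)   # 5000.0 1800.0 True
```

On these assumptions, hiring wins comfortably for occasional use; the balance shifts towards purchase only as the weeks of use per year grow.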

In addition to the financial justification, there are many more reasons to hire…

Renting provides appropriate technology
Once an instrument is purchased, the company is committed to that technology for the next few years, and this can be a major disadvantage. For example, if a company purchases a PID gas detector for the measurement of solvents, it may later find that there is also a requirement to monitor methane. The PID would not be suitable for this, so a second analyser (an FID, for example) would be necessary. Similarly, the company may discover at a later date that solvent speciation is necessary, which, again, the PID could not achieve.

The same principle applies to other applications. For example, if a basic infrared camera is purchased and it later transpires that higher-resolution images are required, a second, more expensive camera would be necessary.

From a corporate perspective, instrument purchase can have negative implications because instruments are often shared among different departments and between different sites. It is unlikely that one technology or one particular instrument can meet everybody’s needs, so each person is likely to seek their own instrument: firstly to ensure that they get the kit that they need, but also so that their access to instrumentation is not limited because it is in use elsewhere. Allowing each person to purchase their own kit might be an extremely costly option, but it does at least encourage ‘ownership’, so that the equipment is properly maintained. In contrast, shared ownership often results in poor maintenance because none of the staff take responsibility for ensuring that the equipment is serviced and maintained correctly.

Renting instrumentation ensures that all staff have continual access to a range of different technologies, so they do not have to ‘make do’ with whatever happens to be available at the time they need it. If a company has purchased an instrument, its staff are more likely to use it ‘because it is there’ rather than because it is the most appropriate technology.

Renting provides access to new technology
One of the problems with buying an instrument is that your technology is then fixed at a moment in time; new instruments are inevitably developed that are better than their predecessors, but once an instrument has been purchased it is not possible to take advantage of them. In contrast, with the benefits of scale, Ashtead is able to invest continually in new technology, so that the rental fleet provides access to the latest instruments and customers can choose those that best meet their needs.

Renting eliminates storage and maintenance costs
One of the common features of all instruments is that they require regular maintenance and in many cases calibration. This is often a skilled activity that requires training and appropriate equipment. Ashtead Technology’s engineers are therefore equipped with all of the necessary equipment to service and maintain every instrument in the rental fleet. They are also trained by manufacturers, so that all instruments can be delivered tested and ready for immediate use. Storage can also represent a cost for the larger pieces of equipment, especially if it is not possible to store the instruments in the same location as the main users.

Technical support from rental companies
Instrumentation is constantly evolving; newer instruments are usually more accurate, more sensitive, faster, lighter, and easier to use. However, the array of instruments available can be bewildering so it is often helpful to discuss options with an Ashtead Technology engineer before making a choice, and then after the instrument is delivered, many customers value telephone support during the setup and operation of the instrument.

Summary
The basic premise behind Ashtead Technology’s business is an intense focus on providing customers with exactly the right equipment at the precise moment that they need it. We therefore seek to become our clients’ instrumentation partner; saving them time and money, and ensuring that they always have access to the best available technologies. This is achieved by:

Continually searching the market, looking for the best technologies from the world’s leading suppliers

Utilising expert knowledge and buying power to ensure that our fleet of instruments includes a broad selection of the best available technologies

Manufacturer training for our engineers

Investing in the equipment, spares and consumables for servicing, calibrating and maintaining the entire instrumentation fleet

These annual technical division symposia bring together innovators, thought leaders and other automation and control professionals from around the world to explore and discuss the latest technologies, practices and trends, and to gain high-value, peer-reviewed technical content across a wide variety of automation fields and disciplines.

Another innovation is the holding of one of these symposia in Europe. The 60th ISA International Instrumentation Symposium will be held in London, England, from 23-27 June 2014. This is possibly the first time one of the main-stream symposia has been held outside North America.

We notice that the 9th Sales & Marketing Summit will be held online, from 9-11 September 2014. Again, this is a first for one of the ISA’s ‘main-line’ conferences.

Moving all or part of SCADA applications to the cloud can cut costs significantly while dramatically increasing reliability and scalability, says Larry Combs, vice president of customer service and support, InduSoft.

Although cloud computing is becoming more common, it’s relatively new for SCADA (supervisory control and data acquisition) applications. Cloud computing provides convenient, on-demand network access to a shared pool of configurable computing resources including networks, servers, storage, applications, and services. These resources can be rapidly provisioned and released with minimal management effort or service provider interaction.

By moving to a cloud-based environment, SCADA providers and users can significantly reduce costs, achieve greater reliability, and enhance functionality. In addition to eliminating the expenses and problems related to the hardware layer of IT infrastructure, cloud-based SCADA enables users to view data on devices like smartphones and tablet computers, and also through SMS text messages and e-mail.

Our company (InduSoft), along with a number of others, provides SCADA software and services for firms that want to use their own IT infrastructure, the cloud, or a combination of both to deploy their applications. We provide upfront consulting and advice to help customers make the best choice depending on their specific requirements and capabilities.

A cloud can be public or private. A public cloud infrastructure is owned by an organization that sells cloud services to the public. A private cloud infrastructure is operated solely for a specific customer; it may be managed by the customer or by a third party, and it may exist on premises or off premises. Hybrid clouds consist of private and public clouds that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability.

Cloud computing can support SCADA applications in two fashions:

The SCADA application is running on-site, directly connected to the control network and delivering information to the cloud where it can be stored and disseminated, or

The SCADA application is running entirely in the cloud and remotely connected to the control network.

Figure 1: A public cloud formation in which the SCADA system is running onsite and delivers data via the cloud

The first method is by far the most common and is illustrated in Figure 1 (right). The control functions of the SCADA application are entirely isolated to the control network. However, the SCADA application is connected to a service in the cloud that provides visualization, reporting, and access to remote users. These applications are commonly implemented using public cloud infrastructures.
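The first pattern, an on-site SCADA node forwarding data to a cloud service, can be sketched as a store-and-forward loop: readings are buffered locally and uploaded in batches, so nothing is lost if the uplink drops. The class below is a minimal illustration only; the tag name, batch size and `publish` callback are hypothetical stand-ins for whatever transport a real deployment uses (typically a vendor API or MQTT over TLS).

```python
from collections import deque

class CloudForwarder:
    """Buffers local SCADA readings and forwards them to the cloud in batches."""

    def __init__(self, publish, batch_size=3):
        self.publish = publish      # callable(list_of_readings) -> bool (True = delivered)
        self.batch_size = batch_size
        self.backlog = deque()      # store-and-forward buffer for unsent readings

    def record(self, tag, value, timestamp):
        self.backlog.append({"tag": tag, "value": value, "ts": timestamp})
        self.flush()

    def flush(self):
        # Send complete batches; if the uplink fails, put the batch back unchanged.
        while len(self.backlog) >= self.batch_size:
            batch = [self.backlog.popleft() for _ in range(self.batch_size)]
            if not self.publish(batch):
                self.backlog.extendleft(reversed(batch))  # restore original order
                break

# Simulated uplink: collect delivered batches in a list and report success.
sent = []
forwarder = CloudForwarder(publish=lambda batch: sent.append(batch) or True)
for i, v in enumerate([1.2, 1.3, 1.4, 1.5]):
    forwarder.record("PT-101", v, i)
print(len(sent), len(forwarder.backlog))   # 1 batch delivered, 1 reading still queued
```

The key design point, matching the architecture in Figure 1, is that control stays entirely on the local network: the cloud only ever receives copies of readings for visualization and reporting, so a lost uplink degrades remote visibility but never the control function.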

The implementation illustrated in Figure 2 (below) is common to distributed SCADA applications where a single, local SCADA deployment is not practical. The controllers are connected via WAN links to the SCADA application running entirely in the cloud. These applications are commonly implemented using private or hybrid cloud architectures.

Service Choices
Most experts divide the services offered by cloud computing into three categories: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).

Figure 2: A private/hybrid cloud in which the controllers are connected via WAN links to the SCADA application running entirely in the cloud.

An IaaS such as Amazon Web Services is the most mature and widespread service model. IaaS enables service provider customers to deploy and run off-the-shelf SCADA software as they would on their own IT infrastructure. IaaS provides on-demand provisioning of virtual servers, storage, networks, and other fundamental computing resources.

Users only pay for capacity used, and can bring additional capacity online as necessary. Consumers don’t manage or control the underlying cloud infrastructure but maintain control over operating systems, storage, deployed applications, and select networking components such as host firewalls.

PaaS, like Microsoft’s Azure or Google Apps, is a set of software and product development tools hosted on the provider’s infrastructure. Developers use these tools to create applications over the Internet. Users don’t manage or control the underlying cloud infrastructure but have control over the deployed applications and application hosting environment configurations. PaaS is used by consumers who develop their own SCADA software and want a common off-the-shelf development and runtime platform.

SaaS, like web-based e-mail, affords consumers the capability to use a provider’s applications running on a cloud infrastructure from various client devices through a thin client interface like a web browser. Consumers don’t manage or control the underlying cloud infrastructure but instead simply pay a fee for use of the application.

SCADA vendors have been slow to adopt the SaaS service model for their core applications. This may change as the uncertainty of cloud computing begins to clear. For now, vendors are beginning to release only certain SCADA application components and functions as SaaS, such as visualization and historical reporting.

Economical Scalability
With all three service models, scalability is dynamic and inexpensive because it doesn’t involve the purchase, deployment, and configuration of new servers and software. If more computing power or data storage is needed, users simply pay on an as-needed basis.

Companies don’t have to purchase redundant hardware and software licenses or create disaster recovery sites they may never use. Instead they can provision new resources on demand when and if they need them. Add in the costs that a company would otherwise incur to manage an IT infrastructure, and the savings of moving to the cloud could be huge.

Instead of numerous servers and backups in different geographic locations, the cloud offers its own redundancy. On-demand resource capacity can be used for better resilience when facing increased service demands or distributed denial of service attacks, and for quicker recovery from serious incidents. The scalability of cloud computing facilities offers greater availability. Companies can provision large data servers for online historical databases, but only pay for the storage they’re using.

Building an IT infrastructure is usually a long-term commitment. Systems can take months to purchase, install, configure, and test. Equivalent cloud resources can be running in as little as a few minutes, and on-demand resources allow for trial-and-error testing.

Taking a snapshot of a known working configuration makes it easy to switch back to a previous state, so changes can be made without having to start from scratch. If a problem occurs when deploying a patch or update, the user can simply revert to the previous configuration.
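The snapshot-and-rollback workflow is simple to illustrate: copy the known-good state before a change, restore it if the change misbehaves. The sketch below is illustrative only; the configuration keys and values are hypothetical, and a real cloud platform snapshots whole virtual machines rather than a settings dictionary.

```python
import copy

class ConfigStore:
    """Toy configuration store with snapshot/rollback semantics."""

    def __init__(self, config):
        self.config = config
        self._snapshots = []

    def snapshot(self):
        # Deep copy so later edits cannot mutate the saved state.
        self._snapshots.append(copy.deepcopy(self.config))

    def rollback(self):
        # Restore the most recent snapshot.
        self.config = self._snapshots.pop()

store = ConfigStore({"poll_interval_s": 5, "alarm_limit": 80})
store.snapshot()                     # capture the known working configuration
store.config["alarm_limit"] = 95     # deploy a change
store.rollback()                     # change misbehaves: revert
print(store.config["alarm_limit"])   # 80
```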

On-site IT projects involve significant cost, resources, and long timelines—and thus include significant risk of failure. Cloud computing deployments can be completed in a few hours with little or no financial and resource commitments, and therefore are much less risky.

A traditional IT infrastructure environment poses the risk that both the primary and the single backup server could fail, leading to complete system failure. In the cloud environment, if one of the cloud computing nodes fails, other nodes take over the function of the failed cloud computing node without a blip.

If a company chooses to implement its own IT infrastructure, access to user data in this infrastructure generally depends on the company’s single Internet provider. If that provider experiences an outage, then users don’t have remote access to the SCADA application. Cloud computing providers have multiple, redundant Internet connections. If users have Internet access, they have access to the SCADA application.

The backup and recovery policies and procedures of a cloud service may be superior to those of a single company’s IT infrastructure, and if copies are maintained in diverse geographic locations as with most cloud providers, may be more robust. Data maintained within a cloud is easily accessible, faster to restore, and often more reliable. Updates and patches are distributed in real time without any user intervention. This saves time and improves system safety by enabling patches to be implemented very quickly.

Challenges and Risks
Cloud computing has many advantages over the traditional IT model. However, some concerns exist in regard to security and other issues. Data stored in the cloud typically resides in a shared environment. Migrating to a public cloud requires a transfer of control to the cloud provider of information as well as system components that were previously under the organization’s direct control. Organizations moving sensitive data into the cloud must therefore determine how these data are to be controlled and kept secure.

Applications and data may face increased risk from network threats that were previously defended against at the perimeter of the organization’s intranet, and from new threats that target exposed interfaces.

Access to organizational data and resources could be exposed inadvertently to other subscribers through a configuration or software error. An attacker could also pose as a subscriber to exploit vulnerabilities from within the cloud environment to gain unauthorized access. Botnets have also been used to launch denial of service attacks against cloud infrastructure providers.

Having to share an infrastructure with unknown outside parties can be a major drawback for some applications, and requires a high level of assurance for the strength of the security mechanisms used for logical separation.

Ultimately to make the whole idea workable, users must trust in the long-term stability of the cloud provider and must trust the cloud provider to be fair in terms of pricing and other contractual matters. Because the cloud provider controls the data to some extent in many implementations, particularly SaaS, it can exert leverage over customers if it chooses to do so.

As with any new technology, these issues must be addressed. But if the correct service model (IaaS, PaaS, or SaaS) and the right provider are selected, the payback can far outweigh the risks and challenges. The cloud’s implementation speed and ability to scale up or down quickly means businesses can react much faster to changing requirements.

The cloud is creating a revolution in SCADA system architecture because it provides very high redundancy, virtually unlimited data storage, and worldwide data access—all at very low cost.

Remote SCADA with Local HMI Look and Feel
Vipond Controls in Calgary provides control system and SCADA solutions to the oil and gas industry, including Bellatrix Exploration. To keep up with customer demand for faster remote data access, Vipond developed iSCADA as a service to deliver a high-performance SCADA experience for each client.

One of the greatest challenges in developing iSCADA was the state of the Internet itself, as protocols and web browsers weren’t designed for real-time data and control. Common complaints from users of previous Internet-based SCADA systems included having to submit a request and then wait, or pressing update or refresh buttons to see new data.

Many systems relied only on web-based technologies to deliver real-time data. Because the HTTP protocol was never designed for real-time control, these systems were always lacking and frustrating to use whenever an operator wanted to change a setpoint or view a process trend.

Users were asking for an Internet-based SCADA system with a local HMI look and feel, and that became the goal of Vipond Controls. This goal was reached with iSCADA as a service by giving each customer an individual virtual machine within Vipond’s server cloud.

All data is now kept safe and independent of other machines running in the cloud. A hypervisor allows multiple operating systems, or guests, to run concurrently on a host computer and manages their execution. The hypervisors are highly available and portable, so in the event of a server failure, the virtual machine can be restarted on another hypervisor within minutes.

All the SCADA software runs within the virtual machine, and users are offered a high degree of personal customization. Customers can connect directly to on-site controllers, and Vipond can also make changes to controllers and troubleshoot process problems.

This cloud-based SCADA solution can reduce end-user costs up to 90% over a traditional SCADA system, thanks to the provision of a third-party managed service and the reduction of investment required for IT and SCADA integration, development, hardware, and software.

Thoughts of an automation thought leader stimulated by the events at the Fukushima nuclear power station in the wake of the tsunami in 2011.

Béla Lipták, a patriotic Hungarian by birth, has done much to pass on his considerable knowledge to the next generation of automation professionals. He is a worthy recipient of the ISA’s Life Achievement Award for his history of dedication to the instrumentation, systems, and automation community, as evidenced by his teachings, writings, and inventions. He has published over 200 technical articles and has written 34 technical books, including four editions of the multi-volume Instrument Engineers’ Handbook.

He has published a series of articles through the years on the role of automation and, more specifically, on how its correct application might have prevented some of the sometimes fatal and always catastrophic nuclear disasters that have occurred down the years, such as those at Three Mile Island (USA) in 1979 and Chernobyl (former USSR) in 1986.

Of course, the most recent such incident is at the Fukushima Power Station, irreparably damaged by the major earthquake and subsequent tsunami in north-east Japan. All of these studies have been published in Control Global or its sister publication, and normally we would simply put a link on our news pages (and indeed may have done so in some cases at the time of publication). This time we are taking this approach because he has published several articles at different times, continuing his thoughts on this major and still alarming disaster.

To help manufacturers and plant and facility operators improve their cybersecurity defenses and better confront the growing dangers of cyberwarfare, the International Society of Automation (ISA) has produced the ISA Cybersecurity Tech Pack.

“The ISA Cybersecurity Tech Pack is an assembly of the latest technical papers, PowerPoint presentations, technical books and InTech articles developed by some of the world’s leading experts in cybersecurity and industrial automation and control systems security,” says Susan Colwell, manager of publications development at ISA. “These materials—which can be downloaded from the ISA website—include the latest cybersecurity strategies, recommendations and tools that can immediately be applied to protect your industrial control systems and process control networks.”

As a widely recognized world leader in cybersecurity standards development, training and educational resources, ISA provides the proven technical expertise and know-how to help safeguard industrial automation and control systems.

For instance, the ANSI/ISA99 (IEC 62443), Industrial Automation and Control Systems Security standards—developed by a cross-section of international cybersecurity subject-matter experts from industry, government and academia—represent a comprehensive approach to cybersecurity in all industry sectors. ISA and its sister organization, the Automation Federation, are currently assisting the Obama administration and US federal agency officials in developing the initial version of a national cybersecurity framework—as called for by President Obama in February of this year.

This paper, by Gasmet’s Antti Heikkilä, describes how sophisticated gas analysis is being used to check these cargo containers, but this is just one example of the advantages available from an analytical technology that can measure almost any gas. Antti Heikkilä is a senior manager at Gasmet Europe Oy, specialising in developing new applications for the Gasmet FTIR gas analyzers. He holds an MSc degree in Physical Chemistry and has 14 years’ expertise in FTIR spectrometry and quantitative gas analysis, working for the University of Helsinki and the Gasmet Technologies group.

Introduction
Entry to freight containers represents a significant hazard to staff responsible for inspection, stuffing or destuffing because of the large number of airborne chemicals that can be present. Research in Germany and the Netherlands found hazardous levels of gases and vapours in around 20% of all containers, and this level of contamination is now accepted as commonplace.


It is therefore necessary to examine containers before entry, and this work is usually conducted with a wide variety of gas detection techniques in order to assess, individually, all of the substances of greatest concern. However, a Dutch firm of health and safety consultants, Reaktie, has employed FTIR (Fourier Transform Infrared) gas analysis to dramatically improve the speed and effectiveness with which containers are assessed, because this technology enables the simultaneous measurement of the 50 gases of most concern.

Chemical Hazards
There are two potential sources of hazardous chemicals inside cargo containers: fumigants, and chemicals arising from the goods or packing materials.

Fumigants are applied to goods to control pests and micro-organisms. Cargoes most likely to have been fumigated include foodstuffs, leather goods, handicrafts, textiles, timber or cane furniture, luxury vehicles and cargo in timber cases or on timber pallets from Asia.

According to the IMO’s international regulations, ‘Recommendations on the safe use of pesticides in ships’, fumigated containers and ship cargoes must be labelled giving specifications about dates of fumigation and the fumigation gas used. Furthermore, appropriate certificates are necessary and these records have to be forwarded to the Port Health Authorities without their explicitly asking for them. However, absence of marking cannot be taken to mean fumigants are not present. Containers marked as having been ventilated after fumigation may also contain fumigant that was absorbed by the cargo and released during transit. There is also concern that fumigants may be retained in the goods and subsequently present a hazard to logistics providers, retail staff and consumers.

Common fumigants include chloropicrin, methyl bromide, ethylene dibromide, sulfuryl fluoride and phosphine. However, with over 20 years of experience testing gases in containers, Peter Broersma from Reaktie says: “While the fumigants are highly toxic, the number of containers exceeding occupational exposure limits (OEL) due to other chemicals is much greater, and the number of ‘failed’ containers is likely to rise as more containers are tested, detection methods improve and new gases are identified.”

Containers often travel for extended periods and experience a wide range of temperatures. It is therefore not surprising that unsafe levels of gases should accumulate in the confined space of a container. Peter identifies the typical sources of gases over their OELs as follows:

Formaldehyde found in cheap furniture (plywood, MDF, etc.) but also in used pallets and lashing materials

Solvents and formaldehyde from poly-resin products

Carbon monoxide from charcoal and natural products

Carbon dioxide from natural products

Ethylene oxide from medical equipment sterilised with ethylene oxide

Solvents including Benzene, Toluene, Ethylbenzene and Xylene (BTEX) in Christmas and decoration products

Flammable gases from disposable lighters

Ammonia in household equipment with Bakelite parts

Volatile Organic Compounds (VOCs) from fire blocks

Pentanes and hexanes from consumer electronics

Phosphine/arsine from natural minerals such as ferrosilicon

Inspection procedures
Major ports have strict regulations in place to protect against potential hazards in cargo containers. In general terms, every incoming stream of products has to be checked for dangerous gases, and if one or more gases are detected during the preliminary investigation, all of the containers from that specific producer must be checked. If no gases are detected, it may be possible to conduct only random tests a few times per year. Before Customs staff enter a container, it must first be tested and, if necessary, de-gassed.
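The screening rule described above can be sketched as a simple decision function. This is purely illustrative, with hypothetical names and sample sizes, and is not drawn from any port authority's actual software:

```python
import random

def select_for_testing(shipment, preliminary_gas_detected, random_sample_size=2):
    """Apply the screening rule: if the preliminary check of a product
    stream detects one or more gases, every container from that producer
    is tested; otherwise only a small random sample is checked."""
    if preliminary_gas_detected:
        return list(shipment)  # test all containers from this producer
    return random.sample(shipment, min(random_sample_size, len(shipment)))

# Example: a producer whose preliminary check detected gas
shipment = ["MSKU100", "MSKU101", "MSKU102"]
print(select_for_testing(shipment, preliminary_gas_detected=True))
# → ['MSKU100', 'MSKU101', 'MSKU102']
```

The point of the sketch is only that the sampling intensity is conditional on the preliminary result, which is what keeps routine inspection workloads manageable.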

Gas detection
Since there are a large number of gases that might be present inside a container, the traditional approach to monitoring has been either to employ a wide range of instruments or to use chemical stain tubes for the most common gases, or a combination of both.

Chemical stain tubes provide a colorimetric assessment of an individual gas, typically with an accuracy of +/- 15%. Different tubes are available for many gases, and results can be obtained in between 5 seconds and 15 minutes depending on the test. Once a result has been obtained, the tube itself is hazardous waste and must be disposed of. Historically, stain tubes have been popular because the cost per test is low. However, the number of tubes required to demonstrate that a container is safe can make this approach prohibitively expensive and time-consuming.

Instrumental gas analyzers such as electrochemical sensors, which measure either a single gas or a small number of gases, carry a similar level of risk to stain tubes because of the possibility of missing, or failing to measure, a harmful gas. Deploying multiple instruments also presents practical problems because each requires maintenance and re-calibration in addition to a power source or re-charging. Nevertheless, Reaktie, for example, would normally conduct a preliminary assessment with a PID gas detector for total VOCs; an LEL combustible gas sensor and handheld electrochemical sensors might be employed for toxic gases such as carbon monoxide, phosphine, ammonia and ethylene oxide. An FTIR analyser would then be employed to measure 50 target gases simultaneously in a test taking approximately 3 minutes. This ability to measure compounds individually is important because, for example, whilst a PID gas detector measures total VOCs, it does not provide an individual value for, say, benzene, which is a known carcinogen.

One of the potential problems with electrochemical sensors is their inability to cope with high concentrations in a sample gas. This can result in poisoning of the cell, which would normally result in instrument failure. In contrast, similar high concentrations do not harm FTIR, and the instrument can recommence analysis after a few minutes of backflushing.

Gasmet DX4040

Peter Broersma has been one of the first to utilise FTIR in the assessment of containers since it first became possible to acquire the technology in a portable, battery-powered unit. He says: “The problem of hazardous gases in cargo containers is now widely publicised, and the requirement for testing is growing as employers fulfil their responsibility to protect the health and welfare of staff. However, the traditional testing methods are laborious, time-consuming and risk failing to find a potentially harmful gas.
“FTIR has long been established as an accurate technology for the simultaneous measurement of gaseous emissions from industrial processes, so when the Finnish company Gasmet developed a portable version we were very eager to investigate its feasibility in container testing.
“Following our initial tests, we worked with Gasmet to develop a configuration for the portable FTIR (a Gasmet DX4030) that would measure the 50 compounds of greatest concern. As a result, we are now able to test for all of these gases in around 3 minutes, which dramatically lowers the time taken for container inspection and greatly increases the number of containers that can be examined every day.
“A further major advantage of this technology is the minimal amount of calibration and maintenance that is necessary. A new instrument can be delivered pre-configured and factory calibrated and from then on the only calibration required is a quick zero check with nitrogen once or twice per day. As a result, it is not necessary to transport a large number of expensive, bulky calibration bottles.
“We now use a portable FTIR for all of our container examination work and we have also supplied a number of these units to freight companies that wish to conduct their own testing. This technology is now in use at Rotterdam, Amsterdam, Vlissingen, Antwerp and Hamburg, and a company providing ship fumigation and degassing is using portable FTIR all over the world.”

Fourier Transform Infrared (FTIR)
An FTIR spectrometer collects an ‘interferogram’ of the sample signal with an interferometer, which measures all infrared frequencies simultaneously; a Fourier transform of the interferogram then produces the spectrum.
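The principle can be illustrated numerically: each infrared frequency contributes a cosine to the interferogram, and the Fourier transform separates them back into a spectrum. The following is a synthetic sketch of that idea only, not Gasmet's actual processing chain:

```python
import numpy as np

n = 1024
x = np.arange(n)  # optical path difference (arbitrary units)

# Synthetic interferogram from two spectral lines at "frequencies" 50 and 120,
# with amplitudes 1.0 and 0.5 — every frequency adds a cosine.
interferogram = (np.cos(2 * np.pi * 50 * x / n)
                 + 0.5 * np.cos(2 * np.pi * 120 * x / n))

# The Fourier transform recovers the spectrum: one peak per spectral line.
spectrum = np.abs(np.fft.rfft(interferogram)) / (n / 2)

peaks = np.argsort(spectrum)[-2:]  # the two strongest spectral lines
print(sorted(peaks))               # → [50, 120]
```

With a measured interferogram in place of the synthetic one, the same transform yields the absorption spectrum that is then matched against a reference library.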

Over a number of years, Gasmet has established a library of reference spectra that now extends to simultaneous quantification of 50 gases or identification of unknowns from a collection of 5000+ gases. This means that it is possible to reanalyze the spectra produced with the instrument’s PC-based software (Calcmet) and thereby to identify unknown gases – a major advantage of FTIR.

Whilst FTIR is able to analyse an enormous number of gases, the technique is not suitable for inert gases, homonuclear diatomic gases (e.g. N2, Cl2, H2, F2, etc.) or H2S (detection limit too high).

High levels of accuracy and low levels of maintenance are achieved as a result of continuous calibration with a He-Ne laser, which provides a stable wavenumber scale. In addition, high spectral signal to noise ratio and high wavenumber precision are characteristic of the FTIR method. This yields high analytical sensitivity, accuracy and precision.

Summary
Millions of containers arrive in international ports every year and it is clear that a large proportion of them represent a significant hazard. Employers have a duty of care to protect their staff, and court cases have found in favour of workers who have suffered ill-health from container gases. It is inevitable, therefore, that the amount of testing required will continue to increase, so there will be a greater emphasis on speed, risk reduction and cost.

Portable FTIR gas analysers substantially reduce the amount of equipment required to test a container, but more importantly, the technology enables the simultaneous analysis of a large number of target compounds, which improves the effectiveness of the assessment and reduces risk to staff. The technique is also much faster and avoids the use of disposable equipment.