As long as there have been hydrocarbon processing industry (HPI) facilities to process crude oil and
intermediates, there have been instruments in place to assist
plant operators in measuring, recording and controlling
pressures, flows, levels, temperatures and other process
variables. Initially, these were (by today's standards, at
least) crude, Rube Goldberg-like instruments
utilizing ingenious mechanical and/or pneumatic mechanisms.

In the early petroleum refineries and petrochemical plants,
many of the control concepts conceived during the Industrial
Revolution of the 18th and 19th centuries were further
developed, refined and proven. It would be exceedingly
difficult, if not impossible, to operate a typical present-day
refinery or petrochemical plant without good,
closed-loop process control. The continuous, generally
steady-state nature of processing liquid petroleum feedstocks and intermediates lends
itself to closed-loop feedback control, using a standard set of
measurement, control and final control/actuation instruments.
However, with so many process variables, interactions and
nonlinearities involved, the process can overwhelm the human
mind. Additional challenges include more complex processes and
plants, less uniform feedstocks, increasing use of
unconventional feedstocks, variable energy costs,
and an increasingly difficult regulatory environment. These further increase
the reliance on automation and help explain the universal
acceptance of sophisticated process control and automation
systems in todays plants.

Anatomy of control

While it hasnt always been the case, process control
isfor the most partan exacting science made
possible by continuous advancements in control theory,
processing technologies and process-control instrumentation and
systems. These developments enable not just individual control
loops, but entire units, plants, and even integrated
petrochemical complexes to be operated in a close-to-optimum
manner (with "optimum" determined by product cost,
quality, yields, throughput and so on). Rather than those
quaint, if ingenious, mechanical and pneumatic instruments
utilizing basic feedback control, current process measurement
and control in HPI plants are performed by networked field
instrumentation with more onboard intelligence than early
mainframe computers and computer-based process automation
systems that are many times more powerful and capable than the
NASA control centers that sent the first men to the moon.
This article will briefly trace how we got from point
A to point B.

The early years

Modern process control instrumentation evolved from the
basic instruments and devices developed to control prime
movers, such as James Watt's steam engine, during the
Industrial Revolution. In the 1850s, following the revelation
that crude oil could be refined via distillation into kerosine for
lighting purposes, which ended the whale-oil industry, the
first petroleum refineries were constructed in Europe and the US. Sensing the
opportunity, a handful of instrumentation companies (Honeywell,
Fisher, Foxboro, Bailey, Bristol, Taylor, Brown, etc.) began to
adapt their temperature, level and pressure gauges, and
pen-based, mechanical circular chart recorders to meet the
basic measurement and control needs of the early refineries.
During this industrial period, control was purely manual, with
field operators monitoring the gauges, taking notes and making
any needed process adjustments by manually opening or
throttling valves. Often, this meant the operator had to move
around quite a bit, including climbing up to places that are
restricted under current OSHA rules.

In the late 1890s and early 1900s, refineries began to
implement automated feedback control using
direct-connected, pneumatically operated instrumentation
built from ingenious combinations of nozzles, flapper valves,
bellows, springs and other mechanisms, all powered by
compressed air. This provided a reasonable degree of on-off
control. Separate indicators and chart recorders were often
used to provide the human interface and record-keeping
functionalities. By combining basic mechanical pressure, level,
flow and temperature measurement instrumentation with
field-mounted pneumatic controllers and actuator-driven valves,
closed-loop feedback control became possible. Fisher Controls
(now part of Emerson Process Management) introduced its
field-mounted Wizard pneumatic controller in 1930, as shown in
Fig. 1A.

More sophisticated, large-case field-mounted pneumatic
instruments incorporating control, indicating and recording
functions began to appear on the scene around 1915. Initially,
these just provided on/off and/or proportional control
capabilities. Foxboro (now part of Invensys Operations
Management) introduced the Model 40, the first
proportional-plus-integral controller, in 1934-1935. In
1941, Taylor Instruments (now part of ABB) introduced the
Fulscope 100, the first controller to provide full
proportional/integral/derivative (PID) control capability in a
single unit. At present, PID methods remain the workhorse of
process control in refineries, petrochemical plants and other
process plants around the world.
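The PID algorithm that those early pneumatic controllers mechanized can be sketched in a few lines of code. The gains, time step and first-order process model below are illustrative assumptions chosen for the example, not values from any real loop:

```python
# Minimal discrete PID controller sketch (positional form).
# Gains, time step and the simulated process are illustrative only.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        # Controller output = P + I + D terms
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple first-order lag process (time constant tau = 1.0)
# from an initial value of 20.0 toward a setpoint of 50.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
pv = 20.0  # process variable, e.g. a temperature reading
for _ in range(500):
    out = pid.update(50.0, pv)
    pv += (out - pv) * 0.1 / 1.0  # first-order process response

print(round(pv, 2))  # settles near the 50.0 setpoint
```

The integral term is what eliminates steady-state offset, which is why the proportional-plus-integral Model 40 was such an advance over proportional-only controllers.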

To avoid tampering by operators who were worried about
keeping their jobs in the face of all this
automation, instrument suppliers had to start
putting locks on the cases to keep employees out. While this
practice certainly helped secure the instruments'
integrity, it created many problems when the instrument had to
be adjusted or repaired, especially when no one could find the
key!

While relatively primitive compared to today's digital
controllers, these early mechanical/pneumatic instruments did a
surprisingly good job of controlling process variables. They
were so reliable that a number of them are still operating in
some older refineries and petrochemical plants.

Move from stand-alone instrumentation to control rooms

A major breakthrough in process control occurred around 1938
with the introduction of pneumatic transmitters and large-case
instruments modified to accept pneumatically transmitted
signals from field-mounted transmitters and then sending
pneumatic control signals back to valve actuators. For the
first time, this made it possible to physically
separate the process-measurement instrumentation from the
recording/indicating/controlling instrumentation. This led to
the appearance of local control rooms in refineries and other
process plants, as shown in Fig. 2.

In some cases, these local control rooms were located up to
several hundred feet away from the processing units (but no
further, due to the distance limitations of the pneumatic
signals). With this instrumentation, control room operators
could remotely monitor process variables, setpoints and valve
outputs, and switch between automatic and manual control. To
ensure that different suppliers' instrumentation would
function properly together, the industry soon established the
3-psi to 15-psi standard signal range for pneumatic
transmission, which remains in effect today.

Since control room space in HPI plants is usually limited
and always expensive to build, following World War II (WWII),
instrumentation suppliers focused on reducing the size of the
instruments mounted in the control room. The resulting
miniaturized controllers typically measured
approximately 6-in. by 6-in. on the front faceplate, complete
with a built-in indicator. With these smaller instruments, it
now became practical to embed the indicators, controllers and
recorders in appropriate locations on wall-sized graphical
diagrams, as shown in Fig. 3. These diagrams
illustrated the process unit, providing operators with a more
intuitive sense of how the instrumentation related to the
process. While these graphic panels helped reduce training
requirements and enabled operators to monitor process
operations more effectively, they still required fairly large
control rooms. This led to the development of
semi-graphic panels. These graphic displays used
less space and still provided much of the intuitiveness of full
graphic panels.

On the sensor side, suppliers began introducing a number of
measurement products during the invigorating post-WWII years
that would see wide applicability in the HPI. These included
the first pneumatic-differential pressure transmitter,
introduced by Foxboro in 1948. In conjunction with a simple
flange-mounted orifice plate, this provided a practical and
low-cost method to obtain accurate and repeatable fluid flow
measurements. In 1956, Beckman Instruments introduced the first
gas chromatograph for chemical analysis based on earlier
research by A. T. James and A. J. P. Martin.

At around this same time, we started seeing more analog
electronic instruments appearing in refinery control rooms.
They were often interfaced to existing pneumatic instruments
using current-to-pressure (I/P) and pressure-to-current (P/I)
converters. In 1951, the Swartwout Co. introduced its AutroniC,
the first electronic controller to use vacuum tubes. At the
1958 Instrument Society of America (ISA) show in Philadelphia,
Pennsylvania, Foxboro, Taylor Instruments, Honeywell, and Leeds
& Northrup (now part of Honeywell) all demonstrated
electronic controllers. In 1959, Bailey Controls (now part of
ABB) introduced the first fully solid-state electronic
controller, followed shortly by several other instrumentation
suppliers. During these years, we also began to see the shift
from single-loop to multi-loop electronic controllers. In 1952,
several engineers at Shell Development also presented the
feasibility of direct digital control (DDC) in the
Transactions of ASME (American Society of Mechanical
Engineers).

Direct digital control and the dawn of CIM

Exciting news came in March 1959 with the announcement
that, following almost two and a half years of
effort, Texaco and the Thompson Ramo Wooldridge (TRW) Co. had
installed the first direct digital control computer online in a
refinery, as shown in Fig. 4. This heralded
what would later become known as the computer-integrated
manufacturing (CIM) era for the HPI. An excellent article
entitled "Texaco closes the loop," which appeared in
Business Week on April 4, 1959, chronicled the drama:
"Shortly before 11 a.m. on March 12, a veteran Texas Co.
process operator named Marvin Voight flipped the switch ... The
action closed the loop in the first fully automatic,
computer-controlled industrial process. Moments later, the most
vital parts of the 1,800-bpd polymerization unit at
Texaco's Port Arthur (Texas) refinery were under the
unblinking eye and almost instantaneous control of a Thompson
Ramo Wooldridge Corp. RW-300, a desk-size digital computer
designed for just such control jobs as this. Texaco hopes the
computer will raise the plant's efficiency by a healthy 6%
to 10%."

In addition to TRW, which contributed the computer, the
Bristol Co. (now part of Emerson Process Management) redesigned
its recording controllers to interface with the computer. Leeds
& Northrup supplied onstream analyzers to chart the
chemical content of the raw material and product streams.

The description of the computer's function provided by
Charles Richker, Texaco's chief process engineer at the
time, doesn't sound all that different from that for a
present-day optimization project: "The computer ... gets
an analysis of incoming gas and outgoing gas; it senses and
measures pressure, flows and temperatures; it calculates
catalyst activity; then it weighs all these together and
decides what the processing unit should do to get the most
product for the least cost. Finally, it sets the controls and
rechecks its figuring."

According to Business Week, the computer cost
$98,000 (in 1959 dollars); the custom I/O required to convert
analog measurement signals to "digital language" cost $36,000;
and, not surprisingly, the expense for engineering and
extra instrumentation was more than double the combined capital
cost of the computer and I/O hardware. So how did Texaco
cost-justify this major (for 1959) $300,000 science project? To
begin with, apparently, the company would have spent at least
one-third of that on new instrumentation for the polymerization
plant anyway. In hard terms, the company anticipated that the
new computer would boost conversion efficiency from the
85% to 87% considered the maximum for the most skilled
operators using automatic controllers, up to 93%, while saving up
to $75,000/yr by prolonging catalyst life. Based on this
information, Texaco expected an early payout on its
investment. In soft terms, according to a Texaco executive, the
company also expected to gain invaluable knowledge and
experience from full-scale operation.

Not surprisingly, the familiar question of whether all this
automation would make the human operator obsolete frequently
came up during the project. But, obviously, that's not the
case. While the computer did the dull, repetitive work of
reading, calculating and resetting, if something went
amiss, it would sound an alarm to which a human operator
would have to respond to handle the situation.

Following this initial direct digital control implementation
at Texacos Port Arthur refinery, TRW installed an RW-300
DDC computer at Monsantos new Chocolate Bayou, Texas,
petrochemical plant in 1960. During the same approximate time
period, IBM installed its first special-purpose computer for
process control, the IBM 1700, at an American Oil refinery in
Indiana, as shown in Fig. 5; at a Standard Oil of California
refinery; and (in 1962) at a DuPont chemical plant. In 1961, IBM announced
its first standard computer for process control, the 1710
model. In the 1960s, Foxboro introduced several digital
systems, including the M9700 process computer and its Digital
Equipment Corp. (DEC) PCP-88-based DDC system, which was
installed at the Esso Aruba Refinery. The PCP-88 incorporated
dual DEC PDP-8s with a shared disk drive.

While much of the supplier activity during this time focused
on process control computers, automation suppliers also had to
figure out how to interface the installed base of largely
pneumatic field transmitters and actuators with their
new-fangled electronic controllers. In 1959, Honeywell
introduced the 4-mA to 20-mA analog signal, which, in
conjunction with P/I converters mounted in the control room,
provided the interface between the company's pneumatic
field instrumentation and electronic controllers. Ultimately, 4
mA to 20 mA won out over Foxboro's proposed 10 mA to 50 mA
signal as an industry standard (ISA SP-50) for analog field
communications.
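Both the 4-mA to 20-mA electronic standard and the 3-psi to 15-psi pneumatic standard it echoed use a "live zero": the bottom of the span is offset from zero so that a dead signal (0 mA, 0 psi) is distinguishable from a valid minimum reading. A small, hypothetical helper function illustrates the shared linear scaling; the transmitter range used in the example is an assumption for illustration:

```python
# Hypothetical helper: map a live-zero transmitter signal to
# engineering units. Works identically for 4-20 mA and 3-15 psi.

def signal_to_pv(signal, sig_lo, sig_hi, pv_lo, pv_hi):
    """Linearly scale a raw signal (mA or psi) to a process variable."""
    frac = (signal - sig_lo) / (sig_hi - sig_lo)  # fraction of span, 0..1
    return pv_lo + frac * (pv_hi - pv_lo)

# A 12-mA reading on an assumed 0-200 degC temperature transmitter
# is 50% of span, i.e. 100 degC:
print(signal_to_pv(12.0, 4.0, 20.0, 0.0, 200.0))
# The same arithmetic applies to the older pneumatic range,
# where 9 psi is also 50% of span:
print(signal_to_pv(9.0, 3.0, 15.0, 0.0, 200.0))
```

The live zero also doubles as a crude diagnostic: a reading below 4 mA (or 3 psi) signals a broken wire or failed instrument rather than a low process value.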

In 1965, DEC introduced its first minicomputer, the PDP-8.
Eventually, the company supplanted this with the PDP-11, which
was used widely for real-time process control applications. In
19681969, Honeywell introduced the Series 16 DDC, with a
modular hardware/software package. In the 1970s, Bailey
Controls and Taylor Instruments introduced their own DDC
systems. Rather than build the computers themselves, these
suppliers based their early process control systems on DEC,
MODCOMP, Data General, and other companies' minicomputers,
using purpose-built software and I/O. In 1971, Foxboro introduced its
Fox 1, the first in a popular series of process control
computers and, in 1972, its SPEC 200 split-architecture analog
electronic controllers and INTERSPEC digital data highway. Also
in 1972, Fisher Controls introduced its Series 1000
split-architecture system, with separate controllers and
faceplates. In 1973, Taylor introduced real-time programming to
the control industry with the company's process-oriented
language (POL), an adaptation of BASIC, first used
on the Taylor 1010 and MOD 3000 control systems. Also in this
general time frame, process control engineers started
experimenting with reusable control-block structures, which
arguably formed the basis for today's ubiquitous
object-oriented programming techniques.

The DCS era

Thanks to continuing improvements in solid-state
microprocessors and digital communications, automation
suppliers were able to squeeze ever-more-powerful functionality
into their electronic devices and systems. This led to the
development of the distributed control system (DCS). While some
might challenge this point, it's generally accepted that
Honeywell coined the phrase and introduced the first DCS, the
TDC 2000 total distributed control system, in 1975. At just about
the same time, Yokogawa introduced the company's CENTUM
DCS.

Despite their high cost, TDC 2000 and CENTUM received strong
acceptance within the HPI, particularly in North America and
Japan. Within the next several years, several other companies,
including Bailey Controls, Fisher Controls, Fischer &
Porter (now part of ABB), Taylor Instruments, and Foxboro
introduced their own DCSs. The Foxboro SPECTRUM DCS began to
show up in refineries and petrochemical plants around the
world, providing strong competition for Honeywell and Yokogawa.
Yamatake, which shared some intellectual property with
Honeywell and manufactured many of the TDC 2000 components,
also marketed the system in Japan.

Unlike the monolithic DDC systems that it replaced, the DCS
distributes much of the functionality across
multiple processors, helping to minimize the impact of any
single failure on the plant's ability to produce product. In theory, at
least, the DCS architecture also moved some of the control
functionality closer to the process to minimize latencies. The
microprocessor-based, multi-loop controllers were connected via
a proprietary data highway to supervisory computers, floppy
disk drives, CRT-based operator displays, push-button-equipped
workstations and line printers, now often located in a central
(rather than local) control room. In practice,
however, the harsh environmental conditions in HPI facilities
required that both the process controllers and I/O be mounted in
air-conditioned rack rooms, often located fairly close to, if
not immediately adjacent to, the central control room.

While DCSs offered far more control and real-time
information handling capabilities and other functionalities
than previously available, they were not without their obvious
flaws. For example, while the CRT-based operator displays
provided control-room operators with a remote view of one or
more process units while seated in front of a workstation in
the control room, the computer displays lacked the
intuitiveness of the full- and semi-graphic panel boards that
they supplanted. This increased training requirements and often
led operators to switch from automatic to manual control
because they just didn't trust the computer. Also, as
operators in process plants know all too well, since it's
relatively easy and inexpensive to configure process alarms in
software-based DCSs (compared to hard-wired annunciators),
there was also a tendency to configure unnecessary and often
confusing alarms, leading to often-terrifying "alarm
storms."

In 1977, Honeywell introduced the first redundancy scheme
for a process controller, and, in the early 1980s, following
several years of development and an investment of approximately
$80 million, Honeywell introduced the company's
second-generation DCS, the TDC 3000. This system offered more
powerful controllers, new workstations, enhanced information
management and other important features. According to some
sources, Esso installed the first TDC 3000 system at the
companys Cold Lake Refinery in Alberta, Canada.
Throughout the 1980s, all DCS suppliers continued to enhance
their systems with new control and information management
capabilities, making the DCS the de facto platform for process
control.

Honeywell introduced the first "smart" pressure
transmitter, the ST3000, in 1983, followed shortly by Foxboro,
Yokogawa, Rosemount and others. When combined with the
respective suppliers' proprietary digital field
communications scheme, these smart pressure, temperature and
flow transmitters improved performance over analog transmitters
by transmitting the process variable(s) and often secondary
measurements (such as ambient temperature, which is vital in
colder climates) in a precise digital format; allowed the
transmitters to be re-ranged remotely; and gave operators
and maintenance technicians remote (if
relatively crude) access to transmitter status and diagnostics.
This eliminated many unnecessary trips to the field.

In 1989, 30 years after the first DDC computer went online
at Texacos Port Arthur refinery, the Purdue Reference
Model for Computer Integrated Manufacturing was published. This
evolved into today's ISA 95 architectural model and schema
for plant-to-enterprise integration.

APC pushes the boundaries of economics

The improved visibility into the process and robust PID and
advanced regulatory control capabilities provided by many DDC
and DCS platforms helped operators and control engineers in HPI
plants to stabilize control loops to a considerable degree and
also solve other problems. Recognizing the opportunity that
advanced regulatory control offered to help stabilize some of
their trickier, more interactive control loops, process control
engineers began to take greater advantage of these embedded
capabilities, often experimenting on their own to further
expand the envelope.

Model predictive control (MPC) was pioneered largely by
dedicated groups of control engineers at Shell (including both
Charlie Cutler and Steve Treiber) and other energy companies
beginning in the early 1970s. In the late 1980s, Shell Research
engineers in France developed the Shell Multivariable
Optimizing Controller (SMOC), a significant advancement. A
handful of small specialist companies, such as DMCC, Setpoint
and Treiber Controls (all three subsequently acquired by
AspenTech), plus Predictive Control in the UK and Profimatics
also began to develop, refine and license MPC technology. Not to be outdone,
control gurus at the major DCS suppliers (Honeywell, Foxboro,
Yokogawa, etc.) either began to develop their own MPC
solutions or acquired and further developed
licensed technology from third parties.

The resulting breakthroughs in MPC helped solve the
previously daunting multivariable constraint problems
encountered in many HPI processes. Advanced process control
(APC) software systems such as these, which typically ran in
separate supervisory computers, provided the DCS controllers
with the precise setpoints needed to further stabilize the
process, reduce variability, and safely operate processes
closer to physical constraints. Assuming that the plant process
control operators trusted the APC enough to keep it turned on
(which was not always the case), this typically provided
owner-operators with significant economic benefit.
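The receding-horizon idea at the heart of MPC can be sketched for a toy, single-variable process: predict the response over a finite horizon using a plant model, choose the sequence of future moves that best tracks the setpoint, apply only the first move, then repeat. The plant model, horizon and weights below are illustrative assumptions, not anything from a commercial APC package:

```python
import numpy as np

# Toy model predictive control sketch for an assumed first-order
# process x[k+1] = a*x[k] + b*u[k]. All numbers are illustrative.

a, b = 0.9, 0.2      # assumed plant model
N = 10               # prediction horizon
lam = 0.05           # move-suppression weight
setpoint = 1.0

def mpc_step(x0):
    # Predicted states are linear in the future moves u[0..N-1]:
    #   x[k] = a^k * x0 + sum_j a^(k-1-j) * b * u[j]
    # Stack as x = F*x0 + G*u and solve the regularized least-squares
    # problem  min ||F*x0 + G*u - setpoint||^2 + lam*||u||^2.
    F = np.array([a**k for k in range(1, N + 1)])
    G = np.zeros((N, N))
    for k in range(N):
        for j in range(k + 1):
            G[k, j] = a**(k - j) * b
    A = np.vstack([G, np.sqrt(lam) * np.eye(N)])
    y = np.concatenate([setpoint - F * x0, np.zeros(N)])
    u = np.linalg.lstsq(A, y, rcond=None)[0]
    return u[0]      # receding horizon: apply only the first move

# Closed-loop simulation from x = 0; x settles close to the setpoint
x = 0.0
for _ in range(50):
    x = a * x + b * mpc_step(x)
print(round(x, 3))
```

Real MPC packages add the crucial pieces this sketch omits: hard constraints on inputs and outputs, multivariable interaction models identified from step tests, and economic weighting, which is precisely what made the multivariable constraint problems tractable.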

Open control and real-time information systems

While they represented a step change in process control
technology over the all-in-one DDC systems and stand-alone
analog electronic controllers, DCSs were handicapped by their
closed, proprietary nature. This tended to speed obsolescence
and make it difficult and costly to integrate the DCSs with
other plant- or enterprise-level systems. Seeking to gain every
possible competitive advantage, DCS suppliers were loath to
share their proprietary communication technologies with other
suppliers, or even to open up their software code to their
customers. This was particularly troublesome in HPI
enterprises, where lots of data and information need to flow
back and forth between the plant-level systems used to produce
products (DCS) and the enterprise-level planning and scheduling
systems.

However, during the 1980s, IBM, DEC, Microsoft, AT&T,
and other high-technology companies were investing huge sums of
money and dedicating their impressive brain trusts to advancing
and reducing the cost of general-purpose information
technology (IT). This laid the foundation for the
Internet-enabled Information Age. These technologies included
open, standards-based operating systems (such as UNIX) and
graphical user interfaces, Ethernet networking, TCP/IP
communication protocols, object-based programming approaches,
and many others that we take for granted today. Unlike the DCS,
these technologies were based on open standards and many were
available commercially, almost literally right off the
shelf.

Initially, at least, automation suppliers either ignored, or
tried their best to ignore, these goings-on, convincing
themselves that their industrial customers would never accept
using commercial off-the-shelf (COTS) technologies in their
plants. Foxboro, with the introduction of its I/A Series system
in 1987, was the first mainstream automation supplier to
incorporate UNIX, Ethernet and other commercial-type
technologies into a system designed to manage and control
mission-critical industrial processes. Foxboro also spent
millions of dollars developing the world's first real-time
object manager. Since there were still performance and
availability concerns about Ethernet at the time, Foxboro
developed a redundant/fault-tolerant scheme for its
Ethernet-based process control network, which the company
intentionally referred to as a "serial backplane"
rather than a "network," because company officials
were concerned that industrial users wouldn't be able to
get their heads around the idea of an Ethernet-based
system.

Unfortunately for Foxboro, serious software and
manufacturing issues with the I/A Series system, which took
several years to fully resolve, prevented the company from
capitalizing on this innovative technology, with more
conventional and field-proven systems, such as Honeywell's
TDC 2000/3000 and Yokogawa's CENTUM DCSs, continuing to
gain market share in HPI plants in North America and Japan,
respectively. In Europe, ABB began to make inroads
into the HPI with its MOD 300 DCS.

The development of the DCS over the past 30 years has
closely mirrored that of the overall process automation
business, moving from proprietary technologies and closed
systems to COTS components, industry-standard field networks,
and Microsoft Windows operating systems. Today, the DCS has
moved from a system-centric architecture to one that is more
focused on supporting collaborative business processes and
helping owner-operators achieve operational excellence in their
process plants.

The drive toward openness in the 1980s gained momentum
through the 1990s with the increased adoption of COTS
components and IT standards. Probably the biggest transition
undertaken during this time was the move from the UNIX
operating system to the Windows environment, particularly for
human-machine interface (HMI) and data analysis and
presentation applications.

The invasion of Microsoft at the desktop and server layers
resulted in the development of technologies such as OLE for
process control (OPC), which is now a de facto industry
connectivity standard. Internet-based technology also began to make its
mark in industrial automation and the DCS world.

The impact of COTS was most pronounced at the hardware
layer. Standard computer components from manufacturers such as
Intel, Motorola, IBM, Sun Microsystems and Cisco Systems made
it cost-prohibitive for DCS suppliers to continue making many
of their own servers, workstations and networking hardware
(although most DCS suppliers still assemble their own process
controllers and I/O modules, albeit using many COTS
components). COTS not only resulted in lower manufacturing
costs for the supplier, but also in steadily decreasing prices
for the end users, who were also becoming increasingly vocal
over what they perceived to be unduly high hardware costs. Some
suppliers that were previously stronger in the programmable
logic controller (PLC) business, such as Rockwell Automation and
Siemens, have been able to leverage their expertise in
manufacturing control hardware to enter the DCS marketplace
with competitive offerings.

Most process automation systems on the market today rely
heavily on international standards, a common control and
configuration environment, a common hardware platform, and a
common information infrastructure designed to
accommodate a wide range of applications from multiple
suppliers. Although the DCS of today has come a long way from
the almost totally proprietary world of the 1980s, there is
still considerable progress to be made in the quest for full
standards adoption.

In ARCs latest global DCS market outlook study,
published in 2011, Honeywell retained its dominant position in
the global refining market, followed by
Yokogawa, Invensys, Emerson Process Management, ABB and
Siemens. In chemicals, Yokogawa has the leading position
globally, followed by Siemens, Honeywell, ABB, Emerson Process
Management, Invensys and Yamatake.

What goes around, comes around

While the HPI may be very conservative in some respects,
traditionally, this industry has been quick to accept new
technologies that offered clear potential to help companies
operate and maintain their complex assets better and more
efficiently. Process computers, direct digital control systems,
DCSs, APC, simulation and plant-wide historians are
just a few examples of the new technologies adopted by the HPI.
This has also been the case for fieldbus, the technology that
provides a digital link between intelligent,
microprocessor-based field instrumentation and the host
DCS.

Unlike the 4-mA to 20-mA analog electronic standard for
communications between field instruments and the control system
(and the 3-psi to 15-psi pneumatic standard that preceded it),
which required point-to-point wiring (or pneumatic hoses) for
each device, digital fieldbus technology enables multiple field
devices to communicate with the host system on the same wire.
While fieldbus segment sizing, topology, and hazardous
area-related decisions can add engineering complexity and cost
compared to point-to-point analog field wiring, the wiring
savings alone can reduce fieldbus installation costs to a
significant degree.

More importantly, fieldbus provides bidirectional digital
communications between the field devices and the host system.
Thus, in addition to communicating one or more process variable
measurements for monitoring and/or control, the field devices
can communicate secondary measurements and important device
status and asset management-related information to the host
system. This eliminates the tedious and time-consuming effort
previously required to "ring out" and verify
potentially thousands of different field terminations during
system commissioning; reduces ongoing maintenance costs and effort by
eliminating unnecessary trips to the field; and, in
conjunction with appropriate software, enables HPI plants
to implement highly effective condition-based plant asset
management strategies to help improve equipment availability,
while minimizing unnecessary maintenance. (As many
owner-operators have learned, too much work during planned
turnarounds is done based on habit, rather than on actual need;
while needed work sometimes goes unattended.)

That is the good news. As automation users
in HPI plants know all too well, due to the snail-like pace of
standardization efforts and significant politicking among
national standards bodies and automation suppliers, it's
taken far too long for fieldbus standards and technology to
arrive at their current state. Initially, DCS suppliers offered
proprietary digital communications that provided many of the
benefits of todays standard fieldbus technology, albeit in a
single-vendor environment. In other words, each
suppliers smart transmitters could only communicate
digitally with its own control system. Not an optimum
situation, particularly for end users.

Pressed by their customers, national standards bodies in
Europe and North America, working in conjunction with
automation suppliers, initiated a number of different,
competing fieldbus standardization efforts. In 1999, in a
creative, if not terribly helpful, attempt to break the
stalemate and end these "fieldbus wars," the
International Electrotechnical Commission (IEC) came up with a
compromise standard, IEC 61158, that recognized
eight different fieldbus approaches (including FOUNDATION
fieldbus, ControlNet/EtherNet/IP, PROFIBUS, WorldFIP and
INTERBUS), grouping these into different "types," but
creating common physical, data link and application
layers.

While waiting for these "fieldbus wars" to sort
themselves out, many users simply avoided the issue altogether
by installing transmitters with HART communications capability.
While HART-enabled transmitters are rarely deployed in multi-drop
mode and still commonly communicate the primary process
variable in analog, 4 mA to 20 mA (rather than the available
digital) format, they do offer the potential to access
transmitter status diagnostics and to interact with the field
device remotely from the control room, maintenance shop or
plant reliability center. In the
past, many users found this particularly useful when
commissioning instruments; far fewer used this capability
for ongoing asset management. However, this situation appears
to be changing as automation and other suppliers have
introduced plant asset management toolkits that fully exploit
the potential of HART.
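The 4 mA to 20 mA signal mentioned above carries the primary variable by simple linear scaling between the transmitter's configured lower and upper range values. A minimal sketch of that scaling (the range values and the out-of-range fault band shown here are illustrative assumptions, not taken from the article):

```python
def ma_to_engineering(current_ma, lrv, urv):
    """Convert a 4-20 mA loop current to an engineering-unit value.

    lrv/urv are the transmitter's configured lower/upper range
    values (e.g. 0 and 250 degC): 4 mA maps to lrv, 20 mA to urv.
    """
    # Reject currents outside a typical valid band (illustrative
    # limits; transmitters signal faults by driving outside it).
    if not 3.8 <= current_ma <= 20.5:
        raise ValueError(f"loop current {current_ma} mA outside valid range")
    return lrv + (current_ma - 4.0) * (urv - lrv) / 16.0
```

For a 0-250 degC range, a 12 mA reading (mid-scale) works out to 125 degC.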

Ultimately, in Europe, PROFIBUS emerged as an
industry fieldbus standard for both process and discrete
applications, gaining wide acceptance among suppliers and users
alike on that continent. In North America, FOUNDATION fieldbus
emerged as the digital fieldbus for process plants.

Not surprisingly, Shell and other leading HPI companies were
among the first to experiment with and implement FOUNDATION
fieldbus. They did so cautiously at first, with the initial
implementations in pilot plants and for other smaller-scale
projects, and ultimately, in virtually all their new plants
and/or major expansion projects.

FOUNDATION fieldbus-enabled control valves and transmitters
include standard process control function blocks, so that, once
again, for simple process control loops at least, both
measurement and control can be performed in the
field, just like in the early days of process control!
What's more, as with the early single-loop controllers,
this can help limit the negative impact of instrument or other
faults compared to DCS controllers, which handle dozens, or
even hundreds, of control loops (albeit normally in redundant
configurations designed to enhance fault tolerance).

Since DCS process controllers are often physically located
at a significant distance from the process, fieldbus-enabled
control in the field can sometimes reduce the time latencies
involved when a measurement signal has to be transmitted from
the field devices to a DCS controller and the control signal
transmitted from the controller back to the final control
device in the field. While a study commissioned by the Fieldbus
Foundation revealed that this can help improve performance,
particularly in fast-acting control loops, owner-operators have
been slow to accept fieldbus-enabled control in the field to
date.

ARC believes that this is probably because users are already
very comfortable with, and generally satisfied by, their DCS
controllers, and because control in the field doesn't add
value for the types of interactive control loops found in many
HPI processes.

Whats ahead?

While the state-of-the-art in process automation systems has
only advanced incrementally in recent years, ARC believes that
we'll soon see some major advancements emerge in
industrial automation, and, if the past is any
lesson, owner-operators in the HPI will likely be among
the first to implement many of these advancements. Some of
these advancements include:

• Even smarter field devices that are capable of conveying
their health in absolute terms

• Increased use of cloud computing to serve data to
authorized users, anywhere, at any time

• Increased use of wireless field devices, including
wireless measurements for process monitoring and
control

• Increased use of tablets, smartphones and other mobile
devices by plant operators, maintenance technicians, engineers
and others

• Increased use of advanced analytics for real-time
decision support, fueled by the "big data"
currently buried in many plant historians.

ARC analysts will aim to keep HP readers informed
about these and other trends in our monthly Integration Strategies
columns. HP

The authors

Paul Miller is a senior editor/analyst
at ARC Advisory Group and has 25 years of experience in
the industrial automation industry. He has published
numerous articles in industry trade publications. Mr.
Miller follows both the terminal automation and
water/wastewater sectors for ARC.

Dick Hill is vice president of ARC
Advisory Group, Dedham, Massachusetts, responsible for
developing the strategic direction for ARC products,
services and geographical expansion. He is responsible
for covering advanced software business worldwide. In
addition, he provides leadership for support of
ARCs automation team and clients. Mr. Hill has
over 30 years of experience in manufacturing and
automation. He has broad international experience with
The Foxboro Co. Prior to Foxboro, Mr. Hill was a senior
process control engineer with BP Oil, developing and
implementing advanced process control applications.
Prior to joining ARC, he was the US general manager of
Walsh Automation, a major engineering consulting firm
and supplier of CIM solutions to the pulp and paper, petrochemicals,
pharmaceutical, and other process and manufacturing
industries. He is a graduate of the Lowell
Technological Institute with a BS degree in chemical
engineering.

Dave Woll is vice president of the
consulting services at ARC Advisory Group where he
provides high-level consulting services for ARC
clients. He has been with ARC since 1997 and has been
defining and applying process automation for over 35
years. This includes the marketing and application of
control, safety, SCADA, measurement systems and
business integration. Prior to ARC,
Mr. Woll held numerous positions at both The Foxboro
Co. and Bristol Babcock. He holds a BS degree in
electrical engineering from the University of
Connecticut.

While certainly functional, the only way those early pneumatic
controllers could be tuned to provide the desired control
response was through tedious and wasteful trial and error,
which didn't always work either. During WWII, two
engineers at Taylor Instruments, John Ziegler and Nathaniel
Nichols, spent a lot of time tinkering with PID
simulations on the company's Fulscope 100 controller until
they came up with a satisfactory solution. In 1942, they
published their now-famous paper, "Optimum settings for
automatic controllers," which established clear rules for
tuning PID controllers in refineries, petrochemical facilities and other process plants.
This came in very handy during the ensuing war years, when
now-well-tuned controllers helped chemical plants produce
synthetic rubber for tires and other wartime necessities, and
helped refineries produce massive quantities of gasoline and
diesel to fuel jeeps, trucks, tanks and heavy equipment, as
well as newly developed, high-octane aviation fuel for fighter
planes, strategic bombers and other aircraft.
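The classic closed-loop rules from that paper set the controller gains from two measured quantities: the ultimate gain (the proportional gain at which the loop oscillates steadily) and the period of that oscillation. A minimal sketch of the standard PID version of the rules:

```python
def ziegler_nichols_pid(ku, pu):
    """Classic Ziegler-Nichols closed-loop tuning rules for an
    ideal PID controller.

    ku: ultimate gain (proportional gain producing sustained oscillation)
    pu: ultimate period of that oscillation, in seconds
    Returns (Kp, Ti, Td): proportional gain, integral time, derivative time.
    """
    kp = 0.6 * ku     # proportional gain
    ti = pu / 2.0     # integral (reset) time
    td = pu / 8.0     # derivative time
    return kp, ti, td
```

For example, a loop that oscillates steadily at a gain of 4.0 with a 10-second period would get Kp = 2.4, Ti = 5 s and Td = 1.25 s. In practice these settings are a starting point that is refined for the specific loop.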
HP
