TWO
THE SYSTEMIC REVOLUTION: A NEW CULTURE

1. HISTORY OF A GLOBAL APPROACH

The fundamental concepts that recur most often in the biological, ecological,
and economic models of the preceding chapter can easily be grouped into
several major categories: energy and its use; flows, cycles, and stocks;
communication networks; catalysts and transforming agents; the readjustment
of equilibriums; stability, growth, and evolution. And, above all, the
concept of the system-living system, economic system, ecosystem-that binds
together all the others.

Each of these concepts applies to the cell as it does to the economy,
to an industrial company as it does to ecology. Beyond the vocabulary,
the analogies, and the metaphors there appears to exist a common approach
that makes it possible to understand better and describe better the organized
complexity.

The Systemic Approach

This unifying approach does indeed exist. It was born in the course
of the last thirty years from the cross-fertilization of several disciplines-
biology, information theory, cybernetics, and systems theory. The concept
itself is not new; what is new is the integration of the disciplines that
has come about around it. This transdisciplinary approach is called the systemic
approach, and this is the approach that I present here in the concept
of the macroscope. It is not to be considered a "science," a
"theory," or a "discipline," but a new methodology
that makes possible the collection and organization of accumulated knowledge
in order to increase the efficiency of our actions.

The systemic approach, as opposed to the analytical approach, includes
the totality of the elements in the system under study, as well as their
interaction and interdependence.

The systemic approach rests on the conception of system. While often
vague and ambiguous, this conception is nevertheless being used today in
an increased number of disciplines because of its ability to unify and
integrate.

According to the most widely used definition, "a system is a set
of interacting elements that form an integrated whole" (see notes).
A city, a cell, and a body, then, are systems. And so
are an automobile, a computer, and a washing machine! Such a definition
can be too general. Yet no definition of the word system
can be entirely satisfying; it is the conception of system that
is fertile-if one measures its extent and its limits.

The limits are well known. Applied too easily, the systems concept is
often used wildly in the most diverse areas: education, management data
processing, politics. For numerous specialists it is only an empty notion:
trying to say everything, it evokes nothing in the end.

Yet its reach cannot be held to the precision of definitions; the concept
of system is not readily confined. It reveals and enriches itself only
in the indirect illumination of many clusters of analogical, model-based,
and metaphoric expression. The concept of system is the crossroads of the
metaphors; ideas from all the disciplines travel there. Reaching beyond
single analogies, this circulation makes possible the discovery of what
is common among the most varied systems. It is no longer a matter of reducing
one system to another, better-known one (economics to biology for example);
nor does it mean transposing knowledge from a lower level of complexity
to another level. It is a question of identifying invariants-that is,
the general, structural, and functional principles-and being able
to apply them to one system as well as another. With these principles it
becomes possible to organize knowledge in models that are easily transferred
and then to use some of these models in thought and action. Thus the concept
of system appears in two complementary aspects: it enables the organization
of knowledge and it renders action more efficient.

In concluding this introduction to the concept of system, we need to
locate the systemic approach with respect to other approaches with which
it is often confused.

The systemic approach embraces and goes beyond the cybernetics
approach (N. Wiener, 1948), whose main objective is the study of control
in living organisms and machines (see notes).

It must be distinguished from General Systems Theory (L. von
Bertalanffy, 1954), whose purpose is to describe in mathematical language
the totality of systems found in nature.

It turns away from systems analysis, a method that represents
only one tool of the systemic approach. Taken alone, it leads to the reduction
of a system to its components and its elementary interactions.

The systemic approach has nothing to do with a systematic approach
that confronts a problem or sets up a series of actions in sequential
manner, in a detailed way, forgetting no element and leaving nothing
to chance.

Perhaps one of the best ways of seeing the strength and the impact of
the systemic approach is to follow its birth and development in the lives
of men and institutions.

The Search for New Tools

The process of thought is at once analytic and synthetic, detailed and
holistic. It rests on the reality of facts and the perfection of detail.
At the same time, it seeks factors of integration, catalytic elements for
invention and imagination. At the very moment that man discovered the simplest
elements of matter and life, he tried, with the help of the famous metaphors
of the "clock," the "machine," the "living organism,"
to understand better the interactions between these elements.

Despite the strengths of these analogical models, thought remained dispersed
in a maze of disciplines, each sealed off from the others by communication-tight
enclosures. The only way to master such large numbers, to understand and predict
the behavior of the multitudes made up of atoms, molecules, or individuals,
is to reduce them to statistics and to derive from them the laws of unorganized
complexity.

The theory of probability, the kinetic theory of gases, thermodynamics,
and population statistics all rely on unreal, "ghostly" phenomena,
on useful but ideal simplifications that are almost never found in nature.
Theirs is the universe of the homogeneous, the isotropic, the additive, and
the linear; it is the world of "perfect" gases, of "reversible"
reactions, of "perfect" competition.

In biology and in sociology, phenomena integrate duration and irreversibility.
Interactions between elements count as much as the elements themselves.
Thus we need new tools with which to approach organized complexity, interdependence,
and regulation.

The tools emerged in the United States in the 1940s from the cross-fertilization
of ideas that is common in the melting pot of the large universities.

In illustrating a new current of thought, it is often useful to follow
a thread. Our thread will be the Massachusetts Institute of Technology
(MIT). In three steps, each of about ten years, MIT was to go from the
birth of cybernetics to the most critical issue, the debate on limits to
growth. Each of these advances was marked by many travels back and forth-typical
of the systemic approach-between machine, man, and society. In the course
of this circulation of ideas there occurred transfers of method and terminology
that later fertilized unexplored territory.

In the forties the first step forward led from the machine to the living
organism, transferring from one to the other the ideas of feedback and
finality and opening the way for automation and computers (Fig. 39).

">

In the fifties it was the return from the living organism to the machine,
with the emergence of the important concepts of memory and pattern recognition,
of adaptive phenomena and learning, and new advances in bionics: artificial
intelligence and industrial robots.[1]
There was also a return from the machine to the living organism, which
accelerated progress in neurology, perception, and the mechanisms of vision
(Fig. 40).

">

In the sixties MIT saw the extension of cybernetics and system theory
to industry, society, and ecology (Fig. 41).

">

Three men can be regarded as the pioneers of these great breakthroughs:
the mathematician Norbert Wiener, who died in 1964, the neurophysiologist
Warren McCulloch, who died in 1969, and Jay Forrester, professor at the
Sloan School of Management at MIT.

There are of course other men, other research teams, other universities-in
the United States as well as in the rest of the world-that have contributed
to the advance of cybernetics and system theory. I will mention them whenever
their course of research blends with that of the MIT teams.

"Intelligent" Machines

Norbert Wiener had been teaching mathematics at MIT since 1919. Soon
after his arrival there he had become acquainted with the neurophysiologist
Arturo Rosenblueth, onetime collaborator of Walter B. Cannon (who gave
homeostasis its name) (see page 43) and then at Harvard Medical School.
Out of this new friendship would be
born, twenty years later, cybernetics. With Wiener's help Rosenblueth set
up small interdisciplinary teams to explore the no man's land between the
established sciences.

In 1940 Wiener worked with a young engineer, Julian H. Bigelow, to develop
automatic range finders for antiaircraft guns. Such servomechanisms are
able to predict the trajectory of an airplane by taking into account the
elements of past trajectories. During the course of their work Wiener and
Bigelow were struck by two astonishing facts: the seemingly "intelligent"
behavior of these machines and the "diseases" that could affect
them. Theirs appeared to be "intelligent" behavior because they
dealt with "experience" (the recording of past events) and predictions
of the future. There was also a strange defect in performance: if one tried
to reduce the friction, the system entered into a series of uncontrollable
oscillations.

Impressed by this disease of the machine, Wiener asked Rosenblueth whether
such behavior was found in man. The response was affirmative: in the event
of certain injuries to the cerebellum, the patient cannot lift a glass
of water to his mouth; the movements are amplified until the contents of
the glass spill on the ground. From this Wiener inferred that in order
to control a finalized action (an action with a purpose) the circulation
of information needed for control must form "a closed loop allowing
the evaluation of the effects of one's actions and the adaptation of future
conduct based on past performances." This is typical of the guidance
system of the antiaircraft gun, and it is equally characteristic of
the nervous system when it orders the muscles to make a movement
whose effects are then detected by the senses and fed back to the brain
(Fig. 42).

Thus Wiener and Bigelow discovered the closed loop of information necessary
to correct any action-the negative feedback loop-and they generalized this
discovery in terms of the human organism.

During this period the multidisciplinary teams of Rosenblueth were being
formed and organized. Their purpose was to approach the study of living
organisms from the viewpoint of a servomechanisms engineer and, conversely,
to consider servomechanisms with the experience of the physiologist. An
early seminar at the Institute for Advanced Study at Princeton in 1942
brought together mathematicians, physiologists, and mechanical and electrical
engineers. In light of its success, a series of ten seminars was arranged
by the Josiah Macy Foundation. One man working with Rosenblueth in getting
these seminars under way was the neurophysiologist Warren McCulloch, who
was to play a considerable role in the new field of cybernetics. In 1948
two basic publications marked an epoch already fertile with new ideas:
Norbert Wiener's Cybernetics, or Control and Communication in the Animal
and the Machine; and The Mathematical Theory of Communication
by Claude Shannon and Warren Weaver (see notes). The latter work founded
information theory.

The ideas of Wiener, Bigelow, and Rosenblueth spread like wildfire.
Other groups were formed in the United States and around the
world, notably the Society for General Systems Research, whose publications
deal with disciplines far removed from engineering, such as sociology,
political science, and psychiatry.

The seminars of the Josiah Macy Foundation continued, opening to new
disciplines: anthropology with Margaret Mead, economics with Oskar Morgenstern.
Mead urged Wiener to extend his ideas to society as a whole. Above all,
the period was marked by the profound influence of Warren McCulloch, director
of the Neuropsychiatric Institute at the University of Illinois.

At the conclusion of the work of his group on the organization of the
cortex of the brain, and especially after his discussions with Walter Pitts,
a brilliant, twenty-two-year-old mathematician, McCulloch understood that
a beginning of the comprehension of cerebral mechanisms (and their simulation
by machines) could come about only through the cooperation of many disciplines.
McCulloch himself moved from neurophysiology to mathematics, from mathematics
to engineering.

Walter Pitts became one of Wiener's disciples and contributed to the
exchange of ideas between Wiener and McCulloch; it was he who succeeded
in convincing McCulloch to move to MIT in 1952 with his entire team
of physiologists.

From Cybernetics to System Dynamics

In this famous melting pot, ideas boiled. From one research group to
another the vocabularies of engineering and physiology were used interchangeably.
Little by little the basics of a common language of cybernetics were created:
learning, regulation, adaptation, self-organization, perception, memory.
Influenced by the ideas of Bigelow, McCulloch developed an artificial retina
in collaboration with Louis Sutro of the laboratory of instrumentation
at MIT. The theoretical basis was provided by his research on the eye of
the frog, performed in 1959 in collaboration with Lettvin, Maturana, and
Pitts. The need to make machines imitate certain functions typical of living
organisms contributed to the speeding up of progress in the understanding
of cerebral mechanisms. This was the beginning of bionics and the research
on artificial intelligence and robots.

Paralleling the work of the teams of Wiener and McCulloch at MIT, another
group sought to apply cybernetics on a wider scale. This was the Society
for General Systems Research, created in 1954 and led by the biologist
Ludwig von Bertalanffy. Many researchers were to join him: the mathematician
A. Rapoport, the biologist W. Ross Ashby, the biophysicist N. Rashevsky,
the economist K. Boulding. In 1954 the General Systems Yearbooks
began to appear; their influence was to be profound on all those who sought
to expand the cybernetic approach to social systems and the industrial
firm in particular.

During the fifties a tool was developed and perfected that would permit
organized complexity to be approached from a totally new angle-the computer.
The first ones were ENIAC (1946) and EDVAC or EDSAC (1947). One of the
fastest was Whirlwind II, constructed at MIT in 1951. It used-for the first
time-a superfast magnetic memory invented by a young electronics engineer
from the servomechanisms laboratory, Jay W. Forrester (see notes).[2]

As head of the Lincoln Laboratory, Forrester was assigned by the Air
Force in 1952 to coordinate the implementation of an alert and defense
system, the SAGE system, using radar and computers for the first time.[3]
Its mission was to detect and prevent possible attack on American territory
by enemy rockets. Forrester realized the importance of the systemic approach
in the conception and control of complex organizations involving men and
machines in "real time": the machines had to be capable of making
vital decisions as the information arrived.

In 1961, having become a professor at the Sloan School of Management
at MIT, Forrester created Industrial Dynamics. His object was to regard
all industries as cybernetic systems in order to simulate and to try to
predict their behavior.

In 1964, confronted with the problems of the growth and decay of cities,
he extended the industrial dynamics concept to urban systems (Urban Dynamics).
Finally, in 1971, he generalized his earlier works by creating a new discipline,
system dynamics, and published World Dynamics. This book was the
basis of the work of Dennis H. Meadows and his team on the limits to growth.
Financed by the Club of Rome, these works were to have worldwide impact
under the name MIT Report.

Figure 43 brings together the researchers and teams mentioned in the
preceding pages and recalls the main lines of thought opened up by their
work.

2. WHAT IS A SYSTEM?

The systemic approach depends on cybernetics and system theory. Perhaps
it will be useful here to recall a few definitions. Cybernetics is the
discipline that studies communication and control in living beings and
in the machines built by man. A more philosophical definition, suggested
by Louis Couffignal in 1958, considers cybernetics as "the art
of assuring efficiency of action" (see notes). The word cybernetics
was reinvented by Norbert
Wiener in 1948 from the Greek kubernetes, pilot, or rudder.[4]

One of the very first cybernetic mechanisms, invented by James Watt and
Matthew Boulton in 1788 to control the speed of the steam engine, was
called a governor, or a ball regulator. Cybernetics has in fact
the same root as government: the art of managing and directing highly complex
systems.

There are definitions of the word system other than that given
at the beginning of this chapter. This is the most complete: "a system
is a set of elements in dynamic interaction, organized for a goal."

The introduction of finality (the goal of the system) in this definition
may be surprising. We understand that the purpose of a machine has been
defined and specified by man; but how does one speak of the purpose of
a system like the cell? There is nothing mysterious about the "goal"
of the cell. It suggests no scheme; it declares itself a posteriori:
to maintain its structure and replicate itself. The same applies to the
ecosystem. Its purpose is to maintain its equilibrium and permit the development
of life. No one has set the level of the concentration of oxygen in the
air, the average temperature of the earth, the composition of the oceans.
They are maintained, however, within very strict limits.

The preceding definition is distinct from that of a certain structuralist
tendency, for which a system is a closed structure. Such a structure cannot
evolve but passes through phases of collapse due to an internal disequilibrium.

In fact such definitions, as we said, are too general to be truly useful.
They do not allow clarification of such ambiguities of expression as "a
political system," "a computer system," and "a system
of transportation." On the other hand, it seems to be much more profitable
to enrich the concept of systems by describing in the most general way
the principal characteristics and properties of systems, no matter what
level of complexity they may belong to.[5]

Open Systems and Complexity

Each of the Russian dolls described in the first chapter is an open
system of high complexity. These are important concepts that we must examine.

An open system is in permanent relation with its environment,
or, in general terms, with its ecosystem. It exchanges energy, matter,
and information used in the maintenance of its organization to counter
the ravages of time. It dumps into the environment entropy, or "used"
energy. By virtue of the flow of energy through the system-and despite
the accumulation of entropy in the environment-the entropy in an open
system is maintained at a relatively low level. This is another way
of saying that the organization of the system is maintained. Open systems
can decrease entropy locally and can even evolve toward states of higher
complexity.

An open system, then, is a sort of reservoir that fills and empties
at the same speed: the water is maintained at the same level as long as
the volumes of water entering and leaving remain the same (Fig. 44).
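The reservoir image can be sketched in a few lines of code (the initial level and the flow rates here are invented for illustration):

```python
# Open system as a reservoir: the level (a state variable) stays constant
# as long as inflow and outflow are equal, even though matter keeps
# flowing through the system.
def simulate_level(level, inflow, outflow, steps):
    for _ in range(steps):
        level += inflow - outflow   # net change per unit of time
    return level

# Equal flows: a steady level despite constant throughput.
print(simulate_level(100.0, 5.0, 5.0, steps=10))   # 100.0

# Inflow exceeds outflow: the reservoir fills.
print(simulate_level(100.0, 6.0, 5.0, steps=10))   # 110.0
```

The point of the sketch is that the constancy of the level says nothing about the intensity of the exchange: the open system is steady precisely because it is traversed by flows.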

To emphasize the generality and importance of the concept of the open
system, I have used the same kind of basic diagram for the industrial firm,
the city, the living organism, and the cell. One must keep in mind that
open system and ecosystem (or environment) are in constant interaction,
each one modifying the other and being modified in return (Fig.
45).

A closed system exchanges neither energy nor matter nor information
with its environment; it is totally cut off from the outside world. The
system uses its own internal reserve of potential energy. As its reactions
take place, entropy advances irreversibly. When the thermodynamic equilibrium
is reached, entropy is maximized: the system can no longer produce work.
Classical thermodynamics considers only closed systems, but a closed system
is an abstraction of physicists, a simplification that has made possible
the fundamental laws of physical chemistry (Fig. 46).

How to define complexity? Or, to avoid definitions, how to illustrate
and enrich the significance of the concept? Two factors are important:
the variety of elements and the interaction between elements.

A gas, a simple system, is made up of similar elements (molecules of
oxygen, for example) that are unorganized and display weak interactions.
On the other hand, a cell-a complex system-includes a large variety of
organized elements in tight interaction with one another. One could illustrate
the concept of complexity with several points.

A complex system is made up of a large variety of components
or elements that possess specialized functions.

These elements are organized in internal hierarchical levels
(in the human body, for example, cells, organs, and systems of organs).

The different levels and individual elements are linked by a great
variety of bonds. Consequently there is a high concentration of
interconnections.[6]

The interactions between the elements of a complex system are of a
particular type; they are nonlinear interactions.

The effects of simple linear interactions can be described by mathematical
relationships in which the variables are increased or diminished by a constant
quantity (as in the case of a car moving at the same average speed on a
highway) (Fig. 47).

However, in the case of nonlinear interactions the variables are multiplied
or divided by coefficients which are themselves functions of other variables.
This is the case of exponential growth (the quantity plotted on the vertical
axis doubles by unit of time) or of an S curve (rapid growth followed
by stabilization) (Fig. 48).
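These two nonlinear patterns can be sketched with a short simulation; the growth rate and the capacity are arbitrary values chosen for illustration, not figures from the text:

```python
# Exponential growth: the variable is multiplied by a constant factor
# at each step.
def exponential(x0, rate, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] * (1 + rate))   # doubles each step when rate = 1.0
    return xs

# Logistic (S-curve) growth: the growth term shrinks as the level nears
# a ceiling, so rapid growth is followed by stabilization.
def logistic(x0, rate, capacity, steps):
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + rate * x * (1 - x / capacity))
    return xs

print(exponential(1.0, 1.0, 5))        # doubles: 1, 2, 4, 8, 16, 32
print(logistic(1.0, 1.0, 100.0, 20)[-1])  # approaches the capacity of 100
```

In the exponential case the coefficient is constant; in the logistic case it is itself a function of the variable, which is exactly what makes the interaction nonlinear.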

Another example of a nonlinear relationship is the response of enzymes
to different concentrations of substrate (molecules that they transform).
In some cases, in the presence of inhibitors, the speed of transformation
is slow. In others, in the presence of activators, the reaction is rapid
up to the saturation of the active sites. In Figure 49 below this situation
is expressed in curves that show the number of molecules transformed (1)
in the presence of an inhibitor, (2) in the presence of an activator, and
(3) according to the relative concentration of inhibitors and activators.
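The saturation behavior described here is commonly modeled by the Michaelis-Menten rate law; the sketch below uses that standard form with invented constants (the values of v_max and k_m are assumptions, not data from the figure):

```python
# Michaelis-Menten-style saturation: the reaction speed rises with the
# substrate concentration but levels off once the enzyme's active sites
# are saturated.
def reaction_rate(substrate, v_max=10.0, k_m=2.0):
    return v_max * substrate / (k_m + substrate)

for s in (0.5, 2.0, 20.0, 200.0):
    print(s, round(reaction_rate(s), 2))

# The rate approaches v_max but never exceeds it. Loosely, an inhibitor
# can be pictured as raising k_m (a slower approach to saturation) and
# an activator as lowering it (a faster approach).
```

This is another nonlinear relationship: doubling the substrate concentration does not double the rate except far below saturation.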

Linked to the concept of complexity are the variety of elements and
interactions, the nonlinear character of the interactions, and the organized
totality. From these follows the very special behavior of complex systems,
which is difficult to predict and is characterized by the emergence of
new properties and great resistance to change.

Structural and Functional Aspects of Systems

Two groups of characteristic features make it possible to describe in
a very general way the systems that can be observed in nature. The first
group relates to their structural aspect, the second to their functional
aspect.

The structural aspect concerns the organization in space of the components
or elements of a system; this is spatial organization. The functional aspect
concerns process, or the phenomena dependent on time (exchange, transfer,
flow, growth, evolution); this is temporal organization.

It is easy to connect the structural and functional elements by using
a simple graphic illustration, a "symbolic Meccano," which makes
it possible to construct models of different systems and to understand
better the role of interactions.[7]

The principal structural characteristics of every system are:

A limit that describes the boundaries of the system and separates
it from the outside world. It is the membrane of a cell, the skin of a
body, the walls of a city, the borders of a country.

Elements or components that can be counted and assembled in categories,
families, or populations. They are the molecules of a cell, the inhabitants
of a city, the personnel of an industrial firm and its machines, institutions,
money, goods.

Reservoirs in which the elements can be gathered and in which
energy, information, and materials are stored. In the first chapter numerous
examples were given: reservoirs in the atmosphere and in the sediments,
reservoirs of hydrocarbons; stores of capital and technology; memory banks,
libraries, films, tape recordings; the fats of the body, glycogen of the
liver. The symbolic representation of a reservoir is a simple rectangle.

A communication network that permits the exchange of energy,
matter, and information among the elements of the system and between different
reservoirs. This network can assume the most diverse forms: pipes, wires,
cables, nerves, veins, arteries, roads, canals, pipelines, electric transmission
lines. The network is represented in diagrams by lines and dotted lines
that link the reservoirs or other variables of the model.

The principal functional characteristics of every system are:

Flows of energy, information, or elements that circulate between
reservoirs. They are always expressed in quantities over periods of time.
There are flows of money (salaries in dollars per month), finished products
(number of cars coming off the assembly line by the day or the month),
people (number of travelers per hour), information (so many bits of information
per microsecond in a computer). Flows of energy and materials raise or
lower the levels in the reservoirs. They circulate through the networks
of communication and are represented symbolically by a heavy black arrow
(flows of information are indicated by a dotted-line arrow). Information
serves as a basis for making the decisions that move the flows which maintain
reserves or raise and lower the levels of the reservoirs.

Valves that control the volume of various flows. Each valve is
a center of decision that receives information and transforms it into action:
a manager of industry, an institution, a transforming agent, a catalyst
such as an enzyme. Valves can increase or diminish the intensity of flows.
Their symbolic representation is that of a valve or a faucet superimposed
on a line of flow (Fig. 50).

Delays that result from variations in the speed of circulation
of the flows, in the time of storage in the reservoirs, or in the "friction"
between elements of the system. Delays have an important role in the phenomena
of amplification or inhibition that are typical of the behavior of complex
systems.

Feedback loops or information loops that play a decisive part
in the behavior of a system through integrating the effects of reservoirs,
delays, valves, and flows. Numerous examples of feedback were given in
the first chapter: population control, price equilibriums, the level of
calcium in the plasma (see pp. 10, 25, 43). There are two kinds of feedback
loops. Positive feedback loops contain the dynamics for change in a system
(growth and evolution, for example); negative feedback loops represent
control and stability, the reestablishment of equilibriums and self-maintenance.

The model in Figure 51 combines all the structural and functional symbols
described above. And here it is possible to illustrate the difference between
a positive and a negative feedback loop. If the information received at
the level of the reservoir indicates that the level is rising, the decision
to open the valves wider will allow overflow; if the level is falling,
the decision to reduce the outflow will lead to a rapid drying up of the
reservoir. This is the work of a positive feedback loop, working toward
infinity or toward zero. In contrast, the decision to diminish the flow
when the level increases (and the inverse) maintains the level at a constant
depth. This is the work of a negative feedback loop.
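The contrast between the two loops can be sketched numerically; the valve rules and the 0.5 gain are invented for illustration:

```python
# A reservoir whose valve is driven by information about its own level.
def run(level, target, rule, steps=20):
    for _ in range(steps):
        level += rule(level, target)    # the valve adjusts the net flow
    return level

# Positive loop: a deviation feeds further deviation
# (a running away toward infinity or toward zero).
positive = lambda level, target: 0.5 * (level - target)

# Negative loop: a deviation triggers a correction in the opposite
# direction (the level returns toward the target).
negative = lambda level, target: -0.5 * (level - target)

print(run(110.0, 100.0, positive))  # deviation amplified: level runs away
print(run(110.0, 100.0, negative))  # deviation damped: level settles near 100
```

The only difference between the two rules is the sign of the correction, yet one destroys the reservoir's equilibrium and the other maintains it.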

3. SYSTEM DYNAMICS: THE INTERNAL CAUSES

The basic functioning of systems depends on the interplay of feedback
loops, flows, and reservoirs. They are three of the most general concepts
of the systemic approach, and they are the keys to the juxtaposition of
very different areas from biology to management, from engineering to ecology.

Positive and Negative Feedback

In a system where a transformation occurs, there are inputs and
outputs. The inputs are the result of the environment's influence
on the system, and the outputs are the influence of the system on the environment.
Input and output are separated by a duration of time, as in before and
after, or past and present (Fig. 52).

In every feedback loop, as the name suggests, information about the
result of a transformation or an action is sent back to the input
of the system in the form of input data. If these new data facilitate and
accelerate the transformation in the same direction as the preceding results,
they are positive feedback-their effects are cumulative. If the new data
produce a result in the opposite direction to previous results, they are
negative feedback-their effects stabilize the system. In the first case
there is exponential growth or decline; in the second there is maintenance
of the equilibrium (Fig. 53).

Positive feedback leads to divergent behavior: indefinite expansion
or explosion (a running away toward infinity) or total blocking of activities
(a running away toward zero). Each plus involves another plus; there is
a snowball effect. The examples are numerous: chain reaction, population
explosion, industrial expansion, capital invested at compound interest,
inflation, proliferation of cancer cells. However, when minus leads to
another minus, events come to a standstill. Typical examples are bankruptcy
and economic depression (Fig. 54).

In either case a positive feedback loop left to itself can lead only
to the destruction of the system, through explosion or through the blocking
of all its functions. The wild behavior of positive loops-a veritable death
wish-must be controlled by negative loops. This control is essential for
a system to maintain itself in the course of time.

Negative feedback leads to adaptive, or goal-seeking behavior: sustaining
the same level, temperature, concentration, speed, direction. In some cases
the goal is self-determined and is preserved in the face of evolution:
the system has produced its own purpose (to maintain, for example, the
composition of the air or the oceans in the ecosystem or the concentration
of glucose in the blood). In other cases man has determined the goals of
the machines (automats and servomechanisms).

In a negative loop every variation toward a plus triggers a correction
toward the minus, and vice versa. There is tight control; the system oscillates
around an ideal equilibrium that it never attains. A thermostat or a
water tank equipped with a float is a simple example of regulation by
negative feedback (Fig. 55).[8]
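Such a regulator can be sketched in a few lines; the set point and the heating and cooling rates are arbitrary:

```python
# Negative-feedback thermostat: every deviation from the set point
# triggers a correction in the opposite direction, so the temperature
# oscillates around the goal without ever resting exactly on it.
def thermostat(temp, set_point=20.0, steps=50):
    for _ in range(steps):
        if temp < set_point:
            temp += 0.7   # heater on: push the temperature up
        else:
            temp -= 0.4   # heater off: the room cools down
    return temp

print(thermostat(15.0))  # ends close to the 20-degree set point
print(thermostat(25.0))  # likewise, approaching from above
```

Note that the temperature never settles exactly at 20 degrees: it keeps crossing the set point, which is the "oscillation around an ideal equilibrium" described above.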

Flows and Reservoirs

The dynamic behavior of every system, regardless of its complexity,
depends in the last analysis on two kinds of variables: flow variables
and state or level variables (see notes). The first are symbolized by
the valves that control the flows, the
second (showing what is contained in the reservoirs) by rectangles. The
flow variables are expressed only in terms of two instants, or in relation
to a given period, and thus are basically functions of time. The state
(level) variables indicate the accumulation of a given quantity in the
course of time; they express the result of an integration. If time stops,
the level remains constant (static level) while the flows disappear-for
they are the result of actions, the activities of the system.

Hydraulic examples are the easiest to understand. The flow variable
is represented by the flow rate, that is, the average
quantity running off between two instants. The state variable is the quantity
of water accumulated in the reservoir at a given time. If you replace the
flow of water by a flow of people (number of births per year), the state
variable becomes the population at a given moment.
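The relation between the two kinds of variables can be made concrete in a short sketch (the flow rates are hypothetical):

```python
# A state (level) variable is the running integral of its flow variables.
def integrate(level, inflow, outflow, steps):
    history = [level]
    for _ in range(steps):
        level += inflow - outflow   # net flow fills or empties the reservoir
        history.append(level)
    return history

# Population reservoir: 100 people, 3 births and 1 death per period.
pop = integrate(100, inflow=3, outflow=1, steps=10)
# If time "stops" (steps=0), the level simply stays where it is,
# while the flows, being actions per unit of time, vanish.
```

Ten periods of a net inflow of 2 raise the level from 100 to 120: the level records the accumulated history of the flows.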

The difference between flow variables and state variables is illustrated
perfectly by the difference between the profit and loss statement and the
balance sheet of a firm. The profit and loss statement is concerned with
the period between two instants, say January 1 and December 31. It consists
of an aggregation of flow variables: salaries paid, total purchases, transportation,
interest costs, total sales. The balance sheet applies to one date only,
say December 31. It is an instant picture of the situation of the company
at that single moment in time. The balance sheet contains a variety of
state variables: on the assets side, real estate and property, inventory,
accounts receivable; on the liabilities side, capital, long-term debt,
accounts payable.

Three examples will serve to explain the relationships between flow
variables and state variables and will clarify several of the ways in which
they act at the different levels of a complex system.

Balancing one's budget. A bank account (reservoir) fills or empties
in accordance with deposits or withdrawals of money. The balance in the
account at a given date is a state variable. Wages and other income of
the holder of the account represent a flow variable expressed in a quantity
of money for a period of time; expenses correspond to the flow variable
of output. The valves that control these two flows are the decisions that
are made based on the state of the account (Fig. 56).

"To make ends meet" is to establish an equilibrium of the
flows: income (input) equal to expenses (output). The bank account is kept
at a stationary level. This is a case of dynamic equilibrium.[9]

When the input flow is greater than the output flow, money accumulates
in the account. The owner of the account is "saving." In saving
he increases his total income by the amount of interest his savings earn
(an example of a positive feedback loop).

When the output flow is greater than the input flow, debts are accumulated.
This situation can deteriorate further, for interest on debts increases
output (a positive feedback loop toward zero). If the situation is not
remedied, it can lead to the exhaustion of funds in a short time.
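Both spirals, saving and indebtedness, can be sketched together; the amounts and the interest rate are hypothetical:

```python
# Bank account: income and expenses are flow variables, the balance is
# a state variable, and interest is a positive feedback loop that
# amplifies whichever way the balance is already moving.
def account(balance, income, expenses, rate, years):
    for _ in range(years):
        balance += income - expenses   # net flow for the period
        balance += balance * rate      # interest on the current level
    return balance

saver = account(1000, income=120, expenses=100, rate=0.05, years=10)
debtor = account(1000, income=100, expenses=200, rate=0.05, years=10)
# The saver's balance snowballs upward; the debtor's melts away.
```

The same interest mechanism drives both runaways: plus leads to plus for the saver, minus leads to minus for the debtor.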

The maintenance of equilibrium requires tight control. Control can be
exercised more easily on the output flow valve (expenses) than on the input
flow valve (income). This control imposes a choice of new constraints:
the reduction or the better distribution of expenditures. In contrast,
to make one's income increase rapidly one has to have reserves (savings)
at his disposal-or benefit from a raise in salary.

Managing a company. In the short term the manager uses internal
indicators such as sales, inventory, orders placed, changes in production
margins, productivity, delivery delays, money in reserve. For longer periods
he consults his balance sheet, profit and loss statement, and such outside
indicators as the prime rate of interest, manpower, growth of the economy.
Using the differences between these indicators and the business forecasts,
the manager takes what corrective measures are necessary. Consider two
examples related to inventory and cash management.

An inventory is a reservoir filled by production and emptied by sales.
When the inventory is too high, the manager can influence the flow of sales
either by lowering prices or by reinforcing marketing. He can also control
the input flow in the short term by slowing down production (Fig. 57).

The reverse situation is that of strong demand. The inventory level
drops rapidly, and the manager then tries to increase production. If demand
remains strong, the company-its inventory low-will require longer delays
in delivery. Customers will not want to wait and will turn to a competitor.
Demand then decreases and the inventory level climbs. A negative feedback
loop helps the business leader-or works to his disadvantage if he has increased
production too much without having foreseen the change in the market. This
is why the manager must control flow and inventory while taking into account
delays and different response times.
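The effect of delays can be sketched as follows; the target level, adjustment gain, and delay are invented for illustration:

```python
# Inventory regulation with a production delay (hypothetical values).
def simulate_inventory(steps=40, target=100, demand=10, delay=3, gain=0.2):
    inventory = 70.0                    # start below the target level
    pipeline = [float(demand)] * delay  # output decided 'delay' periods ago
    history = []
    for _ in range(steps):
        inventory += pipeline.pop(0) - demand
        # Negative feedback: produce above demand while below target,
        # but the correction only arrives after the delay.
        pipeline.append(demand + gain * (target - inventory))
        history.append(inventory)
    return history

h = simulate_inventory()
# Because of the delay, the inventory overshoots the target
# before oscillating back toward it.
```

The overshoot is exactly the manager's predicament described above: production increased too much because the change in the market was seen too late.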

One of the most common cash problems for small businesses results from
the time lag between the booking of orders, billing, and the receipt of
payment. Regular expenses (payroll, purchases, rent) and the irregular
receipt of customers' payments together create cash fluctuations. These
are eased somewhat by the overdraft privilege that banks grant to some
companies. The overdraft exercises a regulatory role like that of inventories,
a full backlog of orders, or other reserves: it is the buffering effect
we have already encountered, notably in the case of the great reservoirs
of the ecological cycle.

Food and world population. Two major variables measure world
growth: industrial capital and population. The reservoir of industrial
capital (factories, machines, vehicles, equipment) is filled through investment
and emptied through depreciation, obsolescence, and wear and tear on machines
and equipment. The population reservoir is filled by births and emptied
by deaths (Fig. 58).

If the flow of investment is equal to the flow of depreciation, or if
births equal deaths, a state of dynamic equilibrium is achieved-a stationary
(not static) state called zero growth. What will happen then when
several flow and state variables interact?

Consider a simple model, the well-known Malthusian model described in
classic form. World resources of food grow at a constant rate (a linear,
arithmetic progression), while world population grows at a rate that is
itself a function of population (a nonlinear, geometric progression) (see notes) (Fig. 59).

The food reservoir fills at a constant rate, the population reservoir
at an accelerated rate. The control element is represented by the quantity
of food available to each individual (Fig. 60).

A decrease in the food quota per person leads to famine and eventually
an increase in mortality. The demographic curve stabilizes in an S curve,
typical of growth limited by an outside factor (Fig. 61).

Equations corresponding to various state and flow variables can be programmed
on a computer in order to verify the validity of certain hypotheses: what
would happen if the birth rate doubled? if it were reduced by half? if
food production doubled or tripled? The present example is of only limited
interest because it is such a rudimentary model; in the presence of several
hundred variables, however, simulation yields, as we shall see, valuable
results.
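Such a program might look like the following sketch; the coefficients are invented for illustration and are not taken from the model in the text:

```python
# Malthusian sketch: food grows arithmetically, population geometrically,
# and mortality rises when the food quota per person falls.
def malthus(years=200):
    food, pop = 1000.0, 100.0
    history = []
    for _ in range(years):
        food += 20.0                    # constant (arithmetic) increase
        quota = food / pop              # food available per person
        births = 0.03 * pop             # a rate proportional to population
        # Below a quota of 10 units per person, mortality climbs.
        deaths = 0.01 * pop * max(1.0, 10.0 / quota)
        pop += births - deaths
        history.append(pop)
    return history

pop_curve = malthus()
# Growth starts out geometric, then is braked by the food supply:
# the relative growth rate falls steadily toward that of food.
```

Changing the birth rate or the food increment and rerunning the loop answers exactly the "what would happen if" questions posed above.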

4. APPLICATIONS OF THE SYSTEMIC APPROACH

Certainly there has been a revolution in our way of thinking; what now
are the practical uses to which we can put it? Beyond the simple description
of the systems of nature it leads to new methods and rules of action-nothing
less, as you will see, than the instruction manual for the macroscope.

Analysis and Synthesis

The analytic and the systemic approaches are more complementary than
opposed, yet neither one is reducible to the other.

The analytic approach seeks to reduce a system to its most elementary
components in order to study them in detail and understand the types of
interaction that exist among them. By modifying one variable at a time, it tries to infer
general laws that will enable one to predict the properties of a system
under very different conditions. To make this prediction possible, the
laws of the additivity of elementary properties must be invoked. This is
the case in homogeneous systems, those composed of similar elements and
having weak interactions among them. Here the laws of statistics readily
apply, enabling one to understand the behavior of the multitude-of disorganized
complexity.

The laws of the additivity of elementary properties do not apply in
highly complex systems composed of a large diversity of elements linked
together by strong interactions. These systems must
be approached by new methods such as those which the systemic approach
groups together. The purpose of the new methods is to consider a system
in its totality, its complexity, and its own dynamics.
Through simulation one can "animate" a system and observe in
real time the effects of the different kinds of interactions among its
elements. The study of this behavior leads in time to the determination
of rules that can modify the system or design other systems.

The following table compares, one by one, the traits of the two approaches.

Analytic approach: isolates, then concentrates on the elements.
Systemic approach: unifies and concentrates on the interaction between elements.

Analytic approach: studies the nature of interaction.
Systemic approach: studies the effects of interactions.

Analytic approach: emphasizes the precision of details.
Systemic approach: emphasizes global perception.

Analytic approach: modifies one variable at a time.
Systemic approach: modifies groups of variables simultaneously.

Analytic approach: remains independent of duration; the phenomena considered are reversible.
Systemic approach: integrates duration and irreversibility.

Analytic approach: validates facts by means of experimental proof within the body of a theory.
Systemic approach: validates facts through comparison of the behavior of the model with reality.

Analytic approach: uses precise and detailed models that are less useful in actual operation (example: econometric models).
Systemic approach: uses models that are insufficiently rigorous to serve as bases of knowledge but are useful in decision and action (example: the models of the Club of Rome).

Analytic approach: is efficient when interactions are linear and weak.
Systemic approach: is efficient when interactions are nonlinear and strong.

Analytic approach: leads to discipline-oriented (juxtadisciplinary) education.
Systemic approach: leads to multidisciplinary education.

Analytic approach: leads to action programmed in detail.
Systemic approach: leads to action through objectives.

Analytic approach: possesses knowledge of details, poorly defined goals.
Systemic approach: possesses knowledge of goals, fuzzy details.

This table, while useful in its simplicity, is nevertheless a caricature
of reality. The presentation is excessively dualist; it confines thought
to an alternative from which it seems difficult to escape. Numerous other
points of comparison deserve to be mentioned. Yet without being exhaustive
the table has the advantage of effectively contrasting the two complementary
approaches, one of which-the analytic approach-has been favored disproportionately
in our educational system.

To the opposition of analytic and systemic we must add the opposition
of static vision and dynamic vision.

Our knowledge of nature and the major scientific laws rests on what
I shall call "classic thought," which has three main characteristics.

Its concepts have been shaped in the image of a "solid" (conservation
of form, preservation of volume, effects of force, spatial relations, hardness,
solidity).

Irreversible time, that of life's duration, of the nondetermined, of
chance events, is never taken into account. All that counts is physical
time and reversible phenomena: t can be changed to -t without modifying
the phenomena under study.

The only form of explanation of phenomena is linear causality; that
is, the method of explanation relies on a logical sequence of cause and
effect that extends for its full dimension along the arrow of time.

In present modes of thought influenced by the systemic approach, the
concept of the fluid replaces that of the solid. Movement replaces permanence.
Flexibility and adaptability replace rigidity and stability. The concepts
of flow and flow equilibrium are added to those of force and force equilibrium.
Duration and irreversibility enter as basic dimensions in the nature of
phenomena. Causality becomes circular and opens up to finality.[10]

The dynamics of systems shatters the static vision of organizations
and structures; by integrating time it makes manifest relatedness
and development.

Another table may help to enlighten and enrich the most important concepts
related to classic thought and systemic thought (Fig. 62).

Models and Simulation

The construction of models and the use of simulation are among the most
widely used methods of the systemic approach, to the extent that they
are often confused with the systemic approach itself.

Confronted with complexity and interdependence, we all use simple analogical
models. These models, established as part of an earlier analytical approach,
seek to unite the main elements of a system in order to permit hypotheses
concerning the behavior of the system as a whole- by taking into account
as much as possible the interdependence of the factors.

When the number of variables is small, we constantly use such analogical
models to understand a system of which we have little information or to
try to anticipate the responses or reactions of someone with a different
model of the situation. Our vision of the world is a model. Every mental
image is a fuzzy, incomplete model that serves as a basis for decision.

The construction of simple analogical models rapidly becomes impracticable
when large numbers of variables are involved. This is the case with highly
complex systems. The limitations of our brain make it impossible for us
to make a system "live" without the help of computers and simulation
systems, so we turn to these mechanical and electronic means.

Simulation tries to make a system live by simultaneously involving all
its variables. It relies on a model established on the basis of previous
analysis. Systems analysis, model building, and simulation are the three
fundamental stages in the study of the dynamic behavior of complex systems.

Systems analysis defines the limits of the system to be modeled,
identifies the important elements and the types of interactions between
these elements, and determines the connections that integrate the elements
into an organized whole. Elements and types of connections are classified
and placed in hierarchical order. One may then extract and identify the
flow variables, the state variables, positive and negative feedback loops,
delays, sources, and sinks. Each loop is studied separately, and its influence
on the behavior of the different component units of the system is evaluated.

Model building involves the construction of a model from data
provided by systems analysis. One establishes first a complete diagram
of the causal relations between the elements of the subsystem. (In the
Malthusian model on ( page 77 ) these
include the influences of birth rate on population and food rationing on
mortality.) Then, in the appropriate computer language, one prepares the
equations describing the interactions and connections between the different
elements of the system.

Simulation considers the dynamic behavior of a complex system.
Instead of modifying one variable at a time it uses a computer to set in
motion simultaneously groups of variables in order to reproduce a real-life
situation. A simulator, which is an interactive physical model, can also
be used to give in "real time" the answers to different decisions
and reactions of its user. One such simulator is the flight simulator used
by student pilots. Simulation is used today in many areas, thanks to the
development of more powerful yet simpler simulation languages and new interactive
means of communication with the computer (graphic output on cathode ray
tubes, high-speed plotters, input light pens, computer-generated animated
films).

Examples of the applications of simulation are to be found in many fields:
economics and politics (technological forecasting, simulation of conflicts,
"world models"); industrial management (marketing policy, market penetration,
launching a new product); ecology (effects of atmospheric pollutants,
concentration of pollutants in the food chain); city planning (growth of
cities, appearance of slums, automobile traffic); astrophysics (birth and
evolution of the galaxies, "experiments" produced in the atmosphere of a
distant planet); physics (the flow of electrons in a semiconductor,
resistance of materials, shock waves, flow of liquids, formation of waves);
public works (silting-in of ports, effects of wind on high-rise buildings);
chemistry (simulation of chemical reactions, studies of molecular structure);
biology (circulation in the capillaries, competitive growth between bacterial
populations, effects of drugs, population genetics); data processing
(simulation of the function of a computer before its construction);
operational research (problems of waiting lines, optimization, resource
allocation, manufacturing control); engineering (process control,
calculations of energy costs, calculations of construction costs); education
(simulated pedagogical practices, business games).

Despite the number and diversity of these applications, one must not
expect too much of simulation. It is only one approach among many, a
complementary method of studying a complex system. Simulation never gives
the optimum or the exact solution to a given problem. It only sets forth
the general tendencies of the behavior of a system, its probable directions
of evolution, while suggesting new hypotheses (see notes).

One of the serious dangers of simulation results from too much freedom
in the choice of variables. The user can change the initial conditions
"just to see what will happen." There is the risk of becoming
lost in the infinity of variables and the incoherent performances associated
with chance modifications. The results of simulation must not be confused
with reality (as is often the case) but, compared with what one knows of
reality, should be used as the basis for the possible modification of the
initial model. It is in this process of successive approximations that
the usefulness of simulation becomes apparent.

Simulation appears to be one of the most resourceful tools of the systemic
approach. It enables us to verify the effects of a large number of variables
on the overall functioning of a system; it ranks the role of each variable
in order of importance; it detects the points of amplification or inhibition
through which we can influence the behavior of the system. The user can
test different hypotheses without running the risk of destroying the system
under study-a particularly important advantage in the case of living systems
or those that are fragile or very costly.

Knowing that one can experiment on a model of reality rather than on
reality itself, one can influence the time variable by accelerating very
slow phenomena (social phenomena, for example) or slowing down ultrafast
phenomena (the impact of a projectile on a surface). One can influence
equally well the space variable by simulating the interactions that occur
in very confined volumes or over great distances.

Simulation does not bring out of the computer, as if by magic, more
than what was put into the program. The contribution of the computer rests
at a qualitative level. Handling millions of bits of information in a tiny
fraction of time, it reveals structures, modes, and tendencies heretofore
unobservable and which result from the dynamic behavior of the system.

Interaction between user and model develops a feeling of the effect
of interdependencies and makes it possible to anticipate better the reactions
of the models. Evidently this feeling exists for all those who have had
long experience in the management of complex organizations. One of the
advantages of simulation is that it allows the more rapid acquisition of
these fundamental mechanisms.

Finally, simulation is a new aid to decision making. It enables one
to make choices among "possible futures." Applied to social systems,
it is not directly predictive. (How does one take into account such impossible-to-quantify
data as well-being, fear, desire, or affective reactions?) Yet simulation
does constitute a sort of portable sociological laboratory with which experiments
can be made without involving the future of millions of men and without
using up important resources in programs that often lead to failure.

Certainly models are still imperfect. As Dennis Meadows observed, however,
the only alternatives are "mental models" made from fragments
of elements and intuitive thinking (see notes). Major political decisions
usually rest on such mental models.

The Dynamics of Maintenance and Change

The properties and the behavior of a complex system are determined by
its internal organization and its relations with its environment. To
understand these properties better and to anticipate its behavior better,
it is necessary to act on the system by transforming it or by orienting
its evolution.

Every system has two fundamental modes of existence and behavior: maintenance
and change. The first, based on negative feedback loops, is characterized
by stability. The second, based on positive feedback loops, is characterized
by growth (or decline). The coexistence of the two modes
at the heart of an open system, constantly subject to random disturbances
from its environment, creates a series of common behavior patterns. The
principal patterns can be summarized in a series of simple graphs by taking
as a variable any typical parameter of the system (size, output, total
sales, number of elements) as a function of time (Fig. 63).[11]

Dynamic stability: equilibrium in motion. Maintenance is duration.
Negative controls, by regulating the divergences of positive loops, help
to stabilize a system and enable it to last. Thus the system is capable
of self-regulation.

To bring together stability and dynamics might seem to be paradoxical.
In fact the juxtaposition demonstrates that the structures or the functions
of an open system remain identical to themselves in spite of the continuous
turnover of the components of the system. This persistence of form is dynamic
stability. It is found in the cell, in the living organism, in the flame
of a candle.

Dynamic stability results from the combination and readjustment of numerous
equilibriums attained and maintained by the system-that of the "internal
milieu" of the living organism, for example (see p. 42). We deal with
dynamic equilibriums; this imposes a preliminary distinction between balance
of force and balance of flow.

A balance of force results from the neutralization at the same
point of two or more equal and opposed forces. This might be illustrated
by two hands immobilized in handcuffs or by a ball lying in the bottom
of a basin (Fig. 64).

When there are two forces present-two armies or two governments-we
speak of the "balance of power." But a balance of force is a
static equilibrium; it can be modified only as the result of a discontinuous
change in the relationship of the forces. This discontinuity could lead
to an escalation when one force overpowers the other.

On the other hand, a balance of flow results from the adjustment
of the speeds of two or more flows crossing a measuring device. Equilibrium
exists when the speeds of the flows are equal and moving in opposite directions.[12]
This is the case of a transaction at a sales counter, where merchandise
is exchanged for money (Fig. 65).

A balance of flow is a dynamic equilibrium. It can be adapted, modified,
and modeled permanently by often imperceptible readjustments, depending
on disturbances or circumstances. The balance of flow is the foundation
of dynamic stability.

When equilibrium is achieved, a given "level" is maintained
over time (like the concentration of certain molecules in the plasma, or
the state of a bank account; see page 74). This particular state is called
a steady state; it is very
different from the static state represented by the level of water
in a reservoir having no communication with the environment (Fig. 66).

There are as many steady states as there are levels of equilibrium at
different depths of a reservoir. This makes it possible for an open system
to adapt and respond to the great variety of modifications in the environment.

Homeostasis: resistance to change. Homeostasis is one of the
most remarkable and most typical properties of highly complex open systems.
The term was created by the American physiologist Walter Cannon in 1932
(see page 43). A homeostatic system
(an industrial firm, a large organization, a cell) is an open system that
maintains its structure and functions by means of a multiplicity of dynamic
equilibriums rigorously controlled by interdependent regulation mechanisms.
Such a system reacts to every change in the environment, or to every random
disturbance, through a series of modifications of equal size and opposite
direction to those that created the disturbance. The goal of these
modifications is to maintain the internal balances.

Ecological, biological, and social systems are homeostatic. They oppose
change with every means at their disposal. If the system does not succeed
in reestablishing its equilibriums, it enters into another mode of behavior,
one with constraints often more severe than the previous ones. This mode
can lead to the destruction of the system if the disturbances persist.

Complex systems must have homeostasis to maintain stability and to survive.
At the same time homeostasis bestows on these systems very special properties. Homeostatic
systems are ultrastable; everything in their internal, structural, and
functional organization contributes to the maintenance of the same organization.
Their behavior is unpredictable; "counterintuitive" according
to Jay Forrester, or contravariant: when one expects a determined reaction
as the result of a precise action, a completely unexpected and often contrary
action occurs instead. These are the gambles of interdependence and homeostasis;
statesmen, business leaders, and sociologists know the effects only too
well.

For a complex system, to endure is not enough; it must adapt itself
to modifications of the environment and it must evolve. Otherwise outside
forces will soon disorganize and destroy it. The paradoxical situation
that confronts all those responsible for the maintenance and evolution
of a complex system, whether the system be a state, a large organization,
or an industry, can be expressed in the simple question, How can a stable
organization whose goal is to maintain itself and endure be able to change
and evolve?

Growth and Variety. The growth of a complex system-growth in
volume, size, number of elements-depends on positive feedback loops and
the storage of energy. In effect a positive feedback loop always acting
in the same direction leads to the accelerated growth of a given value
(see page 73). This value can be number
(population growth), diversity (variety of elements and interactions
between elements), or energy (energy surplus, accumulation of profits,
growth of capital).

The positive feedback loop is equivalent to a random variety generator.
It amplifies the slightest variation; it increases the possibilities of
choice, accentuates differentiation, and generates complexity by increasing
the possibilities for interaction.

Variety and complexity are closely allied. Variety, however, is one
of the conditions for the stability of a system. In fact homeostasis can
be established and maintained only when there is a large variety of controls.
The more complex a system, the more complex its control system must be
in order to provide a "response" to the multiple disturbances
produced by the environment. This is the law of requisite variety proposed
by Ross Ashby in 1956 (see notes). This very general law asserts in
mathematical form that the regulation of a system is efficient only when
it depends on a system of controls as complex as the system itself.
In other words, control actions must have a variety equal to the variety
of the system. In ecology, for example, it is the variety of species, the
number of ecological niches, the abundance of interactions among species
and between community and environment that guarantee the stability and
continuance of the community. Variety permits a wider range of response
to potential forms of aggression from the environment.
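Ashby's law can be illustrated in its simplest counting form, under the assumption that each regulator response can at best neutralize an equal share of the disturbances:

```python
# Law of requisite variety, counting form: a regulator with R distinct
# responses facing D distinct disturbances can reduce the variety of
# outcomes to no fewer than ceil(D / R) states.
import math

def minimum_outcome_variety(disturbances, responses):
    # Best case: each response absorbs its share of the disturbances.
    return math.ceil(disturbances / responses)

# Only a regulator as varied as the disturbances can pin the system
# to a single outcome; any shortfall leaves residual variety.
full = minimum_outcome_variety(10, 10)   # -> 1
half = minimum_outcome_variety(10, 5)    # -> 2
```

"Only variety can destroy variety": halving the regulator's repertoire doubles the variety the system must endure.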

The generation of variety can lead to adaptations through increase in
complexity. But in its confrontation with the random disturbances of the
environment, variety also produces the unexpected, which is the
seed of change. Growth is then both a force for change and a means for
adapting to the modifications of the environment. Here one begins to see
the way in which a homeostatic system can evolve as a system constructed
to resist change. It evolves through a complementary process of total or
partial disorganization and reorganization. This process is produced either
by the confrontation of the system with random disturbances from the environment
(mutations, events, "noise") or in the course of readjustment
of an imbalance (resulting, for example, from too rapid growth).

Evolution and emergence. Living systems can adapt, within certain
limits, to sudden modifications coming from the outside world. A system
actually has detectors and comparators that enable it to detect signals
from within or without and to compare the signals to equilibrium values.
When there are discrepancies, the emission of error signals can help to
correct them. If it cannot return to its former state of homeostatic equilibrium,
the system, through the complementary play of positive and negative feedback
loops, searches for new points of equilibrium and new stationary states.

The evolution of an open system is the integration of these changes
and adaptations, the accumulation in time of successive plans or "layers"
of its history.[13]
This evolution materializes through hierarchical levels of organization
and the emergence of new properties. The prebiological evolution (the genesis
of living systems) and the biological and social evolutions are examples
of evolution toward levels of increasing complexity. At each level new
properties "emerge" that cannot be explained by the sum of the
properties of each of the parts which constitute the whole. There is a
qualitative leap, the crossing of a threshold: life, reflective thought,
and collective consciousness.

Emergence is linked to complexity. The increase in the diversity of
elements, in the number of connections between these elements, and in the
play of nonlinear interactions leads to patterns of behavior that are difficult
to predict - especially if they are founded solely on the properties of
the elements. We know, for example, the properties of each of the amino
acids that make up the protein chain. But because of the convolutions of
this chain, certain amino acids that are far apart in the sequence find
themselves together in space. This situation gives the protein emergent
properties that enable it to recognize certain molecules and to catalyze
their transformation. This would be impossible if the amino acids were
present in the milieu but not arranged in the proper order-or if the chain
were straightened out.

The "Ten Commandments" of the Systemic Approach

The systemic approach has little value if it does not lead to practical
applications such as facilitating the acquisition of knowledge and improving
the effectiveness of our actions. It should enable us to extract from the
properties and the behavior of complex systems some general rules for understanding
systems better and acting on them.

Unlike juridical, moral, or even physiological laws, which one can
sometimes evade, a misreading of the basic systemic laws can result in
serious error and perhaps lead to the destruction of the system within
which one is trying to act. Of course many people will have an intuitive
knowledge of these laws, which are very much the result of experience or
simple common sense. The following are the "ten commandments"
of the systemic approach.

1. Preserve variety. To preserve stability one must preserve
variety. Any simplification is dangerous because it introduces imbalance.
Examples abound in ecology. The disappearance of some species as a consequence
of the encroaching progress of "civilization" brings the degradation
of the entire ecosystem. In some areas intensive agriculture destroys the
equilibrium of the ecological pyramid and replaces it with an unstable
equilibrium of only three stages (grain, cattle, and man) controlled by
a single dominant species. This unbalanced ecosystem tries spontaneously
to return to a state of higher complexity through the proliferation of
insects and weeds-which farmers prevent by the widespread use of pesticides
and herbicides.

In economy and in management, excessive centralization produces a simplification
of communication networks and the impoverishment of the interactions between
individuals. There follow disorder, imbalance, and a failure to adapt to
rapidly changing situations.

2. Do not "open" regulatory loops. The isolation of
one factor leads to prompt actions, the effects of which often disrupt
the entire system. To obtain a short-term action, a stabilizing loop or
an overlapping series of feedback loops is often "cut open"-in
the belief that one is acting directly on the causes in order to control
the effects. This is the cause of sometimes dramatic
errors in medicine, economy, and ecology.

Consider some examples of what happens in the rupture of natural cycles.
The massive use of fossil fuels, chemical fertilizers, or nonrecyclable
pesticides allows the agricultural yield to grow in the short term; in
the long term this action may bring on irreversible disturbances. The fight
against insects leads as well to the disappearance of the birds that feed
on the insects; the result in the long term is that the insects return
in full force-but there are no birds. The states of waking, sleeping, and
dreaming are probably regulated by the delicate balance between chemical
substances that exist in the brain; by regularly introducing, for short-term
effect, an outside foreign molecule such as a sleeping pill, the natural
long-term mechanisms are inhibited-worse, there is the danger of upsetting
them almost irrevocably: persons accustomed to using barbiturates must
undergo a veritable detoxification in order to return to a normal sleep
pattern.
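The insect-and-bird example can be sketched numerically. The following is a toy simulation with invented growth and predation rates, not ecological data; it shows the stock that a stabilizing loop holds in check returning in full force once that loop is cut:

```python
# Toy sketch of "opening" a regulatory loop (invented rates): insects
# held in check by birds boom once the predation loop is removed.

def insects_after(birds_present, insects=100.0, steps=50):
    for _ in range(steps):
        growth = 0.3 * insects * (1.0 - insects / 1000.0)  # capacity 1000
        predation = 0.25 * insects if birds_present else 0.0
        insects = insects + growth - predation
    return insects

loop_closed = insects_after(birds_present=True)   # held near a regulated level
loop_open = insects_after(birds_present=False)    # climbs toward capacity
```

With the loop closed, growth and predation balance at a modest population; with the birds gone, the same growth law carries the insects toward the full capacity of the milieu.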

3. Look for the points of amplification. Systems analysis and
simulation bring out the sensitive points of a complex system. By acting
at this level, one releases either amplifications or controlled inhibitions.

A homeostatic system resists isolated measures, whether immediate or sequential
(that is, measures that wait for the results of preceding ones before new
ones are taken). One of the methods that can influence the system and cause it
to evolve in a chosen direction is the use of a policy mix. These
measures must be carefully proportioned in their relationships and applied
simultaneously at different points of influence.

One example is the problem of solid wastes. There are only three ways
to reduce the flow of the generation of solid wastes by acting on the valve
(the flow variable): reducing the number of products used (which would
mean a drop in the standard of living), reducing the quantity of solid
wastes in each product, or increasing the life expectancy of the products
by making them more durable and easier to repair. The simulations performed
by Jørgen Randers of MIT show that no one measure alone is enough (
see notes ). The best results came from a policy mix, a combination
of measures used at the same time: a tax of 25 percent on the extraction
of nonrenewable resources, a subsidy of 25 percent for recycling, a 50
percent increase in the life of the products, a doubling of the recyclable
portion per product, and a reduction in primary raw material per product
(Fig. 67).
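Randers's conclusion-that no single measure is enough-can be illustrated with a toy stock-flow calculation. The parameters below are invented for illustration and are not those of the MIT model:

```python
# Toy stock-flow calculation for the solid-waste "policy mix"
# (illustrative parameters, not those of the MIT simulations).

def waste_flow(products=100.0, material_per_product=1.0,
               lifetime_years=1.0, recycled_fraction=0.0):
    """Yearly flow of solid waste (the flow variable at the 'valve')."""
    discarded = products * material_per_product / lifetime_years
    return discarded * (1.0 - recycled_fraction)

baseline = waste_flow()                            # no measure taken
one_measure = waste_flow(lifetime_years=1.5)       # 50% longer product life only
policy_mix = waste_flow(material_per_product=0.8,  # less raw material per product
                        lifetime_years=1.5,        # 50% longer product life
                        recycled_fraction=0.5)     # half the waste recycled
```

Because the measures act multiplicatively on the same flow, the combination cuts the waste flow far more than any single measure applied alone, which is the point of the policy mix.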

4. Reestablish equilibriums through decentralization. The rapid
reestablishment of equilibriums requires the detection of variances where
they occur and corrective action that is carried out in a decentralized
manner.

The correction of the body's equilibrium when we stand is accomplished
by the contraction of certain muscles, without our having to think
about it and often without the brain intervening at all. Enzymatic regulation
networks show that the entire hierarchy of levels of complexity intervenes
in the reestablishment of balance ( recall the example
of the service station on page 51 ). Often corrective action has been
taken even before one is conscious of taking it. The decentralization
of the reestablishment of equilibriums is one application of the law of
requisite variety. It is customary in the body, the cell, the ecosystem.
But so far it appears that we have not succeeded in applying this law to
the organizations that we have been assigned to manage.

5. Know how to maintain constraints. A complex open system can
function according to different modes of behavior. Some of them are desirable;
others lead to the disorganization of the system. If we want to maintain
a given behavior that we consider preferable to another, we must accept
and maintain certain kinds of constraints in order to keep the system from
turning toward a less desirable or a dangerous mode of behavior.

In the management of the family budget one can choose a high style of
living (living beyond one's means), with the constraints that it implies
with respect to banks and creditors. Or one can choose to limit expenditures
and do without goods one would like to possess-a different set of constraints.

In the case of a nation's economy, those responsible for the economic
policy choose and maintain the constraints that result from inflation with
all their injustices and social inequalities-for they are judged a lesser
evil than those brought about by unemployment.

At the level of the world economy the growth race entails social inequalities,
depletion of resources, and pollution. Theoretically, however, it allows
a more rapid increase in the standard of living. The transition to
a "stationary" economy would imply the choice of new constraints,
founded on privation and a reduction in the standard of living and the
imposition of more complex, more delicate, and more decentralized forms
of control and regulation than in a growth economy. These means would call
for increased responsibility on the part of each citizen.

Liberty and autonomy are achieved only through the choice and application
of constraints; to want to eliminate constraints at any price is to risk
moving from an accepted and controlled state of constraint to an uncontrollable
state that will lead rapidly to the destruction of the system.

6. Differentiate to integrate better. Every real integration
is founded on a previous differentiation. The individuality, the unique
character of each element is revealed in the organized totality. This is
the meaning of Teilhard de Chardin's famous phrase, "union differentiates."
This law of the "personalizing" union is illustrated by the specialization
of cells in the tissues or the organs of the body.

There is no true union without antagonism, balance of power, conflict.
Homogeneity, mixture, and syncretism are forms of entropy. Only union through
diversity is creative; it increases complexity and leads to higher levels
of organization. This systemic law and its allied constraints are well
known by those whose purpose is to unite, to assemble, to federate. Antagonism
and conflict are always born of the transition to a unified entity. Before
regrouping diversities, we must decide how far to push the process of
personalization. Union attempted too soon leads to a homogenizing
and paralyzing mixture; attempted too late, it leads to a confrontation
of individualisms and personalities-and perhaps to a dissociation still
greater than what had formerly existed.

7. To evolve, allow aggression. A homeostatic (ultrastable) system
can evolve only if it is assaulted by events from the world outside. An
organization must then be in a position to capture the germs of change
and use them in its evolution-which obliges it to adopt a mode of functioning
characterized by the renewal of structures and the mobility of men and
ideas. In effect, all rigidity, sclerosis, or perpetuation of structures
or hierarchies clearly works against the system's capacity to evolve (
see notes ).

An organization can maintain itself in the manner of a crystal or that
of a living cell. The crystal preserves its structure by means of the balance
of forces that cancel out each other in every node of the crystalline network-and
by redundancy, or repetition of patterns. This static state, closed to
the environment, cannot withstand change in its milieu: if the
temperature rises, the crystal becomes disorganized and melts. The cell,
however, is in dynamic equilibrium with its environment. Its organization
is founded not on repetition but on the variety of its elements. An open
system, it maintains a constant turnover of its elements. Variety and mobility
enable it to adapt to change.

The crystal-like organization evolves only through the shock of
radical and traumatic reforms. The cell-like organization tries to make
the most of events, variety, and the openings into the outside world. It
is not afraid of a passing disorganization-the most efficient condition
for readaptation. To accept this transitory risk is to accept and to want
change. For there is no real change without risk.

8. Prefer objectives to detailed programming. The setting of
objectives and rigorous control-as opposed to detailed programming at every
step-is what differentiates a servomechanism from a rigidly programmed
automatic machine. The programming of the machine must foresee all disturbances
likely to occur in the course of operation. The servomechanism, however,
adapts to complexity; it needs only to have its goal set without ambiguity
and to establish the means of control that will enable it to take corrective
measures in the course of action.
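The contrast between the two modes can be sketched in a few lines of code. In this toy model (invented gains and an invented disturbance), the rigidly programmed machine executes a precomputed plan and is thrown off by a drift it did not foresee, while the servomechanism, given only its goal, corrects the measured error at each step:

```python
# Toy contrast between a rigidly programmed machine and a servomechanism
# (invented gains and disturbance; a sketch, not a control-theory model).

def programmed(steps):
    """Fixed program: executes a precomputed plan, blind to disturbances."""
    position = 0.0
    for _ in range(steps):
        position += 1.0   # the planned step toward the goal
        position -= 0.4   # an unforeseen drift the program never anticipated
    return position

def servo(goal, steps):
    """Servomechanism: only the goal is set; each step corrects the error."""
    position = 0.0
    for _ in range(steps):
        position += 0.5 * (goal - position)  # corrective feedback action
        position -= 0.4                      # the same unforeseen drift
    return position

# With a goal of 10.0 in 10 steps, the fixed program ends around 6.0,
# while the servomechanism ends near 9.2 despite the drift.
```

The servomechanism needs no forecast of the disturbance; the feedback loop absorbs it, which is exactly why objectives and control matter more than detailed programming.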

These basic principles of cybernetics apply to every human organization.
The definition of objectives, the means of attaining them, and the determination
of deadlines are more important than the detailed programming of daily
activities. Minutely detailed programming runs the risk of being paralyzing;
authoritarian programming leaves little room for imagination and involvement.
Whatever roads are taken, the important thing is to arrive at the goal-provided
that the well-defined limits (necessary resources and total time allotted
to operations) are not exceeded.

9. Know how to use operating energy. Data sent out by a command
center can be amplified in significant proportions, especially when the
data are relayed by the hierarchical structures of organizations or by
diffusion networks.

At the energy level the metabolism of the operator of a machine is negligible
compared to the power that he can release and control. The same applies
to a manager or to anyone in charge of a large organization. We must distinguish,
then, between power energy and operating energy. Power energy
is represented by the electric line or the current that heats a resistor,
or by the water pipe that carries water under pressure to a given point.
Operating energy expresses itself in the action of the thermostat or the
water tap: it represents information.

A servomechanism distributes its own operating energy through the distribution
of information that commands its operational parts. In the same way the
leader of an organization must help his own system to distribute its operating
energy. To accomplish this he establishes feedback loops to the decision
centers. In the management of an industry or in the structure of a government,
these regulatory loops are called self-management (autogestion),
participation, or social feedback.[14]

10. Respect response times. Complex systems integrate time into
their organization. Each system has a response time characteristic of that
system, by reason of the combined effects of feedback loops, delays at
reservoirs, and the sluggishness of flows. In many cases, especially in
industry, it is useless to look for speed of execution at any price, to
exert pressure in order to obtain responses or results. It is better to
try to understand the internal dynamics of the system and to anticipate
delays in response. This type of training is often acquired in the actual
running of large organizations. It gives rise to a sense of timing:
knowing when to begin an action, neither too soon nor too late, but
at the precise moment the system is ready to move in one direction or the
other. A sense of timing allows the best possible use of the internal energy
of a complex system-rather than imposing instructions from outside
against which the system will react.
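A toy simulation (invented gain and delay values) makes the point about response times: when corrective pressure is applied to a system that reports its state only after a delay, too much pressure produces overshoot and oscillation, while gentler action matched to the delay converges quietly:

```python
# Toy illustration of response time (invented gain and delay values):
# the controller sees the stock only after a delay of three readings.

def peak_overshoot(gain, delay=3, steps=60, target=100.0):
    history = [0.0] * (delay + 1)        # only old values are visible
    for _ in range(steps):
        observed = history[-delay - 1]   # delayed reading of the stock
        history.append(history[-1] + gain * (target - observed))
    return max(history) - target         # positive => overshoot, oscillation

impatient = peak_overshoot(gain=0.6)  # strong pressure: large overshoot
patient = peak_overshoot(gain=0.1)    # action matched to the delay: none
```

The impatient controller keeps pushing on a state it has already changed but cannot yet see; the patient one lets each correction take effect before adding the next. This is the sense of timing in miniature.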

Avoiding the Dangers of the Systemic Approach

To be useful, the systemic approach must be demystified; what is useful
in daily life must not be reserved for a small elite. The hierarchy of
disciplines established in the nineteenth century, from the "most
noble" sciences (mathematics and physics) to the "least noble"
(the sciences of man and society), continues to weigh heavily on our approach
to nature and our vision of the world. Skepticism or distrust of the systemic
approach is found among those-mathematicians and physicists-who have received
the most advanced theoretical training. At the same time, those who by
nature of their research have been accustomed to think in terms of flow,
transfer, exchange, and irreversibility-biologists, economists, and ecologists-assimilate
more naturally the systemic concepts and communicate more easily among
themselves.

To demystify further the systemic approach and to enable it to remain
a transdisciplinary attitude, a training in the mastery of
complexity and interdependence, it may be necessary to get rid of the very
terms systemic approach and systemic method. The global
vision is not reserved for the few with wide responsibility-the philosophers
and the scientists. Each one of us can see things in perspective. We must
learn to look through the macroscope to apply systemic rules, to construct
more rigorous mental models, and perhaps to master the play of interdependencies.

And we must not hide the dangers of a too systematic use of the systemic
approach. A purely descriptive approach-the "what is linked to what?"
method-leads rapidly to a collection of useless models of the different
systems of nature. Excessive generalization of the concept of system
can also turn against itself, dissolving its fecundity into sterile platitude.
In the same way the uncontrolled use of analogies, homologies, and
isomorphisms can result in interpretations that complicate rather than
enlighten. Such interpretations are founded on superficial resemblances
rather than on principles and fundamental laws that are common to all systems.
According to Edgar Morin, "too much unification can become abusive
simplification, then an idée fixe or a turn of phrase"
( see notes ).

Once again we are confronted with the danger of dogmatism. The systemic
approach can harden into an intransigent systematism or a reductionist biologism.
There is danger of our being seduced by models that were conceived as ends
of reflective thought, not as points of departure for research. We are
tempted by the too simplistic transposition of models or biological laws
to society.[15]
The cybernetics of regulation at the molecular level offers general models,
some aspects of which are transposable, with certain restrictions, to social
systems. The greatest weakness of these models is that they apparently
cannot take into account the relationship between force and the conflicts
that arise between the elements of every socioeconomic system. The economist
J. Attali remarked on this at a meeting of the Group of Ten devoted to
the maintenance of biological and social equilibriums: "Unlike the
sociologist, the biologist observes systems with well-established laws:
they do not change as they are being studied. As for molecules, cells,
or microbes, they will never complain of their condition!"

One of the greatest dangers that menace the systemic approach is the
temptation of the "unitary theory," the all-inclusive model with
all the answers and the ability to predict everything. The use of mathematical
language, which by nature and vocation generalizes, can lead to a formalism
that isolates the systemic approach instead of opening it up to the practical.
The General System Theory does not escape this danger. Sometimes it becomes
locked into the language of graph theory, set theory, game theory, or information
theory; sometimes it is nothing more than a collection of descriptive approaches
that are often illuminating but have no practical application.

The functional systemic approach offers one way of bypassing these alternatives.
It avoids the dangerous stumbling blocks of paralyzing reductionism and
total systematism; it clears the way for the communication of knowledge,
for action, and for creation. For the communication of knowledge because
the systemic approach has a conceptual framework of reference that helps
to organize knowledge as it is acquired, reinforces
its memorization, and facilitates its transmission. For action because
the systemic approach provides rules for confronting complexity and because
it assigns to their hierarchical order the elements that are the basis
for decisions. And for creation because the systemic approach catalyzes
imagination, creativity, and invention. It is the foundation of inventive
thought (where the analytical approach is the foundation of knowledgeable
thought). Tolerant and pragmatic, systemic thought is open to analogy,
metaphor, and model-all formerly excluded from "the scientific method"
and now rehabilitated. Everything that unlocks knowledge and frees imagination
is welcomed by the systemic approach; it will remain open, like the systems
it studies.

The earth shelters the embryo of a body and the beginnings of a spirit.
The life of this body is maintained by the great ecological and economic
functions brought together in the ecosphere. Collective consciousness emerges
from the simultaneous communication of men's brains; it constitutes the
noosphere ( see notes ).

Ecosphere and noosphere have energy and information as a base. Action
is the synthesis of energy and information. But all action requires time.
Thus time is the link between energy, information, and action. The following
chapters will be devoted to such a global approach toward energy, information,
and time-through trying to envisage old problems from a new perspective.

[1]
Bionics attempts to build electronic machines that imitate the functions
of certain organs of living beings.

[2]
IBM subsequently used such memories in all its computers. This type of
memory (for which Forrester still holds all major patents) is in the process
of being replaced by semiconductor memories. (The former type is still
found in most computers today.)

[4]
The word was first used by Plato in the sense of "the art of steering"
or "the art of government." In 1834 Ampere used the word cybernetics
to denote "the study of ways of governing."

[5]
I do not consider here systems of concept or mechanical systems run by
man, but instead systems of high complexity, such as living, social, or
ecological systems.

[6]
The variety defined by W. Ross Ashby is "the number of different elements
that make up a system or the number of different relationships between
these elements or the number of different states of these relationships."
The variety of a relatively simple system, made up of seven elements connected
by two-way relationships and having two different sets of conditions, will
be expressed by the enormous number of 2^42. What can be said
of these interactions woven together in the heart of the cellular population
( see page 48 ) and in much greater number
in the heart of society?
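The count in this note can be checked in a line of arithmetic: seven elements form 7 x 6 = 42 ordered two-way relationships, and with two possible states for each relationship the number of configurations is 2^42:

```python
# Checking the variety count from this note: seven elements, two-way
# relationships, two possible states per relationship.
elements = 7
relationships = elements * (elements - 1)  # 7 x 6 = 42 ordered pairs
variety = 2 ** relationships               # two states per relationship
# variety == 4398046511104, about 4.4 trillion configurations
```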

[7]
This symbolic representation was inspired by the one developed by Jay Forrester
and his group at MIT in simulation models (
see notes ).

[9]
This state of equilibrium is accomplished even though the account is emptied
and refilled every month. (One could assume that in effect wages were being
paid and deposited daily.)

[10]
Numerous points only mentioned here will be taken up again in the following
chapters.

[11]
It must be remembered that the overall behavior of the system is the result
of the individual behavior patterns of subsystems, patterns themselves
determined by the interconnection of a large number of variables.

[12]
Or when flows have opposite effects even though they move in the same direction
(as a reservoir filling and emptying at the same time).

[15]
The danger of too direct transpositions from the biological to the social
realm was clearly perceived by Friedrich Engels when he wrote to the
Russian sociologist and journalist Piotr Lavrov in 1875: "The essential
difference between human society and animal society is that animals,
at best, collect while men produce. This unique but major difference prohibits
in its own right the transfer pure and simple of the laws of animal societies
to the social systems of men" ( see notes
). The work of A. J. Lotka in 1925 on the dynamics of population and
the work of V. Volterra in 1931 on the mathematical theory of the struggle
for life have subsequently shown that we must be less dogmatic than Engels
with respect to transfers from the biological to the social realms.