"Bring me into the company of those who seek the truth
· · · and deliver me from those who have found it."

Motivation: As I slowly die from bone-marrow cancer,
what hurts most is watching the world go to hell,
and knowing that most Americans --
by our complicity in producing the status quo --
remain part of the problem, rather than part of the solution.
Having changed the world (with tiny groups of other people)
in ways that are observable at the national level, I understand the
power -- and therefore, the responsibility -- that each
person has. No one can know in advance which
(if any) of her efforts will bear fruit, but no one can
justify inaction with the excuse that, "You can't change the world".

Indeed, our daily actions as Americans -- habits as mundane as
driving and working at certain jobs -- are changing the world
· · · for the worse. Because behavioral habits
both inculcate and reflect habits of mind, it's vital that we learn how our
mental patterns are shaped by our technologies and by
our society's information environment, and how
those, in turn, are shaped by Information Warfare.
I'm building these web pages in hopes that some of what I've learned
will be useful to those who are strong enough to accept our
responsibility for social change, and who are honest enough
to understand the urgent need to change things
for the better -- soon -- before we pass irreversible tipping points.

This page introduces some ideas, and links to some of my writings.
The unifying theme is the political philosophy
of technologies; in particular, the psychology and the
ethics* of various social or technological alternatives.
A special focus is the relation between dysfunctional democracy
and domestic information warfare.
This is real-world stuff, not useless academic theory.
The enormous changes enabled by Information Technology -- coupled with our
failure to advance Information Ethics -- have become central
to how we live, and to who we are -- as individuals and as societies.

*"Ethics" and "morality" sometimes are used in attempts
to manipulate people. My objective is the opposite -- how and why not
to manipulate others. I hope this material will raise awareness of the
process and the consequences when we allow ourselves to be manipulated,
and when we engage in manipulation (perhaps without fully intending it).
More awareness may prompt us to pursue genuine ethical questions:
What is a good way of life in this technologically-transformed world?
How can we negotiate new Social Contracts to make our society more free
of manipulation, so that more people (both manipulators and manipulatees)
can live a better life?
To the "realistic" domestic Information Warriors who think me naively idealistic:
&nbsp I've "been there, done that". I was effective enough that in 1992, the Governor tried -- and failed -- to fire me from the Univ. of Calif.
My argument is pragmatic: If someone uses unethical Info-war tactics,
others feel justified in using more unethical tactics. And although
some of us can win at that game -- temporarily -- it's only with major
collateral damage and blowback.
In the long run, none of us truly can win a race to the bottom;
and we can't even break even. (The only thing we can break is the playing field,
and that's exactly what's been happening to our politics, our economics,
and our environment.)
So we need to "stand down" and explore how to disengage --
to quit the manipulative Info-Wargames -- and transition to forms of competition
that are more productive as a society, and that support (rather than degrade)
our integrity as individuals.

A recurring theme is the limitations of our mental models --
including words, languages, concepts, logical abstractions,
and ideologies (political, economic, religious, and scientific).
Every model is an abstraction tool that both organizes,
and distorts reality.
A model simplifies, divides into foreground (signal) and background (noise),
imposes boundaries between objects and distinctions between categories, and
defines what "legitimate" things exist. Every model channels our attention,
trains our perceptions, and colors our thoughts, emotions, and memories.
Henry David Thoreau summarized the relationship between society and technology
in this quote, but it applies to every model, especially
language (arguably our most problematic technology):

In a very real sense, we do not define the words and concepts we use;
rather, we are defined by the words and concepts
that use us.
The notion of a unified, conscious, sovereign "self" is especially
problematic. Not only does it impoverish our inner psychological experience
and distort our external relations with other individuals;
worse, it blinds us to our own cognitive vulnerabilities:
By denying the very existence of such security flaws, it interferes
with our ability to understand and implement our own "free will" or "self-interest".
Such unwarranted assumptions about "human nature" and moral agents
render dubious the real-world relevance of our ethical theories.
I'm acutely aware of the risks in applying models of computation
to human minds. Yet it is instructive to consider society not only as
people who "freely choose" to run certain mental programs,
but also as programs running people. A computer-security perspective
on a networked grid of multi-processor brains -- in which
individual people participate in collective social computations -- carries
many implications for integrity, as emotional "buttons",
cognitive vulnerabilities, and "noisy" communication channels are exploited to
amplify social, economic, or political power by herding semi-captive "Bot-nets"
of human minds. (The computational integrity of such "dittoheads" may be
compromised by steadfast political or religious ideologues,
or by more flexible, ideologically-opportunistic Info-warlords.)

An individual may initiate most mental programs of his own free will.
But even mental programs one creates personally typically call
on "library routines" of shared "cultural software".
[SocialSW]
How safe is it to assume that
all such mental software will respect our individual autonomy, and not
engage in covert "perception management" by
manipulating the desirability of options we then "freely choose"?
How vulnerable is each particular individual to what might be viewed as
the (irrational and conflict-ridden)
"central planning" of consciousness?
(Note: some "central planning" is intentional.
But collective cultural software often evolves with little, if any,
conscious intent, and the context for which that software tool evolved
to be useful may no longer exist.)

Most political questions and debates arise from different viewpoints on the
nature of the individual ("human nature"), and the
relation between the individual and the collective (e.g., society or the state).
We can gain new insights by viewing the individual "embedded"
[embed]
in a society
as running a dynamic, shared Operating System (OS) -- namely,
"cultural software", of which there are many ideological variants.
And (as with Microsoft) many users get their OS automatically updated
without their knowledge or consent. Even worse, our shared OS's
are not under any "rational" centralized control, but are unplanned,
emergent artifacts that are modified by our own domestic Info Wars!

Each individual exhibits "bounded rationality" under some circumstances,
but also has security vulnerabilities that are frequently exploited
(via political disinformation campaigns, emotional "button-pushing",
consumer advertising, etc.). I argue that many individuals have
evolved, converged, or been pushed to degenerate into states where
their cultural/ideological OS is configured for optimal stability against
ideological attacks.
[escalation]
But the price of that stability is our denial
(or self-censorship, or semi-passive "ignore-ance") of
incoming information that conflicts with our stable ideological world-views,
and a political polarization that is collectively dysfunctional.
(Hence our failure to adequately perceive and respond to the obvious
risks of climate change, debt bubbles, peak oil, and peak food.)
In America, our domestic Info-warfare -- our "culture wars", "political conflicts",
and "economic competition" -- has caused
the main ideological variants of our culture to enter stable disease states:
cultural pathology.
[C.path]
Lemmings are not the only mammals vulnerable to a destructive herd mentality --
the "madness of crowds" (and more sparsely-distributed groups)
occurred long before modern Bot-herders:

"Men ... think in herds; ... they go mad in herds,
while they only recover their senses slowly, and one by one."
        -- Charles Mackay, "Extraordinary Popular Delusions and the Madness of Crowds", 1841.

Why might you want to read this material? Here's a brief response:
The world is in big trouble. Because of our economic and technological
trajectories, humans worldwide have become dangerously interdependent.
We live in a crowded world that is increasingly unstable. Behaviors and
ways of life that formerly were acceptable are now rocking our overloaded lifeboat.
In 1950, world population was 2.5 billion, of whom 20% were malnourished.
In 2006, world population was about 6.5 billion, of whom 57% were malnourished.
[mal]
(Malnutrition is defined as
insufficient calories, and/or insufficient essential vitamins and minerals.)
Thus, about 7 times as many people now suffer malnutrition.
Is it "progress", that of these 4 billion additional people, 80% suffer malnutrition?
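A quick back-of-envelope check of the figures quoted above (the populations and rates are taken from the text; the arithmetic is just a sanity check):

```python
# Back-of-envelope check of the malnutrition figures quoted above.
pop_1950, rate_1950 = 2.5e9, 0.20   # 1950: 2.5 billion people, 20% malnourished
pop_2006, rate_2006 = 6.5e9, 0.57   # 2006: 6.5 billion people, 57% malnourished

malnourished_1950 = pop_1950 * rate_1950   # 0.5 billion
malnourished_2006 = pop_2006 * rate_2006   # ~3.7 billion

# Roughly 7x as many malnourished people in 2006 as in 1950.
ratio = malnourished_2006 / malnourished_1950

# Of the ~4 billion additional people, the share who are malnourished:
added_pop = pop_2006 - pop_1950
added_malnourished_share = (malnourished_2006 - malnourished_1950) / added_pop

print(round(ratio, 1), round(added_malnourished_share, 2))
```

The ratio works out to about 7.4, and the malnourished share of the added population to about 80%, consistent with the claims in the text.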
Despite a decade-long program by the U.N. Food and Agriculture Organization,
and a significant increase in average food consumption, the global economy
failed to equitably distribute benefits: the number of "food insecure" people
rose to 830 million.
In 2006, according to one U.N. authority, 36 million people died of starvation or
malnutrition. If correct, that was 58% of all human deaths globally.
[starv]
"They" have guns, and they have their own ideas about how they want to live.
Many of their ideas are unrealistic -- this world lacks the resources for
them to live like us · · · in fact, this world lacks the resources
for us to live like us, for much longer.
Some of "them" have coal, oil, or other fossil-fuel-based weapons of mass destruction.
But for humanity to continue using such technologies on a large scale -- and worse,
to increase our global
climate-disrupting emissions -- is to risk mutual assured destruction.
The human species probably will survive; its "civilizations" may not.

As individuals and as nations, our fates are linked via the global economy
and environment. But despite (and partly because of) the rapid
material "progress" (for some), our abilities to perceive and manage risk
have not kept pace with the new challenges.
Technologies have amplified our "freedoms" and our powers.
But this rapid change -- coupled with our unquestioning _faith_
that every technological change constitutes "progress" --
has distracted us -- as individuals, institutions,
and cultures -- from recognizing and growing to accept
our correspondingly-increased moral responsibilities.
How quickly can humanity -- collectively -- develop sufficient
cultural maturity to handle these challenges and risks,
despite the cognitive vulnerabilities of each and every individual
to information warfare? How can responsible and altruistic individuals
nudge our cultural software out of its diseased state, and induce rapid
cultural evolution toward sustainable ways of life?

"Everyone is involved in politics · · ·
a few people do politics;
everyone else has politics done to them."
Information Warfare is now pervasive; it is a frequent tool
of those who do politics,
and those who do economics.
Those who do Info-war already know a lot about it.
But most of those who have Info-war done to them
know little about it. Or worse,
Info-war has led them to believe they are not vulnerable to Info-war.
We're familiar with the idea that, "What you don't know can kill you".
But in today's heavily interdependent world, what you don't know,
can kill me · · · even if I live in another country.
(What you don't know can be especially dangerous to me,
if what you have been induced to "know"
about me or my society is not true.)
Your cognitive vulnerability to Info-war is not
merely your personal problem; it has become a
globalized public health problem.
In the short term, I might be able to manipulate you to serve my purposes.
But likewise, my opponents might be able to exploit your vulnerabilities
to work against my purposes.
In the long run, my security is improved when you learn
to better recognize all Info-war attacks, and defend your
cognitive integrity against them.

I'm not just using "Info-war" as a new name for old things.
Rather, Info-war serves both as an organizing principle, and a "lens"
to provide deeper insight into the structures and relationships
between old things (and "new improved" Info-war technologies).
Hopefully more people can learn from Info-war perspectives,
and this may help us find realistic paths
to a better future.
One important insight is that, as practiced within the U.S.,
information warfare has caused tremendous "collateral damage"
to the shared "semantic infrastructure" that is necessary to support
a functioning democracy. Whatever the original intentions of the
responsible Info-warfighters, the effect has been a "scorched earth"
policy, akin to sowing salt to render an enemy's fields infertile.
As a result, our shared semantic background environment
is becoming a more barren
"vast wasteland", riddled with dangerous contradictions that
ensnare unwary thinkers.
The semantic foreground objects capable of
germinating and growing in our semantic media -- and the "memes in
our minds" -- experience selection pressures that favor simple ideas
(and stunted emotions). These semantic seeds tend to be
encapsulated in closed ideological shells that provide comfort and
support · · · if we uncritically accept them,
and permit these programs to run in our cognitive matrix
without skeptical oversight.
Yet for humans to flourish as individuals
and as societies, more complex ideas, more nuanced emotions,
and (especially) more layers of uncertainty must be supported.
Mass-media reporters were
"embedded"
in the American military's
invasion of Iraq -- visibly, voluntarily, and temporarily. Nonetheless,
that battlefield environment colored their perceptions and influenced
their emotions, inevitably compromising their objectivity to some extent.
In America today, every person is
embedded
-- invisibly, involuntarily, and permanently -- in the
subtle domestic violence of Information Warfare protocols.

This unbalanced diet of disinformation long ago
compromised our personal integrity (transforming us from active citizens into
passive consumers), and degraded our ability to function collectively.
But previously, there always were safety valves -- environmental pressure-relief
frontiers to dilute our dysfunctions (often by exporting them onto other people).
But now the world is full; our populations' technological impacts
are exceeding the environment's ability to absorb them.
Democratic dysfunction may now be as dangerous as dictatorship.
(Arguably, the American public's acquiescence to the extreme
concentration of power in the Executive Branch reflects a simplistic wish
for a strong father figure to "make the trains run on time",
and to sustain "our way of life" -- a consumer fantasy
that never was sustainable.)

In the Western tradition, which attempts to define truth formally
in (meta-)languages and symbolic logics, that quote is attributed to
Aeschylus (525 BC - 456 BC). In the East, Sun Tzu's "Art of War"
conveyed a broader and more fundamental notion of truth:
Because meaning is not restricted to language,
non-symbolic information can be transmitted (e.g., visual "input"
via images). Thus, e.g., one can mislead or deceive an opponent by
individual "body language" or by the collective social posture of a group.
(Western models had an unhealthy fixation on words and linear logics,
and excluded emotions and cultural situational contexts.
Such abstractions are seriously flawed, because they have removed
the most important aspects of what it means to be
a social animal -- including human cultures, cognitive biases and vulnerabilities.
This is a prime example of how a culture's conceptual tool can be pushed
beyond its limits, producing invalid results -- even to the point of absurdity --
[barwise]
yet not be recognized for decades.
Although traditional Western models of "truth" are adequate to represent
isolated instances of "perfect linguistic communication" between individuals,
newer models are needed to account for emotions, ideologies, information warfare,
and other forms of collective social change.
Fortunately, Western logicians now have a more realistic model-theoretic semantics
with which to begin addressing these issues.)

Methods of Information Warfare (Info-war) preceded language by eons.
Alfred Russel Wallace suggested to Charles Darwin that some insects
evolved protective coloration mimicking a poisonous insect's appearance,
as a survival mechanism to deter predators.
And before multi-cellular organisms, viruses evolved sophisticated
molecular mimicry tactics to penetrate defenses --
i.e., cellular membranes -- enabling them to hijack and exploit
a victim's capability for information reproduction and distribution.

In higher animals, deceptive behaviors are not limited to mammals.
Ravens are notorious for raiding one another's food caches. Not
surprisingly, when caching food, ravens also exhibit sophisticated behaviors
that deceive other ravens about the location of their real food cache,
versus empty "dummy caches" they also visit.
[rav1]
Such behavior seems to be as "intentional" as anything else a raven does.
But does it constitute "tactical deception", or merely a hardwired
genetic pattern? An experiment placed ravens in an artificial environment,
with novel food sources. At least one raven quickly learned how to
use new items in the environment as tools of tactical deception,
indicating that ravens can use tools for Information Warfare.
[rav2]

Thus, camouflage to hide information, and
defensive and offensive schemes to transmit deceptive information
both evolved as Info-war tactics · · ·
even without conscious human intention.
This has profound implications for human morality and ethics.

As one example, China's "Great Leap Forward" led to a famine in which roughly
30 million people died between 1958 and 1961. Although the analysis
[sen]
by Nobel-prizewinning economist Amartya Sen does not use
the phrase, "information warfare", the central government's failure to correct its
policies was, in fact, caused by its own Info-war: Censorship of the press meant
that "negative news" regarding any particular local famine would not be
broadcast. As a result, each isolated commune that suffered famine believed that
its famine was a purely local phenomenon, one that would be blamed on that commune.
So each commune that suffered famine had a disincentive
to report its "negative news" to the central government --
and without accurate local reports, the central government
had no reason to believe that famine was deep and widespread.
(This follows a historical pattern of several thousand years,
in which provincial leaders and generals were reluctant to tell the Emperor bad news.)
This local "policy response" was a predictable game-theoretic consequence, producing a
de-facto Info-war regime of 2-way "negative news" censorship. Hence the central
government's "defensive" Info-war tactic -- by distorting one information flow
(perhaps partly to prevent panic and hoarding) -- unintentionally
escalated
[esc]
that Info-war into a social trap; a collective catastrophe.

Modern American governance exhibits a similar pattern, via a disincentive
for political candidates to report "negative news" to their bosses -- the electorate.
This is a key factor in American democratic dysfunction, since any candidate
who tells hard truths about reality-based limits,
risks losing the election to another candidate who tells comforting lies.
The electoral payoff matrix in this Prisoner's Dilemma reflects the polling data --
voters have been so bombarded by distorted "facts", and by denials of risks,
that Info-war has obliterated the boundary between reality and self-indulgent fantasy:
Many will vote for whichever candidate tells them what they want to hear.
Campaign consultants and media pundits remind politicians
about the fate of Walter Mondale, whose loss they attribute to his (true) statement,
"Mr. Reagan will raise taxes, and so will I. He won't tell you. I just did."
(America's current social trap reflects not merely
the ascendancy of Info-war tacticians in short-term political campaigns,
but also the pathological long-term effects of Info-war strategists, working
via think-tanks to control the ideological terrain of Political Economy.)
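The electoral trap described above can be sketched as a toy two-candidate game. The payoff numbers below are illustrative assumptions, not real polling data; they simply encode the claim that a truth-teller loses votes to a comfortable liar:

```python
# Toy model of the electoral trap described above: two candidates each choose
# to tell hard truths or comforting lies; payoffs are assumed vote shares
# (illustrative numbers only, not real polling data).
payoff = {
    # (candidate_A_strategy, candidate_B_strategy): (A_share, B_share)
    ("truth", "truth"): (50, 50),
    ("truth", "lies"):  (40, 60),   # the truth-teller loses to comforting lies
    ("lies",  "truth"): (60, 40),
    ("lies",  "lies"):  (50, 50),   # both lie; the electorate loses either way
}

def best_response(opponent_strategy):
    """Candidate A's best reply to a fixed opponent strategy, by vote share."""
    return max(["truth", "lies"],
               key=lambda s: payoff[(s, opponent_strategy)][0])

# "Lies" is the best response to either opponent strategy (a dominant
# strategy), so (lies, lies) is the stable outcome -- the social trap.
assert best_response("truth") == "lies"
assert best_response("lies") == "lies"
```

Under these assumed payoffs, lying dominates truth-telling for each candidate individually, even though both candidates (and the voters) would do no worse if both told the truth: the same structure as the commune-level censorship described earlier.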

Please Note: &nbsp By now, my usage of "Information Warfare"
has probably expanded beyond your implicit definition of that term.
So let me state, explicitly, that I apply Info-war perspectives
not only to overt conflicts, in which Friend and Foe clearly identify
their opposing roles, but also -- and especially -- to more complex,
dynamic, and ambiguous situations that have elements of
conflict, competition, collaboration, and cooperation.
Moreover, some participants may not recognize these complexities,
or may not know that some participants have different assumptions
about the roles and "rules".
Also very important is that a participant's behavior may be
interpreted as playing a role that this participant did
not intend to play, does not want to play,
and/or does not realize he is playing.

Our "unconscious" abilities to
truthfully/accurately communicate and interpret information
about our "non-cognitive" emotional postures and processes
likewise have profound implications for human morality and ethics.
Opinions will differ on exactly what constitutes "conscious intention"
when signaling others. For example, domestic cats
can raise all their hair follicles to appear larger and more formidable.
Conversely, when faced with overwhelming force (e.g., when
cornered by an angry human),
it is not uncommon for a cat to purr, presumably in order to
"manipulate" the opponent into a less threatening emotional state.
(A whole-systems approach -- "mutual modulation" rather than
"manipulation" -- offers a perspective less laden with intentionality.)
In any case, various abilities to signal to others
comprise the low-level protocols from which emergent behaviors
of Info-war conflict and cooperation arise.

One can view every cultural context of Political Economy
as the result of implicit Social Contracts;
the balance of power among various interests determines
the contracts' specific terms.
These Social Contracts serve to define the
acceptable "Rules of Engagement", by which we conduct our Info-wars.
The chronic "low-intensity conflict" of economic competition,
and the acute flare-ups of periodic political campaigns,
both comprise highly-evolved protocols of Information Warfare
· · · socially-acceptable forms of Civil War.
(These forms can remain fairly stable;
the protocols may change gradually by
covertly-coercive evolution,
reflecting gradual shifts in power balances, or
suddenly, by overtly-violent revolutions.)

In the broadest sense,
hermeneutics
inquires how, and to what extent, understanding and communication
of meaning are even possible. Specifically, how (and to what extent)
can an individual -- as the "product of a culture",
who is situated in a particular historical period -- understand
any person, any culture, or any historical period?
For my narrowly-focused, pragmatic purposes,
an adequate, reductionist "answer" to those weighty questions
is that an individual "thinks with" various conceptual tools,
the vast majority of which are shared cultural software.
The process of initially "loading" (learning) a tool,
of "updating" that tool, and of using that tool,
all change the individual. The human mind is an interpreter
of self-modifying code, and any code it runs will modify
that mind (to a greater or lesser extent).

We come full circle to the claim that we are used by our tools.
Seldom do we notice an "equal and opposite reaction".
Most often we are not even aware that we are changed.
Many changes are subtle or small, yet may produce large cumulative results.
(For example, many repetitions of a behavior may cause an unbreakable habit.)
Even when we ingest "mind-altering substances" that profoundly
change our physical behavior, we may remain unaware of our
altered state. The drunk person who insists their driving
is not impaired is an obvious example. A more subtle example
involves "surfing" the internet: As we all have discovered,
one interesting link leads to another, and unless we maintain
mental discipline, we may find hours have passed · · ·
hours in which we were diverted from our original internet goal.
By not closely monitoring our own mental state, we allowed
our original intent to be "hijacked" by pleasurable distractions.
(In retrospect, perhaps that diversion was worth our time; perhaps not.)

Cognitive Frameworks -- conceptual frames of reference
for interpreting the world -- are
the most important type of shared cultural software.

The individual human mind is an interpreter of conceptual code,
and as social beings, individuals often allow others to modify that code --
our conceptual tools are shareable within our (sub-)culture.
Interacting within a culture, we often, by default, "open our minds" and allow
our conceptual code to be "automatically" revised and updated:
For example, you already
"know" that similar poles of magnets repel each other, whereas opposite poles
of magnets attract each other. But -- in that conceptual context --
when I ask you
how it's possible for the "north" pole of your compass needle to be attracted
toward Earth's "North" magnetic pole, you may realize that you have harbored
a longstanding contradiction: The place we call Earth's "North" magnetic pole
must actually have a South magnetic polarity.
There · · · did you feel any conceptual changes?
Did the world shift beneath your feet, ever so slightly? Is your conceptual world shaken,
when you learn that occasionally, the Earth experiences magnetic flux reversals?
That is, Earth's "North" magnetic pole
(which, for thousands of years, has meandered near our "North" rotational pole)
reverses, relocating to near Earth's "South" rotational pole.
Ok, maybe you don't feel conceptually moved by Earth's magnetic flux reversals.
But it's a historical fact that the conceptual worldviews
of virtually every Westerner were profoundly changed by the
Copernican Revolution -- a "paradigm shift", or major revision in the
culture's interpretive context --
that replaced the concept of Earth as the center of a revolving universe,
with a concept of Earth as revolving around the Sun.

Referential integrity is the correctness of a link between
the syntactic form of a term (or the appearance of an object)
and its semantic function -- its meaning (its substance or significance).
This link occurs in two stages: To determine the meaning of a thing,
first we must use the appropriate "conceptual dictionary" or "reference book"
(frame of reference), and second,
we must match that thing to an appropriate item in the dictionary.
This is analogous to the integrity of a translation.
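The two-stage lookup described above can be sketched in code. The frames, terms, and definitions below are invented for the example; the point is only the structure: choosing the wrong dictionary corrupts meaning just as surely as choosing the wrong entry:

```python
# Toy illustration of the two-stage link described above: first select a
# frame of reference (a "conceptual dictionary"), then match the term
# within it. The frames and entries here are invented for the example.
frames = {
    "navigation": {"north": "the direction the needle's north-seeking end points"},
    "geophysics": {"north": "the region near Earth's rotational North Pole"},
}

def interpret(term, frame):
    """Stage 1: choose the frame. Stage 2: match the term within it."""
    dictionary = frames[frame]   # wrong frame => every lookup is suspect
    return dictionary[term]      # wrong entry => this lookup is wrong

# The same syntactic form yields different semantics in different frames:
print(interpret("north", "navigation"))
print(interpret("north", "geophysics"))
```

A failure at either stage -- the wrong dictionary, or the wrong entry -- breaks the link between form and meaning, which is exactly the integrity failure of a bad translation.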

When visiting a foreign country, if you translate
foreign words the wrong way -- mistakenly transposing the meanings of "North" and
"South" -- then your body will travel the wrong way, to an unintended destination.
(Being unfamiliar with the territory, you may not even recognize that your
body has arrived at the wrong destination.)
Obviously, if another person intentionally mis-translates those two
words when communicating with you, then their action constitutes an attack
on your semantic integrity -- they inserted Malicious Code into your mind.
But what if the other person unintentionally mis-translates
those two words, and you accept that "poisoned translation", and load that
"conceptual update" into your interpretive framework?
How can you know -- in advance -- which conceptual code revisions to accept,
and which to reject as flawed, unsafe, or so malicious that to interpret them
would risk hijacking your thought process?
By semantic substitutions to your conceptual cultural software,
a clever attacker can cause your mind to arrive at a wrong conclusion,
and being unfamiliar with the conceptual territory, you may not even be
aware that your thoughts, intentions, and/or emotions have been hijacked.

In George Orwell's Nineteen Eighty-Four, the Party slogan begins,
"War is peace. Freedom is slavery."
Although Orwell's dystopian notions of Double-think, and the Ministry of Truth,
convey a chilling and all-powerful regime of thought-control, the Party slogan,
itself, reveals that thought-control stopped at the halfway point: The ultimate goal
would be a statement, "Freedom is freedom," where Orwell's people
retain the same positive emotional connotations (feelings)
about the word, "freedom", but they interpret the denotational meaning
of "freedom" to be what we think of when we say "slavery."

Orwell wrote fiction. In the real world, it would be impossible for two opposite
concepts, such as North and South, or Freedom and Slavery, to completely transpose
their meanings &middot &middot &middot Right? Wrong. In fact, at least one
extremely important case has occurred with words we commonly use
(and are used by).
Yet probably less than 1 in 100 college graduates is even aware of this fact.
Those words are subjective and objective:
Their meanings have been switched 180 degrees, to the exact opposites
of what they formerly meant. This semantic switch did not require a
huge, ominous Ministry of Truth to accomplish. Rather, it was the work
of a single man -- Immanuel Kant (1724 - 1804).

The Copernican revolution was a major conceptual restructuring,
a "quake in the culture's interpretive context". People applied this
major update to their cultural software because Copernicus convinced them
that the apparent motion of the Sun is better understood as the
actual motion of the observer (on an Earth that revolves around the Sun).
That is, part of the Sun's appearance is not really a property of the Sun,
but rather is a distortion, induced by
interpretive bias in our own frame of reference.

But Kant's semantic reversal had
greater cognitive, emotional, and social impact than the Copernican revolution.
An article in "The National Catholic Register" (Jan-Feb. 1988),
"The Pillars of Unbelief - Kant" describes how,
"The simple citizens of his native Konigsburg, ...
nicknamed Kant 'The Destroyer' and named their dogs after him."
Indeed, whereas Copernicus had merely changed the human observer's
interpretive framework from stationary to moving,
Kant thoroughly transformed the entire culture's context by turning
the human observer's interpretive framework inside-out.
Because of Kant, what formerly had been "a physical object of inner experience"
instead became "an inner experience of a physical object".
(If the "surface" of my mind is flexible, "allowing impressions to sink in",
a physical object might leave its exact impression. But if my mind is rigid,
its unyielding interpretive framework will distort the physical object's appearance,
warping the facts to fit my pre-conceptions,
like forcing a liquid to conform to the shape of its container.) &nbsp
We cannot perceive any physical object directly, "as it really is";
we can perceive only our own inner experience of it;
only our own refracted interpretations.
The "impression in my mind" made by a physical object
depends -- to some unknown extent -- on the properties of my mind,
not merely the properties of the physical object.
Before Kant, the West used Descartes' conceptual reference frame,
a metaphysical dualism that posited an inner mind,
an outer world, and a distinct boundary between the two.
But Kant twisted that "reality" like a metaphysical
Moebius strip;
the presumably well-behaved wall dividing mind from world
was actually a surface like an epistemological
Klein bottle,
with no clear distinction between inside and outside.
It was impossible to know, with certainty, where
"interpretation" ended, and "reality" began.
&nbsp
Kant's conceptual update spread throughout Western culture
&nbsp &middot &middot &middot &nbsp
"The Destroyer" had turned the human mind, itself, inside-out.

In searching the web for references documenting the semantic reversal of
subjective and objective,
I found only fairly dense philosophical articles, written for other philosophers
who already know about that semantic reversal. So for clear documentation,
I offer the extended quote below, taken from a hardcopy book,
"Dictionary of Philosophy and Religion: Eastern and Western Thought" --
by William L. Reese (1980; Humanities Press):

object: &nbsp The meaning has been involved in a remarkable shift.
The ordinary current meaning of "object" is that which exists in its own right,
as a thing outside the mind. But initially, that was the meaning of the word,
"subject". &nbsp
[Initially, with Aristotle, things in the world were "subjects", which could
do actions (verbs), and have various predicates (adjectives) ascribed to them.] &nbsp
&nbsp &nbsp &middot &middot &middot &nbsp In the Middle Ages
[which still used Aristotle's conceptual frameworks]
&middot &middot &middot "objective" meant an "object of thought".
Thus an "object" was &middot &middot &middot within the mind, while
&middot &middot &middot a thing in the world was considered to be "subjective".
Descartes continues this usage &nbsp &middot &middot &middot &nbsp
The reversal of these meanings came with Kant &middot &middot &middot
Meinong and Husserl attempt a return to the original meaning
&middot &middot &middot
subject: &nbsp As in the case of "object", the term "subject"
has suffered a dramatic reversal in meaning. &nbsp &middot &middot &middot
&nbsp Kant marks the point where the reversal in meaning occurred.
Holding that "the thing which thinks" is nothing more than a transcendental subject
known only "through the thoughts that are its predicates", it is clear that
the center of awareness becomes the subject, and the object is forced outside
the realm of subjective awareness. &nbsp This is the meaning of "subject"
which has come into general use in the modern world.
&middot &middot &middot

Having flipped the world on its (magnetic) axis, stopped the universe from
revolving around the Earth by setting the Earth in motion,
and then turned the entire universe inside-out,
it should be a simple matter -- with a little semantic spin --
to turn Freedom into Slavery, and to that minor task we now proceed.
&middot &middot &middot

Informally, I define an "Ideology" as a closed mental framework;
facts or ideas that are inconsistent with that framework
are denied, ignored, or distorted until they fit.
(In science, we call this approach, "Drawing your curves,
then plotting your data".)
In practice, Religion and Science are employed by some as
open mental frameworks, and deployed by others as (closed) ideologies.
A framework may be useful to describe and analyze what is (or what might be).
But like every abstraction tool, every framework has limits.
Any framework may be dangerous when pushed beyond its limits,
and transformed into a closed ideology, i.e., abused to
prescribe how reality must be,
how one should behave, or
what ought to be, i.e., ethics.

Environmentalists and social-justice activists sometimes blame "economics"
for many problems. But it might be equally accurate to blame "democracy"
or "freedom":

"[ Regarding relations ] Between the strong and the weak,
it is freedom which oppresses and law which liberates."
&nbsp &nbsp &nbsp &nbsp - Jean Baptiste Lacordaire &nbsp
("Global Communication and International Relations" -- Howard Frederick, p. 244)

The extent to which "economics" is the cause of a problem
(vs. a symptom that channels deeper problems) depends on
exactly what we mean by "economics". Like many words,
"economics" is so overloaded with different meanings -- and with
hidden ideological connotations -- that using it carelessly,
i.e, mentally interpreting this term without cognitive safeguards,
has become hazardous to our conceptual health.
Yet to reject this term, to mentally boycott all meanings of "economics",
is to discard some extremely valuable tools and insights for dealing
with (and for changing) the real world.
The chemical reactions that power photosynthesis in plants,
and the metabolism of our own bodies, cannot be understood
without some concepts of "Energy Economics". Indeed,
Resource Economists, Anthropologists, and Political Scientists
have learned much about early human societies and ways of life
by examining "calorie budgets". At our current juncture, to assess
alternative future energy technology infrastructures -- the
feasibility and efficiency of their resource-inputs and greenhouse-emission-outputs,
and the equitable distribution of their costs, risks, and benefits --
is fundamentally a problem for "economic" techniques.

Political Economy of Markets

Every economy exists in some context.
It must be defined and governed by some Laws -- either the laws of physics,
chemistry, and biology, or the laws of politics. There is no such thing
as a "Market Economy" that is "free" of politics.
Suppose there is no government, and no police. You and I meet in a
"State of Nature". I have a knife and some food. You have a sword,
but lack food. It is apparent that you desire my food, and you are willing
to fight for food, despite the fact that you risk
injury from my knife. On the other hand, if I refuse to give you any food, it seems
I face a greater risk of injury from your sword. So we bargain:
How much of my food am I willing to supply, to ensure my safety?
How little of my food will you demand, to balance your risk from hunger
with your risk from combat?
In this "State of Nature", our contract
is negotiated under the threat of violence by the parties to the contract.
In a Political State, typically the threat of violence is reserved for
the State, acting on our behalf (in return for taxes). The specific types of
contract terms and bargaining procedures allowed by the State,
and the State's methods of deterrence, monitoring, and enforcement,
determine the context for our market.
Thus, for conceptual clarity (and to reduce ideological obfuscation),
any Market Economy should more accurately be called a Political Economy.

It is important to understand the limits of markets in various political
economies -- e.g, concepts of market failure, robustness, unrealistic notions
of "perfect information", etc. Absolutely imperative is an understanding of market
externalities: &nbsp A market transaction often involves not only
the exchange of goods and services between buyer and seller, but also the
dumping of bads and dis-services onto innocent bystanders.
But the limits of markets would be a long digression, and others
have greater expertise in those areas. My purpose on this webpage is to explore
the economics of Info-war, and the roles of Info-wars in (re-)constituting
a political economy. The failure of law or culture to limit the exercise of power via
domestic information warfare means that politics has become meta-economics,
that is, politics largely has become an arena for contesting
which rules will determine a specific context of political economy.
And information warfare is a "cost-effective" weapon used by many interests in that arena.
(As an obvious example, a company's "Return On Investment" from lobbying
Congress for special subsidies or tax breaks often is several hundred percent.
Less apparent -- at least to
Americans -- are Info-war methods to shape international trade agreements and
intellectual property regimes, and where advantageous, to make the U.S. government
an international "enforcer" of those coerced "agreements".)
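The lobbying-ROI claim can be sketched as a toy calculation. The figures below are purely hypothetical, invented for illustration; the text claims only that such returns are often "several hundred percent".

```python
# Hypothetical illustration of lobbying "Return On Investment" (ROI).
# All figures below are invented for this example.

def roi_percent(gain: float, cost: float) -> float:
    """ROI as a percentage: (gain - cost) / cost * 100."""
    return (gain - cost) / cost * 100.0

lobbying_cost = 2_000_000      # hypothetical lobbying expenditure ($)
tax_break_value = 10_000_000   # hypothetical value of the resulting tax break ($)

print(f"ROI: {roi_percent(tax_break_value, lobbying_cost):.0f}%")  # prints "ROI: 400%"
```

Even with these made-up numbers, the asymmetry is the point: a modest, concentrated expenditure can purchase a diffuse public cost many times its size.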

"The twentieth century has been characterized by three
developments of great political importance: the growth of democracy,
the growth of corporate power, and the growth of corporate propaganda
as a means of protecting corporate power against democracy."

- "Taking the Risk Out of Democracy: Propaganda in the U.S. and Australia",
by Alex Carey, 1995, Univ. of New South Wales Press.
&nbsp (See this
informative and critical review
of the 1997 Univ. of Illinois Press edition, which carries a different subtitle:
"Corporate Propaganda Versus Freedom and Liberty".)

But propaganda is only one family of Info-war techniques.
What specific types of Info-war does the State allow in bargaining procedures?
For economic transactions, what laws govern information-hiding, bluffing,
misrepresentation, and other forms of deception? How are information-subsidies
and information-censorship regulated? What kinds of emotional manipulation techniques
are allowed? Bait and switch? Sexual imagery?
For political transactions, are the laws similar? What, if any, enforcement
mechanisms exist to hold a politician accountable for claims or promises made during
a campaign, or while in public office?

"If you do not have information to begin with, or know what new
information could be assembled, initial inferiority is bound to
be sharpened and perpetuated. This UNEQUAL BARGAINING POSITION
will affect all relations whether labeled aid, trade, investment,
transfer of technology, technical assistance, or any other."

Returning to notions of ideology, what interests benefit when large population
segments internalize a certain ideology of economic life? Did those interests
intentionally engage in Info-war to achieve that result? If not intentional,
are they nevertheless responsible for providing "economic literacy" programs
to relieve the public of their misconceptions? Or should any "economic literacy"
program sponsored by such interests be presumed self-serving propaganda?
If such interests cannot be trusted, nor can the government they influence,
then who will "de-program" the public of its cultish Economic Darwinism?
Although a staunch anti-communist, Pope John Paul II approved the
publication of "A Catholic Framework for Economic Life",
a clear attempt to counter the perceived-excesses of
the prevailing "free market" economic ideology.
For example, its first ethical principle is:

&nbsp &nbsp &nbsp &nbsp &nbsp &nbsp &nbsp
"The economy exists for the person, not the person for the economy."

For those offended by my Papal quote, I'm aware of the efforts by
Pope John Paul II and then-Cardinal Ratzinger to
purge the Roman Catholic church of liberation theology.
So to restore some ideological balance, I offer the following quote by
Dom Helder Camara, archbishop of Brazil and Nobel Peace Prize nominee:

&nbsp &nbsp &nbsp &nbsp &nbsp &nbsp &nbsp
"When I give food to the poor, they call me a saint.
When I ask why the poor have no food, they call me a communist."

That quote well illustrates how powerful elites encourage us to direct our
emotions and our efforts toward alleviating high-visibility individual symptoms,
but discourage us from addressing the deeper, hidden, structural causes.

Political Economy of Technological "Contracts"

Technology in America functions as both a descriptive framework and
a prescriptive ideology. For a creator of technology,
the prevailing ideology seems to be that,
"If I can build it, then I should"
(so long as it's economically viable for me).
And, "If I build it, they should come and buy it.
If they don't buy, then I just need a more persuasive (or manipulative)
marketing campaign to convince them my technology is good for them.
Under no circumstances should I respectfully ask users what they think is
good for them. That is, do not engage in user-centered design,
or participatory design. If they buy it, then it must have been good for them.
I enjoy the freedom to unleash any technology;
but I do not bear any responsibility for the consequences."
&nbsp For a consumer of technology, the corresponding ideology seems to be,
"If a technology is advertised as new, convenient, or powerful,
then certainly it will benefit my lifestyle.
If I don't learn to enjoy it immediately,
at least I can enjoy knowing that my lifestyle appears more 'cool'
to other people. Under no circumstances should I consider whether
this technology will help me live a more substantive life.
I enjoy the freedom to use any technology;
but I do not bear any responsibility to learn about
the externalized consequences I impose on other people."

We should ask, "Whose ends seem to justify the
technological means we are offered?"
In examining the desirability of those ends, we must become
more aware that almost every technology is a means that
changes our perceptions; hence it may change our ends.

If technology is the engine of the future, who is in the driver's seat?
When asked by pollsters, many people agree that,
"Technology has almost gotten out of control." But the polls never
probe further by asking, "Almost gotten out of whose control?"
The increasing concentration of information technology power in the hands
of unaccountable institutions (both government agencies and
private corporations) -- coupled with the profound info-tech illiteracy
of the public (largely due to our Specialization of Labor) --
threatens democratic control.
Democracy is dysfunctional &middot &middot &middot especially in America.
If we are to regain some control over our future, we need to begin
asking questions: What forces are steering information technology
in which directions, and for what (often short-sighted or unintended) purposes?

Every technology entails both costs and benefits. But because most
technologies are advertised and sold as commodity products, their
benefits need to be highly visible and concentrated in the hands of
the purchaser. On the debit side, technologies that succeed in the
marketplace tend to have costs that are relatively invisible.
These costs may be shifted into the future (e.g, cancer caused
by chemical exposures), and/or the burdens may be
dispersed or "externalized" onto other people.
For any technology, will those who enjoy its benefits
bear their fair share of its costs and risks? Did all those
impacted by a technology give their fully-informed consent
beforehand? Were they allowed to "vote" on alternative design,
implementation, and deployment/distribution aspects of that technology?

"In the technical realm, we repeatedly enter into a series of
social contracts, the terms of which are revealed only after the
signing."
&nbsp &nbsp &nbsp &nbsp &nbsp &nbsp &nbsp
&nbsp &nbsp &nbsp &nbsp &nbsp &nbsp &nbsp
-- Langdon Winner, The Whale and the Reactor, 1986

To require something akin to an "Environmental Impact Statement"
for every proposed new technology (or combination of old technologies)
seems an immense obstacle to "innovation". Yet the lack of sufficient
Precautionary Principles embedded in our technological life-cycles --
coupled with our illiteracy, unconcern, and failure to monitor
the ongoing impacts of our technologies post-deployment --
has produced a social trajectory that threatens the entire planet
with rapid, non-linear climate change.
What sort of checks and balances should we (who?) insert to govern
our society's co-evolution with technology? How can more of the public
participate in the "eternal vigilance", that is the price of preserving
human freedoms in an artificial world we (collectively) have constructed?

Regarding political campaigns and eternal vigilance,
it might be a good idea (in a democracy) for citizens
to ensure that votes are accurately counted.
But what happens when the mainstream media, elections officials,
and the manufacturers of electronic voting machines are clueless about
computer security facts, and instead rely on faith-based security?
My
Electronic Voting page documents my attempts to
educate technologically-illiterate Americans about the facts,
in a highly polarized political climate.

Power Relations and Information Warfare
was written to teach students in my Information Ethics classes
how information is actually used in America and
the globalized modern economy, and more importantly, how
information is abused.
It teaches some Info-war history (e.g, how the pre-Civil War U.S. controlled
its slaves by prohibiting anyone from teaching slaves to read or write),
and provides a gentle introduction to the Rules of Engagement
that govern the conduct of modern, "civilized" political and economic
Information Warfare.

Computer-assisted Crises
was written for the intelligent layperson, who should be more skeptical about
blindly worshiping any and every computer technology.
It's my chapter in the anthology "Invisible Crises".

The Introduction offers a conceptual framework that helps clarify our
perspectives on technology. Societies and individuals co-evolve
with technologies: &nbsp Our minds are
embedded
in various technological environments, and those technologies are embedded
in our minds. Thus, technologies influence our emotions and actions
in ways more complex and specific than the simplistic overgeneralizations
popularized by a prominent publicity-seeking media theorist of the 1960's.
Technology-assisted capabilities
become "habits of mind": compiled, compressed algorithms of
instrumental rationality. In any situation, we naturally tend
to reach for the available tools; thus, we learn to perceive situations in terms
of those tools. As one wag remarked, "When the only tool you have is a hammer,
every problem looks like a nail." In conjunction with any technology
(whether a physical or a mental tool), our bodies become "differently abled",
as do our mental abilities to perceive and conceptualize. Because ethics,
emotion, and cognition cannot be completely separated, the passive worship,
by Postmodernist elites, of any and every technology imposed on society
is self-indulgent and irresponsible.
In contrast to that obsequious deference to power, Phil Bereano highlighted
one important implication of the differential effects of technological
specialization of labor and consumption:
"Only the naive or the scurrilous believe that &nbsp 'information is power'.
&nbsp Power is power, and information is particularly useful
to those who are already powerful."
Although Neil Postman's writings provide clarity and insight on the effects of
media technologies, Postman seemed either complacent, or reluctant, to address
the causes of those effects -- i.e, techno-assisted power relations.
Some academics may denigrate Jerry Mander for being too practical,
but that may be because their lifelong specialization ("knowing more and more
about less and less") has destroyed their common sense.
I respect Jerry's intelligence, and I applaud his courage
in applying his ethics to change the real world.

Shaped by our Technology
is a case study in comparative social pathology.
It examines the historical record of how Japan
developed firearms technology, produced far more guns
than Europe, and deployed guns in battle for nearly 100 years.
Then, recognizing the negative impacts of guns on their society,
Japan was able to isolate that particular socially-disruptive technology,
and to "put that genie back in the bottle" for over 200 years
&middot &middot &middot
while Japanese civilization continued to advance via other technologies.
Japan's ability to reject guns in favor of swords
led to radically differing outcomes in 17th Century Japan vs. Germany.

This juxtaposition of historical threads -- contrasting the
historical fact of Japan's ascent into a "golden age"
with Europe's descent into cannibalism --
suggests the high stakes we face today as we consider various
technological choices in the context of global warming,
and the knowledge that technologically-civilized
(but ethically-primitive) societies such as America
already may bear responsibility for irreversible climate holocausts
in "newly-submerging democracies" such as Bangladesh.

Hopefully everyone starts to question whether techno-change
always equals techno-progress, and whether that equals social progress.
(This used to be called "Critical Thinking", before it became
a lost art entirely ;-)
(The
Redefining Progress
org seems to focus narrowly on ecological economics,
rather than on remedying the core ethical problem that
Americans confuse
Quality of Life
with the "freedom" to consume a vast Quantity of Stuff.)
This essay also raises issues about the extent to which
technologies (and their limits) can be "democratically" chosen
via cultural maturity/discipline, vs. imposed by power relations --
where some social or economic groups control others by outright
coercion, or by more subtle manipulation of information or emotions.

(During this period, the "Luddites" whose interests
were threatened by a new technology were the warrior-castes.
As the essay describes, these Warrior-Luddites won in Japan,
but lost in Europe. That's partly because Japan's samurai class
had a greater power advantage over Japan's peasants than
Europe's knights had over their serfs.
Although I don't cover them, there are interesting parallels with the
U.S. in the early 1900s, in which various power relations were exercised
to destroy urban mass-transit systems, narrow-gauge railways, and
a thriving solar energy industry, and replace them with fossil-fueled
technologies. Invariably, history is written by the winners.)

My early peer-reviewed research on Computer Security was fairly esoteric.
(In order to get published in a "scientific" journal, that's pretty much
a requirement.) But prompted by the dictatorial excesses of the
Bush-Cheney regime (which, not surprisingly, were welcomed by most
Americans, because they defined "the good life" as
the Freedom to Shop Without Fear),
I tried to open up yet another new disciplinary field --
Sanitization (also known as Information Disclosure Control) --
which would allow policymakers to detect whether
information truly necessary for "Homeland Security"
infringed on Civil Liberties or Privacy,
and if so, to make intelligent decisions
about balancing Security with Civil Liberties and Privacy.
(I'm still waiting for Dick Cheney's phone call :-)

Sanitization Models and their Limitations
summarizes my several years of wrestling with these problems.
Although it's a peer-reviewed publication, many parts of it
can be understood by the intelligent layperson.
Here is the Abstract for the paper:

Abstract:
This work explores issues of computational disclosure control.
We examine assumptions in the foundations of traditional problem statements
and abstract models. We offer a comprehensive framework, based on
the notion of an inference game, that unifies various inference problems
by parameterizing their problem spaces. This work raises questions
regarding the significance of intractability results.
We analyze common structural aspects of inference problems via case studies;
these emphasize why explicit policies are needed to specify
all social context and ethical values relevant to a problem instance.

Because the Answers that Science gets depend on the specific Questions that Science asks,
and too many scientists tend to ask only the questions we are paid to ask,
Science -- in practice -- largely has been reduced to
a sub-field of Political Economy.
The following quote (from Paul Goodman's "New Reformation", 1970)
clarifies the moral status of "applied" Technology:

&nbsp &nbsp &nbsp &nbsp &nbsp &nbsp &nbsp
"Whether or not it draws on new scientific research,
technology is a branch of moral philosophy, not of science."

But that quote fails to recognize that "basic" Science also
may be mired in a morally corrupt context.

One blatant situation illustrating "How to Lie with Science" involves studies of
prospective drugs, funded by large pharmaceutical companies. Often, experiments are
carefully designed to ask only very narrow questions that are likely to yield "good"
answers (i.e, good for Big Pharma's profits, but not good for the health of people
who take those drugs). A far more blatant example of
computer-assisted intellectual dishonesty
is covered in my "Invisible Crises" chapter, in a section on "Computer Simulations".

Thus, it is legitimate to ask to what extent climate scientists
have biased their results, either intentionally or unconsciously. Are these
scientists engaging in unwarranted alarmism, merely to ensure they get more
research funding? Or, has humanity indeed created the greatest crisis in its existence?
Unfortunately, among the results of empirical studies and computer models indicating
that climate chaos and disruption are a clear and present danger,
I'm aware of only 2 studies (not yet corrected, as of 3/08) whose assumptions
hid unwarranted pessimistic bias. In contrast, there is a significant literature
on climate change (primarily in public policy and economics fields) whose results
border on the Pollyanna-ish. Unduly optimistic assumptions about "technological fixes"
abound, as do economic assumptions that justify maintaining the status quo
(or justifying only those changes which benefit those who already benefit
disproportionately).

Given the likelihood of
catastrophic consequences, in the early 1990's,
I considered the possibility that Info-war climate-change denial campaigns
might constitute a crime against humanity. Recently, this issue got a much
higher profile, when Dr. James Hansen, director of NASA's Goddard Institute
of Space Sciences, went on record
calling for prosecution of fossil fuel company CEOs for their climate-change
denial/disinformation campaigns. (Apparently Rush Limbaugh responded.)
In his 6/23/08
written testimony to Congress, Dr. Hansen outlined the case (the italics are mine):

"CEOs of fossil energy companies know what they are doing and are aware of
long-term consequences of continued business as usual. In my opinion, these CEOs
should be tried for high crimes against humanity and nature."

Below, I examine several essential questions related to prosecuting
Climate Crimes Against Humanity -- including alleged disinformation and propaganda campaigns
about climate change risks. Before doing so, I digress briefly,
to introduce an important family of conceptual tools.

One extremely useful method of practical ethics is to reason by analogies.
In doing so, we can gain insight by borrowing the tools of practical math and
engineering known as "perturbation analysis" and "successive approximation".
Analogy: Russian Roulette -- shooting CO2 bullets vertically into the air over a city.
How many probabilistic bullets? What is the risk of a falling bullet causing harm?
Perturb the situation -- what if the shooter knows the gun is pointed at
a specific target? E.g, low-lying regions such as Bangladesh and Burma.
Analogy: Driving on a highway, you see what appears to be a helpless person
lying in your lane, far ahead. What is your responsibility to apply the brakes,
and when? How does that responsibility increase, as you approach the
object, and your vision becomes clearer, while the risk increases?
Perturb the situation -- what if you are one of many passengers in a bus,
and the bus driver (i.e, your nation's political "leaders")
seems unaware of the object in the road ahead, or unconcerned?
Analogy: You are a passenger on a large cruise ship, the "Exxon Titanic".
Sonar readings indicate a large iceberg dead ahead, but only the tip
of the iceberg has been confirmed visually. It takes a long time
to turn the Titanic's heading more than 1 degree from its present course.
At what point does it become a moral imperative, for the passengers
to storm the cockpit, and force the drunken Captain Hazelwood
(i.e, those with political and economic power) to radically change course?
Analogy: A climate-scientist shouts "Fire" in a crowded theatre.
Perhaps there was a brief spark, but certainly there is no fire.
The crowd stampedes, killing many. What responsibility does the scientist
bear for raising the alarm? Did the scientist's perception of a spark
cross an evidentiary threshold, and constitute probable cause to justify
raising an alarm?
Analogy: A climate-scientist shouts "Fire" in a crowded theatre.
The theatre manager shouts him down, telling the crowd,
"False alarm folks; sit back and enjoy the entertainment.
Buy more popcorn and cigarettes! Feel free to smoke in the theatre!
Smoke all you want ... it's fire-proof."
But the climate-scientist's warning was correct: A fire destroys most of the theatre.
Those in the first-class seats are able to escape via a custom fire-door
in their section, but everyone else in the theatre is either killed or injured.
What responsibility does the theatre manager bear for downplaying the alarm?
What responsibility do those in the first-class seats bear for continuing to smoke,
and for choosing entertainment, yet neglecting to investigate the climate-scientist's warning?
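The first analogy above, probabilistic CO2 "bullets" fired over a city, can be made quantitative with a small Monte Carlo sketch. Every parameter here (number of bullets, per-bullet harm probability) is a hypothetical assumption, chosen only to demonstrate the perturbation-analysis style of reasoning: vary the parameters, and watch how the aggregate risk responds.

```python
import random

def estimate_harm_probability(n_bullets: int, p_harm_per_bullet: float,
                              trials: int = 10_000, seed: int = 0) -> float:
    """Monte Carlo estimate of P(at least one of n_bullets causes harm)."""
    rng = random.Random(seed)
    harmed = sum(
        1 for _ in range(trials)
        if any(rng.random() < p_harm_per_bullet for _ in range(n_bullets))
    )
    return harmed / trials

# Closed-form answer for comparison: 1 - (1 - p)^n
p_exact = 1 - (1 - 0.001) ** 500   # hypothetical: 500 bullets, 0.1% harm each
p_mc = estimate_harm_probability(n_bullets=500, p_harm_per_bullet=0.001)
print(f"exact = {p_exact:.3f}, simulated = {p_mc:.3f}")
```

Even with a tiny per-bullet risk, the aggregate probability of harm grows toward certainty as the number of "bullets" rises; "perturbing" the scenario (aiming at a known low-lying target) concentrates that risk on specific victims.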
TOPICS (under construction):
Propaganda as a Crime Against Humanity --
-- The death penalty for Julius Streicher at Nuremberg.
-- Similar cases in more recent genocides in Bosnia and Rwanda.
-- One issue: What is legitimate scientific dissent from prevailing consensus,
vs. what constitutes responsibility and complicity in an intentional
corporate propaganda campaign to deny the risks of climate disruption?
Deterrent value of prosecuting climate-change disinformation campaigns as a Crime Against Humanity:
"Collapse" -- by Jared Diamond --
reviews the history of societies that have become extinct. Often, the elites
of those societies opposed adaptive changes that might have saved their societies,
because those changes threatened the elites' short-term power and privileges.
So argument #1 for deterrence is that prosecution threatens elites even more.
A second argument for deterrent value is that some current elites may feel they
can make "a separate peace" with climate change, via a privileged and private
"adaptation" -- fortified compounds in diverse locations, private air travel
among those sanctuaries, a private army, stockpiled supplies, etc.
(The military dictatorship in Burma demonstrates that a social order -- literally
sacrificial in its brutality -- unfortunately can remain stable over several generations.)
Cases of "privileged adaptation" are ubiquitous in human societies,
e.g, in the U.S., the "forting-up" of the wealthy into gated "communities".
"The Shock Doctrine: The Rise of Disaster Capitalism" -- by Naomi Klein
Criminal justice ideal: the "punishment should fit the crime", in three ways.
(1) Deterrence - the "punishment" is publicized; made visible,
to help prevent similar future crimes.
(2) Retribution - satisfies victims' desire for revenge,
thereby helping alleviate socially-dangerous pressures (vigilante or mob violence).
(3) Rehabilitation - experiencing the "punishment" hopefully
prepares the perpetrator to productively re-enter society.
Varying mixtures of items (2) and (3) might be considered to comprise Reconciliation.
The most prominent case of Reconciliation as an explicit goal
is South Africa's Truth and Reconciliation Commission (TRC).
As an alternative to World War II -style Nuremberg Trials,
South Africa's TRC approach was relatively immune to accusations of victor's justice.
The most widely felt (and largely valid) criticism of South Africa's TRC approach is that
justice is a prerequisite for reconciliation; reconciliation is not an alternative to justice.
Seeking Justice: What is appropriate punishment for Climate Crimes Against Humanity?
Is that question even relevant? Using a "presumed innocent until proven guilty"
standard, we would defer any punishment until Earth's climate had collapsed,
so there could be no deterrent value. But, even given the risks of climate change,
how ethical is it to "presume guilt", and impose punishment before
major disruption of climate stability has been proven?
Here's one proposal to address such issues:
The "punishment" would be administered pre-emptively, yet in direct proportion
to the (simultaneously unfolding) severity of the climate change crime.
If (somehow) humanity dodges a "climate bullet", so does the perpetrator.
To achieve the three goals of criminal justice, the proposed "punishment" is
not a mandatory death penalty, but what the perpetrator might experience as worse --
forced to experience the risks of Climate Change just like ordinary people.
Basic idea: Incarceration in a climate-controlled environment,
whose life-support conditions -- air, food, and water -- would be controlled
to match those of (population-weighted) randomly-chosen locations around the world.
For example, if a heat wave strikes Chicago and kills many elderly people
in their stifling apartments, the perpetrator might face similar conditions,
and a roughly similar excess-risk of death. If a drought or famine menaces Nairobi,
the perpetrator might face excess-death risks from dehydration or starvation.
(Admittedly, the impacts of rising sea levels, forced migration,
violence against climate refugees, and climate-change-induced disease outbreaks
would be difficult to simulate in the perpetrator's controlled environment.
Moreover, this punishment-risk-regime does not address the "Crimes Against Nature"
aspects -- e.g., excess suffering of individual animals, and extinctions of species.)
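The "population-weighted, randomly-chosen locations" at the heart of this proposal amount to a simple weighted random draw. Here is a minimal sketch; the location names and population figures below are illustrative assumptions of mine, not data from the proposal:

```python
import random

# Hypothetical populations (in millions) for a few illustrative locations.
locations = {
    "Chicago": 2.7,
    "Nairobi": 4.4,
    "Dhaka": 9.0,
    "Sundarbans (Indian side)": 4.0,
}

def draw_location(rng=random):
    """Pick one location, weighted by population, so that the perpetrator's
    simulated life-support conditions track the risks an average human
    actually faces, rather than the risks of any one privileged place."""
    names = list(locations)
    weights = [locations[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]
```

Under this scheme, a location with twice the population is twice as likely to set the perpetrator's conditions on a given day, which is what makes the resulting risk exposure "proportionate".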
In addition, I suggest that administering this punishment should be highly publicized.
Obviously this would enhance its deterrent value, and alleviate revenge-driven
mob violence pressures. But also, it could significantly promote reconciliation ...
Not merely the (insignificant) reconciliation between perpetrator and society,
but the far more productive and significant reconciliation between all those who benefited
disproportionately from climate-disrupting technologies, and all those who suffered
disproportionately from others using those technologies.
One of the (many) reasons for Americans' failure to perceive and recognize climate change
is that the impacts are too easily ascribed to "background noise" -- "normal" weather
variations. Another reason is that the impacts are too easily ignored -- they mostly hit
the invisible poor. (America's TV-watching public was re-united with its invisible poor
via coverage of New Orleans after Hurricane Katrina.)
To administer an appropriately-randomized (hence proportionate) punishment to a
climate-change perpetrator requires that impacts of climate change on the world's poor
are accurately monitored. I suggest that making the sufferings of the "Have-Nots"
more visible to the economic "Haves" might do wonders for re-uniting all of humanity,
and reminding us that, due to our inter-dependencies, we share a common future.
Many Americans are enamored of TV "reality shows"; here's one example of a non-televised
"Climate Change Reality Show" that, so far, only certain vulnerable "Have-Nots" can see:
Time Runs Out for Islanders on Global Warming's Front Line (3/30/08).
This article states:

According to ... the director of the School of Oceanography Studies at Kolkata's
Jadavpur University, the people of the Sundarbans are the first global-warming refugees.
"These people are victims of global warming. ... The Sundarbans
[a low-lying island-delta region] and the four million people
who inhabit the Indian side are dreadfully vulnerable."

Hannah Arendt and the banality of evil:
Comparing today's Americans with the "Good Germans"
who enabled Hitler to exterminate 6 million Jews and 5 million non-Jews:
What did they know, and when should they have known it?
Orders of magnitude: the Nazi Holocaust may be mere chump change --
lint in the corners of the petty cash drawer of human life --
compared to the annual deaths that may ensue
from an ongoing Climate Holocaust.
"The Economics of Genocide": Due to the projected drought and starvation
in the third world, this was the label chosen by the Stern report for a
global average rise in temperatures greater than 3 degrees centigrade.
Irish potato famine quote -- Can economic responsibility be so dispersed,
that nobody is responsible?
Martin Khor's article on economic value of a human life in first world vs. third world.
Political Economy of Complicity and Self-Indulgent Narcissism
Climate indulgences -- the ethics of purchasing alleged carbon offsets.
Tom Athanasiou and Eco-Equity.
Migration, and responsibility for re-settlement.
As stated succinctly by Atiq Rahman, Director of the
Bangladeshi Institute for Advanced Studies,
"We will come marching over your doorsteps with our wet feet."
["WORLD CLIMATE BODY COLONISED BY OECD ECONOMISTS" by Aubrey Meyer,
in "Third World Resurgence" No. 49, 1994]
Case Study -- Bangladesh.
"Adaptation" is a wonderful thing &middot &middot &middot when
somebody else is forced to adapt, while I continue my own
way of life unchanged. Given India's 1500-mile razor-wire-topped
border fence with Bangladesh, millions of people
in Bangladesh must "adapt" either by becoming forced-labor slaves
of the Burmese military junta, by growing gills
(the "Let them breathe water" option), or by dying.
The ethics of advocating or pursuing a "technological fix" via geo-engineering:
-- atmospheric sulfate injection.
-- genetic engineering of Pseudomonas or other bacteria to serve as
more effective condensation nuclei, or for other atmospheric modifications.
-- artificial life to sequester and/or recycle greenhouse gases.
-- Who decides? How is responsibility for unintended consequences distributed?
Also see the above historical sketch regarding the
enormous consequences, for earlier civilizations,
of failing to reject a disruptive technology.

I do what, in America, is called "Public Interest Work".
(But since nobody will pay you for trying to heal the world,
maybe we shouldn't really call it "work" :-)
This work is motivated by the radical notion that the "Free Market"
(an ideological term that incorporates a contradiction; see: &nbsp
Political Economy of Markets)
might not automatically solve all our problems,
and that the Prisoners' Dilemma is not the only situation
in which individual pursuit of greed may produce
a society that is not worth living in.
My work necessarily involves both Science, and Political Economy.
I realize that working
For the Common Good
is an old-fashioned idea, too "conservative" for
modern American culture. Thus, I've been forced to accept
the diagnosis of a society rendered insane by its prevailing
technologies and ideologies.

Some background context: In 1989, I announced that Computer Science
was too lacking in ethics, and quit the PhD program. Faculty members
were variously chagrined and insulted, but eventually we developed a
new course, Information Ethics, that I taught as long as I was able.
(The past 10 years, I've been slowly degenerating with terminal cancer.)
Thanks to the efforts of many others, similar courses are now required
for any 4-year Computer Science program to be accredited in the US.
It's too little, too late (both in terms of students' exposure during
their junior or senior year, and because this technological horse
long ago escaped the barn with no ethical halter, and has been
trampling on other social values). But a few otherwise dweebish
young techno-architects will become slightly more sensitized to
socioeconomic contexts, and having at least 1 lecturer in every
Computer Science dept who's paid to think about ethical implications
and alternatives is a significant ongoing force for institutional change.
(This, IMHO, was a more useful result than most PhDs achieve in a lifetime.)

As I noted earlier, my usage of "Information Warfare"
has probably expanded beyond your implicit definition of that term.
So let me state, explicitly, that I apply Info-war perspectives
not only to overt conflicts, in which Friend and Foe clearly identify
their opposing roles, but also -- and especially -- to more complex,
dynamic, and ambiguous situations that have elements of
conflict, competition, collaboration, and cooperation.
Moreover, some participants may not recognize these complexities,
or may not know that some participants have different assumptions
about the roles and "rules".
Also very important, is that a participant's behavior may be
interpreted as playing a role that this participant did
not intend to play, does not want to play,
and does not realize he is playing.

Let me address what might seem a significant criticism: I have failed to define
"Information Warfare" rigorously, even in the abstract, much less to provide an
operational definition that would allow us to measure and quantify real-world
prevalence and specific incidents.

My response: We are so thoroughly immersed in Info-war that it has become largely
invisible; a "normal" constituent of individual thought, interpersonal communication,
and collective governance/decisionmaking. An abstract Info-war definition may require
an agreed-upon semantic background framework that is far more conceptually intact
than what exists today in America. (We are embedded in our own culture's
conceptual toolbox (cultural software), and our tool-use
changes those tools, thereby shifting the "semantic ground" underneath our feet.)
Regarding a useful operational definition, compare Info-war with obscenity.
Both practices treat an individual as an object of manipulation, rather than as
a subject worthy of respect. But obscenity has this somewhat-plausible definition:
&nbsp "I know it when I see it." In contrast, the problem with a similar
operational definition for Info-war is that, far too often,
"We do not know it when we see it".

One Operational Approach

Useful lessons on quantifying
Info-war in America are learned by studying ongoing legal attempts to define
"political campaigning", in order to regulate it. Various operational definitions
over time have tended to focus on a "Follow the Money" approach. These definitions
were broadened to recognize certain "in-kind" contributions, i.e., supportive products,
services, and infrastructure. But the limitations of this approach are apparent when
they begin to encroach on boundaries we consider "free speech" (as opposed to
"paid speech"). A decade ago, there were "Section 501-C-3" entities
(such designations refer to the section of the IRS code under which they operate)
that were allowed to pay for "public education programs" pertaining to "issues",
but were constrained in their freedom to "lobby" lawmakers directly, and were
prohibited from supporting any specific political candidate or party.
Today, we have "Section 527" entities whose candidate-specific Info-war campaigns
essentially are constrained solely by the requirement that they be "independent"
of that candidate's "official" campaign.

But tracking "Section 527" money flows quantifies only a small region of Info-war
near certain prominent Info-warlords. What about the mainstream media "echo chamber",
where Info-war weapons often are reproduced and amplified by "opinion leaders"?
What about think-tank op-ed columnists, individual bloggers, talk-radio hosts --
and now Fox-TV "News hosts" -- who both initiate Info-war attacks, and amplify
attacks that originated from other sources?
What about the lower-level "dittoheads", and the non-ideologically-captive individuals
who, perhaps inadvertently, transmit an Info-war "meme" (and increment its
"time-to-live" persistence in the collective cultural consciousness)?
To some extent, all of us are both victims -- and instruments -- of Info-war.
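The networking metaphor in the preceding parenthetical can be made concrete. In IP networking, each router hop decrements a packet's time-to-live counter and discards the packet at zero; the parenthetical describes the inverse dynamic, in which each retransmission of a meme *extends* its persistence. A toy sketch of that inversion (the class and method names are mine, not any standard model):

```python
class Meme:
    """Toy model of Info-war meme persistence. Unlike an IP packet, whose
    TTL is decremented at each hop until the packet is dropped, a meme's
    effective time-to-live grows each time someone repeats it."""

    def __init__(self, text, ttl=1):
        self.text = text
        self.ttl = ttl  # persistence in the collective cultural consciousness

    def retransmit(self):
        # Every transmission -- even an inadvertent one by a
        # non-ideologically-captive individual -- increments persistence.
        self.ttl += 1
        return self.ttl
```

The point of the sketch is only the sign of the update: networks are engineered so that unattended messages die out, whereas cultural transmission rewards whatever keeps getting repeated.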

Bottom-up vs. Top-down Approaches

This is a work in progress. To date, I have not even attempted to produce a definition of Info-war that is both useful in practice, and theoretically-encompassing.
My approach has been bottom-up. (In contrast, a top-down approach immediately
suffers from the conceptual biases of its starting point -- the "ideological curves"
have already been plotted, and the tendency then becomes to jam every real-world
datapoint into that framework, whether it fits or not.) But one caveat regarding
my bottom-up approach -- it suffers from a "grab bag" tendency to cite the Info-war
aspects of numerous real-world situations, without a rigorous criterion for deciding
what to include in the grab-bag. As a result, I may have included one case study
of dubious relevance, primarily because I knew about it, and excluded another
extremely relevant case study due to my own ignorance.

Since that may skew a reader's interpretation of the pattern I intended to convey,
let me risk giving some top-down interpretations as well. My project might be
viewed as an attempt to apply Kantian ethics and epistemology to the
"social construction of realities".
In that light, my contribution emphasizes that finer-grained
social and cognitive structures arise due to the specialization of labor
(and also the specialization of technological consumption),
because being
embedded
in different environments shapes our conceptual frameworks (and hence our consciousness)
in different ways. Or to expand on Thoreau, "Men tend to become different tools,
as they use -- and are shaped by -- different technologies."
(Via the bottom-up approach, my contribution also emphasizes the emergent,
higher-level dysfunctional System-Dynamics
caused by lower-level tactical Info-war processes. That is,
this bottom-up approach has now situated Info-war in a top-down context of
Evolutionary Escalation,
to examine the resulting dysfunctional evolution of our
Cultural Software.)

I've explicitly asserted that human thought is shaped by our cultural software,
and that (knowingly or unknowingly) we may be influenced by various ideologies.
Thus, it would be ignorant and arrogant for me to claim that I, myself, have no such
cognitive-emotional biases and vulnerabilities. Nevertheless, I'm virtually certain
you'll find that claim appearing implicitly throughout my writings!
(It's "natural", "only human", and to some extent inevitable. It's how we perceive and
describe the world -- it's much easier for me to see a mote or various-colored glasses on
your eyeball, than on my own eye. When I hear you speak, it's obvious that you
speak with a distinctive accent, whereas my speech is uninflected and "normal".)

I hope and intend for my (current) writings to be at least "acceptable" from a variety
of different ideological viewpoints. Yet I also write somewhat in haste, so I tend to
communicate via the conceptual tools and frameworks most readily accessible to me.
That is to say, I try to write with a respectful attitude toward several
different potential audiences, yet at some points I will surely offend and
alienate various readers, and for this I apologize in advance.

Journalists struggle with similar issues, and attempt to write "objectively".
In practice, journalists' operational definitions of "objective journalism"
are a product of their respective
"embeddings" --
in a profession, a company, a topic (e.g., a financial, military,
or political "beat"), and an incentive-based environment.
(Those incentives may be direct economic ones, or via the more indirect and subtle
currency of access to particular sources of information.)
Rather than strive for the impossible ideal of writing "objectively", journalists
find it far easier to pursue the goal of "balance". This proceeds,
in a simplistic and efficient manner, by identifying the alleged
"Two sides to every story", and giving "appropriately equal" weight or validity
to each of the 2 sides. (In practice, the weights are proportionate to
the influences the journalist experiences due to their embedding.)
This is why, in America, "balanced journalism" tends to exhibit an Ideology
of "extremist centrism" -- by "pissing off both sides equally", the
journalist has "proved their objectivity" ... or at least provided a
justification that will "cover their a__". &nbsp
Besides the extreme central-tendency bias and the
bias caused by embedding, this approach causes the
reductionist two-dimensional spectrum bias -- every issue is
forced to conform to a contingent (usually arbitrary) two-dimensional framework
determined by the current cultural and historical situation.
(Whereas intellectual honesty requires us to seek
at least three sides to every story.)
NOTE: The reductionist two-dimensional spectrum bias also is
easily manipulable by simple Info-war tactics to
shift its center of gravity, or "balance point" --
e.g., if requiring welfare recipients to work sounds "extreme" to you, I can make it
seem positively "centrist" by demanding that welfare recipients
perform forced labor &middot &middot &middot in prison.

Having thus rebuked journalists, I've decided to adopt a similar cop-out :-) &nbsp
In these (recent) writings, I often reach for explanatory frameworks of
Evolution, and then occasionally "balance" that with
some perspectives from Religion. Most often,
my religious sources draw on orthodox Catholic thought.
I do so partly because those sources are easily web-accessible, and
partly because they provide a rich and continuous intellectual tradition,
in which ethics is an important strand.
&nbsp (I'm not surprised that
Alasdair MacIntyre
wound up at Notre Dame, and given my own turn to animal psychology as a
conceptual antidote to cultural software's anthropomorphic biases,
I suspect, when I read his
Dependent Rational Animals: Why Human Beings Need the Virtues,
I will find many areas of convergent thought.)

Regarding my own ideological biases, I'm the product of many embeddings,
and often, my adaptations have been characterized by outright opposition,
or by guarded resistance, to what I experienced -- both ethically and emotionally --
as "cognitive coercion". (By age 7, I was acutely aware of, and hurt by,
the tool of language changing how I thought, and what I thought about.
In hindsight, from my present theory-laden interpretive framework,
I'd say I felt that the practice of using and being used by language
was interfering with my developmental autonomy.)
Also, as a young child viewing the pervasive hypocrisy of adult society, and
reading about the Crusades, the Inquisition, and the "New World conquest"
(I didn't learn the word, "genocide" until much later), I became quite hostile
to "Christianity". Only in the 1990's, as a result of conversations with
George&nbsp
Tinker
(a Native American theologian with the Iliff School of Theology),
did I develop a more nuanced and realistic attitude toward Christianity.
Tinker originally had intended to title one of his books,
"Columbus and Coyote: Case Studies in the
Confusion of Gospel and Culture".
Instead, its title eventually wound up being,
Missionary Conquest: The Gospel and Native American Cultural Genocide.
Although the specific missionaries in Tinker's case studies
had the best of intentions toward Native Americans,
by conflating their religious ideology with their cultural ideology,
the consequences of their actions were devastating. I suggest that,
although those missionaries "conquered the New World",
they were, themselves, victims -- conquered by
certain damaging aspects of their respective cultural ideologies.
To some extent, all of us are both victims -- and instruments -- of Info-war.

From "a typical American's viewpoint", my politics appear
as a suspiciously eclectic mix of "liberal" and "conservative",
and my theology verges on "atheism". I neither "believe in God"
nor "disbelieve in God". Nor do I accept the imposition of that two-dimensional
framework, then "balance" comfortably at its agnostic center,
from whence I "question the existence of God".
As with scientific experiments and opinion polls, the theological answers you get
depend on exactly which questions you ask. Here's what I do believe: &nbsp
I believe those are the wrong questions to ask.
I believe language is a lousy tool with which to ask theological questions.
And contemplating my impending death from multiple myeloma,
I believe what happens to the world after I die,
is far more important than what happens to the "I" after I die.
(I would resist being labelled a Christian more strongly than I'd
resist being labelled a cognitive Taoist or Buddhist.)

I'm disappointed that so many Americans find Evolution and Religion incompatible &middot
&middot &middot and that they are certain their conclusions are infallible.
To those militant atheists who claim to have scientifically proven the non-existence
of God, I prescribe a dose of intellectual honesty to alleviate your delusion.
By definition, it is impossible for Science to prove that God does not exist.
To religious people who feel threatened by various ideas of evolution,
it seems to me that Ye are of Little Faith. If God exists, and is all powerful,
then surely God can promulgate physical laws, besides religious laws.
Or, in your arrogance, do you command God to be a micro-manager? If you throw a ball,
do you require God to follow it like a dog, guiding its trajectory every micro-second,
so that it seems to follow a (nonexistent) Physical Law of Gravity,
which you have prohibited God from promulgating?
Does God have the power to create the entire universe on the date calculated
by Bishop Ussher, yet lack the power to insert an apparently evolutionary history
into that creation, and lack the power to "start the evolutionary process"
at the moment of creation?

I (and many others) dispute the theological infallibility of Popes,
but few would accuse the Vatican of "Godless atheism".
Regarding the alleged incompatibility between evolution and creation,
the current (2008) Pope, Benedict XVI, stated,
"They are presented as alternatives that exclude each other ...
This clash is an absurdity." The previous Pope, John Paul II,
reasserted the longstanding Catholic interpretation that there is
"no [inherent] opposition between evolution and the doctrine of the faith."
Cardinal Paul Poupard, the head of the Pontifical Council for Culture,
correctly identified the source of the clash -- disagreement over which
conceptual framework
should be used when interpreting the Bible's account of creation:
"Fundamentalists want to give a scientific meaning to words that had no scientific aim." Clearly, this clash offers another case study in the
Confusion of Gospel and Culture. Regarding
interpretive frameworks I suggest that Catholics, fundamentalists,
and atheists all might benefit from reading what the
Catholic Encyclopedia
has to say about
Hermeneutics.
I dispute some of the "answers" it gives, but it "asks many of the right questions"
about choosing an adequate frame of reference to interpret
the meaning of texts -- questions which most Americans don't even think to ask.
Written in 1910, that article on hermeneutics offers timeless insights on
human cognitive fallibilities that modern Americans -- in our arrogant certainty
that we always infallibly apply "the correct" conceptual tools --
seem never to have learned: &nbsp
"the controversies between Jews and Christians, between Christians and Rationalists, between Catholics and Protestants, are in the end brought back to hermeneutic questions."

By definition, it is impossible for Science to prove that God does not exist.
Likewise, it is impossible for Science to prove that Mind and Brain are identical.
(Although my writings occasionally may fall into this
reductionist equivalence as a convenient conceptual lapse.)

NOTES and REFERENCES
(The &nbsp _^_ &nbsp symbols
are pointers back into the text where that NOTE was mentioned.)

_^_
Topic -- Ideological Operating Systems;
cognitive library routines of dynamic, shared Cultural Software;
collective social computation: &nbsp
The idea of introducing "socially-approved programming"
into an individual's mind is not new. In America, at various times,
this has been an explicit goal of schooling, sports, prisons,
and mass media. Inculcating these "habits of mind" was
considered "good" -- both for the individual, and for society.
In contrast, the Korean War "brainwashing" of captured Americans
was considered "bad" for those individuals, even if it was "good"
for the society of communist China and North Korea.
Psychologists and sociologists (and then social psychologists) described
how individuals "internalize" social strictures and structures.
In the 1960's,
the Free Speech Movement was partly a rebellion against what students
viewed as attempts by their society and their parents to "program"
them into becoming mass-produced "products",
gears in the "machine"
of the American military-industrial complex.
Various religious "cults" are accused of brainwashing vulnerable people
into becoming members, and attempts by family members to "rescue"
such converts from their "false consciousness" have produced a new profession
of those who try to "de-program" those converts.

"Brain and Culture: Neurobiology, Ideology, and Social Change" -- by Bruce E. Wexler
(2006; MIT Press)
seems superb. (As of 7/08, I've just begun reading it.)
Two
reviews
have criticized Wexler's notion that ideology is imprinted
onto young brains when they are plastic; and thereafter, the adult brain
resists aspects of reality that conflict with its internal ideological model.
But provisionally, I think Wexler's thesis explains much: Brain plasticity
probably varies widely among adults, and some people (committed to
"lifelong learning") arguably have structured their minds so that
their cognitive architectures are modular and layered
(in contrast to a "monolithic kernel"), i.e., so
only minimal, deep structures are rigid, and other portions of their brains
remain flexible to encompass new information. In effect, some adults may
achieve ideological evolution without necessitating
neurobiological revolution. &nbsp Major neurobiological
restructurings surely involve "cognitive costs" that cause
temporary reductions in functionality. Evolutionarily, that would be
irrelevant for an (already dependent) infant, but could entail significant
risk for an independent adult.

Neuroanatomy aside, the belief that young minds are more malleable than
adult minds is widespread; hence many groups in many societies have
practiced a "Get 'em while they're young" approach. School textbooks
are a prime example, and school curricula are a site of social conflict.
Examples include Turkish schooling about the ("alleged") Armenian genocide,
and Japanese schooling regarding Japan's ("alleged") militarism and
atrocities in World War II. America has long struggled with conflicts
about teaching evolution vs. creationism. More recently, American school
textbooks and curricula have been targeted in a different Info-War campaign
about "environmental education" -- featuring Exxon and other energy special interests
as prominent funders.

The notion of human minds as hosts for cultural software
raises issues of cultures morphing from symbiotes into parasites,
[cultural pathology][Info-war Escalation]
co-evolution, group-selection,
&middot &middot &middot not to mention political theory, psychology,
and constraints on the ability of an individual to transcend the
prevailing cultural ideas. The following books might address some of
those questions. Caveat: I have not even skimmed them.

_^_
I'm assigning the term "Embed" my own technical definition:
In computer architectures, an "embedded" system is one designed specifically
to "perceive" and "respond" to a particular physical environment. By analogy,
a human "embedded" in a particular social environment for a sufficiently
long time, must change/develop due to that long-term stimulus exposure.
As with any neural network, the human individual may adapt in many ways:
One adaptation is to ignore the new stimuli. (But when those stimuli
are removed, that person will preferentially ignore -- via "habituation" --
similar stimuli in their previous environment.) Another adaptation is the
"Stockholm Syndrome"
whereby the person accepts the perspectives and values of the enclosing group.
Alternatively, the person might engage in "oppositional decoding",
and resist that group's interpretations of reality. Obviously,
there are numerous eclectic choices as well.

My point, in using the term, "embed", is to indicate that a particular
(sub-)cultural environment has stimulated an individual long enough, and strongly enough,
that it has engendered a long-term cognitive/interpretive adaptation.
&nbsp NOTE: &nbsp Besides being embedded in a social environment consisting of
individuals and institutions, we are embedded in our own (sub-)culture's conceptual
toolbox -- our Cultural Software.
Back to: &nbsp =>Americans embedded &nbsp &nbsp
=>Embedded in Technological environments &nbsp &nbsp
=>Defining Info-war &nbsp &nbsp
=>Ideology & Objectivity

_^_Pathological Info-war Escalation :
(First, a note on censorship that's relevant to all other forms of Info-war.)
How do we (who?) define "defensive" censorship of information flows?
And how can unintended consequences of such censorship be monitored and prevented?
If "defensive" information censorship is solely a response against
"offensive" information subsidies, how do we (who?) define
and detect those info subsidies, and ensure the "defensive" response is proportionate?
To what extent does transmission of accurate information
amplify or escalate a phenomenon,
vs. merely reflect and describe it?
Such questions are important in situations of terrorism, teen suicides,
financial panics, financial bubbles, economic shortages, epidemics,
social movements, and political campaigns.
&nbsp (Does art imitate/describe life, or does
life escalate art?)

Arguably, the most important questions about information escalation
vs. information reflection involve our larger "Culture Wars"
and "Domestic Political Info-Wars": Exactly how do we define and detect
when "opponents" have launched an offensive information subsidy --
rather than what they thought we'd (correctly) perceive as a defensive "proportionate response"
to one of our earlier attacks? Are we certain this was intentional,
rather than inadvertent? Are we certain the responsible parties really
are "opponents"? If not, might our response turn them into opponents?

Definition: I'm purposely using the term Escalation
both in its common sense, and also in its technical scientific sense,
which defines Escalation as,
"the process by which species adapt to, or are limited by, their enemies,
as the latter increase in abilities to acquire and retain resources". &nbsp &nbsp &nbsp ["Evolution and Escalation: An Ecological History of Life" --
by Geerat J. Vermeij (1987; Princeton Univ. Press)]
In this case, the "species" are forms of Cultural Software.
Their most important "resources" are information, and certain
capabilities for attacking, defending, capturing,
maintaining, and propagating information resources --
namely individual and institutional "hosts", various communicative technologies,
money and emotion as "energy sources", and various symbols and concepts.
Mental attention is also a finite and valuable "resource", hence
distraction and diversion are important Denial of Service attack
capabilities -- even when deception is not involved.
This may offer a rigorous formal definition of the informal term, "Media Ecology".

The low-level processes of Escalation -- tactical Info-war --
induce higher-level Evolutionary System Dynamics --
for example, Cultural Software may converge
into strategically-stable Cultural Pathologies!
America's Incredible Shrinking Attention Span
-- caused by the economic competition for mental attention --
is one such cultural pathology.
In the political realm, Evolutionary System Dynamics explains much of the
posturing and frequent re-positioning of America's
Democratic and Republican parties: Their "enduring principles"
are defined by the pragmatic recognition that they are engaged in an Info-war
struggle for strategic viability: &nbsp Which economic and emotional energies
are available to power which ideological states, and how successfully
can they "herd" the corresponding individual and institutional "hosts"
or "subscribers" who will populate those ideological states?
Party strategists certainly are aware of these issues, although they probably
don't frame them in those terms. But some party strategists may not
be aware of their own Info-war's role in causing dysfunction. Rather, it may
appear to them that their opponents' "offensive" Info-war is causing the
problem, and they are merely responding via "defensive" Info-war.
Unfortunately, the continuous economic Denial of Service attacks
(via distraction and diversion), on which political Info-war barrages
are superimposed, have severely damaged our collective political intelligence.
Also see: Note on Cultural Pathology
&nbsp &nbsp Note on Cultural Software
Back to: =>Censorship exacerbating China Famine
&nbsp &nbsp =>Defining Info-war

_^_Cultural Pathology: &nbsp I fear that
particular cultural software ideological variants
may have evolved from being helpful (or overall neutral) symbiotes
into dangerous parasites; software that risks destroying their "host hardware" --
i.e., individual humans, and perhaps the entire human species.
In browsing the web, I found that a similar notion may have been broached in
"Breaking the Spell: Religion as a Natural Phenomenon" -- by Daniel C. Dennett (2007).
(Caveat: I have not read it.) If, indeed, Dennett suggests that "Religion"
has morphed into a dangerous parasite, I'm somewhat skeptical
(although since childhood, I've been acutely aware of the harm done in the name of
religion). Moreover, if and when the "Clash of Civilizations" crowd applies
this notion to Islam specifically, I'll probably remain skeptical.
Instead, the forms of "Religion" I nominate as
Most-Likely-to-Destroy-Humanity are
Consumerism, Greed,
Irresponsibility, and Economic Darwinism! &nbsp
I hope our other Religions evolve to counter those
destructive Cultural Pathogens, and I hope my writings
may help catalyze that constructive evolution of our religious understandings.
Also see: Note on Pathological Info-war Escalation
&nbsp &nbsp Note on Cultural Software

_^_
Of the world population of about 6.5 billion, 57% is malnourished,
compared with 20% of a world population of 2.5 billion in 1950.
D. Pimentel, et al. [TODO -- get title], in
Human Ecology, Dec. 2007
See:
Cornell Univ. News Release
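For scale, those percentages can be converted into absolute headcounts. The following back-of-envelope sketch uses only the population and percentage figures quoted in the note above; the variable names are illustrative, not from any source:

```python
# Convert the malnutrition percentages above into absolute headcounts
# (population and percentage figures from the Pimentel et al. note).
pop_1950 = 2.5e9                       # world population, 1950
pop_2007 = 6.5e9                       # world population, circa 2007
malnourished_1950 = 0.20 * pop_1950    # 20% malnourished in 1950
malnourished_2007 = 0.57 * pop_2007    # 57% malnourished circa 2007

print(f"1950: {malnourished_1950 / 1e9:.1f} billion malnourished")
print(f"2007: {malnourished_2007 / 1e9:.1f} billion malnourished")
print(f"Increase factor: {malnourished_2007 / malnourished_1950:.1f}x")
```

In other words, even as the *proportion* nearly tripled, the *absolute* number of malnourished people grew roughly sevenfold.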

_^_
The 36 million annual deaths from starvation or malnutrition constituted 58% of human deaths
from all causes in 2006. Per Jean Ziegler, U.N. Special Rapporteur on the Right to Food.
See:
Wiki: Malnutrition

_^_
To quote Barwise and Perry (below): "the standard view of logic, derived from
Frege, Russell, Tarski, and work in mathematical logic,
is completely inappropriate ... for more ordinary uses of language." &nbsp
"How far this silliness can go is evident in recent analytic philosophy,"
where "Elementary Logic", a 1972 Oxford book by an eminent logician,
baldly asserts that, "sentences [are] the objects with which
logic deals ... thoughts ... do not exist."
This is a prime example of how even smart people can unknowingly
become victims of their own conceptual models; their thoughts imprisoned
by the mental tools they hoped would set them free from error and confusion.
Thankfully, this trap -- which bound Western cultural software to a syntactic
model of logic -- began to loosen in 1983, when Barwise and Perry recognized that:
"Meaning does not exist solely within words and sentences, but resides largely in
the situation and the attitudes brought to it by those involved."
A more recent edition of their classic book is in print:
"Situations and Attitudes" -- by Jon Barwise and John Perry
(1998; Center for the Study of Language and Information; Stanford University)
It's noteworthy that Barwise and Perry credit
ecological psychology (studies of animal-environment interactions)
["The Ecological Approach to Visual Perception" --
by James J. Gibson (Houghton-Mifflin; 1979)]
for helping them transcend the blinders imposed by
their culture's previous conceptual tool.
That's probably because (some) animal scientists have trained themselves
to become more acutely aware -- more mindful -- of the
myriad subtle ways that anthropomorphic bias in their cultural software
can distort their thinking about animals. I assert that
anthropomorphic and cultural bias is even more problematic,
when we apply our conceptual tools in a self-referential manner,
to examine our own culture and our selves.

_^_
On the role of press censorship in China's "Great Leap Forward", which caused some 30 million deaths from starvation, see:
Amartya Sen. "Poverty and Famines: An Essay on Entitlement and Deprivation",
1981.

_^_
Newtonian physics can be a very useful analytical framework.
But if you fail to recognize its limits, and insist
that reality must not exhibit relativistic effects,
your physics demonstration may have catastrophic consequences.

"UC Davis makes no warranties, either expressed or implied,
concerning the accuracy, completeness, reliability or suitability
of the information contained on these Web pages
or of the security or privacy of any information collected by these Web pages.
All views expressed in this Web site are those of the author
and not UC Davis."

Rick Crawford can be contacted via email. Start with my last name,
then an "at" sign. Follow that with &nbsp ".cs.ucdavis.edu"

I began writing this page in April 2008 to introduce and "glue together" some of
my previous writings. As always when trying to teach others, I learn more about
my own ignorance, my inappropriate application of conceptual tools,
and my own unexamined or poorly-justified assumptions.
This page has evolved beyond merely a quick intro to my older writings.
I continue to work on it, trying to produce a clearer, more coherent,
more encompassing, and more useful synthesis that ties together
collective ethics, shared conceptual tools and interpretive frameworks,
individual emotions, and technology.
Hopefully, as my own thinking becomes clearer,
I can present this material via an organized structure
readers can more readily grasp.