Madison mail mess-up

Tom Slone <potency@violet.berkeley.edu>  Mon, 2 Sep 91 17:12:04 PDT

Madison, Nebraska, is reportedly in the midst of automating its mail system,
but the automation has forced people to change their addresses repeatedly.
The conversion will reportedly be finished by 1995! Meanwhile, residents are
not sure whether they are getting all their mail. Residents of the town of
Madison are forced to have their mail delivered to boxes rather than to
their homes, but some rural routes have street addresses. One resident,
Mary Duby, has three addresses listed in the phone book, apparently due to
the postal automation: two boxes and a street address. Duby said, "What a
mess. Originally I had a street address. Then I had a mailbox put up and I
was put on the rural route." [Source: an AP story reported in the San Jose
Mercury News, 2 Sep 91]

RISKS of using electronic mail

Many of us have become dependent on electronic mail as a vehicle for serious
discussions. Our addresses become widely distributed and are stored in many
colleagues' mail files. This is a serious exposure to risk. If one moves,
one may find that one's former employer feels insulted by the announcement
that one has moved on to other pastures and refuses to forward electronic
mail. The incorrect mail address may persist in electronic files for many
years, and those who write to you may find that you are an "unknown user".
What is needed is a personal communication system, one in which the
individual's address is independent of his (or her) location on the computer
network.
David Lorge Parnas (no longer at qucis.queensu.ca)
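
A minimal sketch, in Python, of the indirection Parnas is asking for.
Everything here is hypothetical (the registry, the IDs, the second address);
it is meant only to make the proposal concrete. Correspondents keep a
permanent personal identifier, and a registry maps it to whatever mailbox is
current:

    # Hypothetical sketch: a registry binds a permanent personal ID to the
    # current mailbox. Correspondents store only the permanent ID, so a
    # move updates one registry entry instead of every saved address.

    class AddressRegistry:
        def __init__(self):
            self._current = {}  # permanent ID -> current mailbox

        def register(self, person_id, mailbox):
            """Record (or update) where this person's mail goes now."""
            self._current[person_id] = mailbox

        def resolve(self, person_id):
            """Return the current mailbox, or None for an "unknown user"."""
            return self._current.get(person_id)

    registry = AddressRegistry()
    registry.register("dparnas", "parnas@qucis.queensu.ca")
    registry.register("dparnas", "parnas@somewhere.example")  # after a move
    print(registry.resolve("dparnas"))  # parnas@somewhere.example

The "unknown user" failure disappears because the thing correspondents store
never changes; only the binding behind it does.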

Re: RISKS of using electronic mail, and universal addressing

"Peter G. Neumann" <neumann@csl.sri.com>Tue, 3 Sep 91 11:30:10 PDT

There have been various proposals for lifetime-unique IDs -- for EMail, for
telephone numbers, and even for Postal Delivery -- that would transcend
geographical locations and relocations, etc. All sorts of interesting
problems are raised: decentralized implementations and whom you have to
trust with what; what happens if one of the decentralized sites is down, and
whether the implementations are sufficiently fault-tolerant to survive
multiple outages; what to do about authorizations, junk mail, revocation,
etc. But it certainly would be nice. This reminds me of some of the
problems experienced long ago in designing capability-based systems, where
capabilities have identifiers that are unique for the lifetime of the
system. So there is actually significant experience in dealing with
David's suggestion, in a broader context -- but not yet in the Internet,
that wonderful sandbox of the past that is still the sandbox of the future.
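
For concreteness, a sketch in Python of the never-reused identifiers those
capability systems relied on. The per-site prefix and the persistence
remark are assumptions for illustration, not any particular system's design:

    import itertools

    # Sketch of lifetime-unique IDs in the capability-system style: each
    # site mints from its own namespace, so decentralized sites cannot
    # clash, and the serial counter only moves forward. A real system
    # would persist the counter so no ID is ever reused, even after a
    # crash or after the named object is deleted.

    class IdMinter:
        def __init__(self, site_id):
            self.site_id = site_id
            self._serial = itertools.count(1)

        def mint(self):
            # (site, serial) pairs are unique for the system's lifetime.
            return (self.site_id, next(self._serial))

    minter = IdMinter("site-a.example")
    cap1 = minter.mint()  # ('site-a.example', 1)
    cap2 = minter.mint()  # ('site-a.example', 2) -- never equal to cap1

The trust, outage, and revocation questions above are precisely about what
happens to such bindings when the minting site is unreachable or
compromised.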

In RISKS-FORUM Digest (Saturday 31 August 1991 Volume 12 : Issue 21), you
asked about "+&*#$" as a possible New Hampshire license plate.
While it's true that "+" (plus) and "&" (ampersand) are valid characters on
a New Hampshire license plate, as is "-" (dash or minus), I'm pretty sure
that the other characters you surmise (*, #, and $) are NOT permitted. I'd
have to ask the DMV to be sure, however, which I can do if it's important.
I'm amused by your reference to "other non-ASCII graphics" -- while it's
true that some other states use bizarre characters on license plates (such
as the Lone Star on Texas plates, or the lobster on Maine plates), usually
this is not "user selectable".
New York State allows an embedded space character in license plates. This
is, I'm sure, as big a problem for some other states as New Hampshire's use
of the printable but unusual characters that are accepted here. [Live Free
Or Die!]
Dr. Thomas P. Blinn, Digital Equipment Corporation, Digital Drive — MKO2-2/F10
Merrimack, New Hampshire 03054 ...!decwrl!dr.enet.dec.com!blinn (603) 884-4865
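
To make the cross-state hazard concrete, a sketch in Python; the per-state
alphabets here are illustrative guesses, not the actual DMV rules:

    # Illustrative only: the alphabets below are assumptions. The risk is
    # that a system assuming plates are strictly alphanumeric will reject
    # or silently mangle plates that are legal elsewhere.

    ALPHANUM = set("ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")
    ALLOWED = {
        "NH": ALPHANUM | set("+&-"),  # plus, ampersand, dash (as above)
        "NY": ALPHANUM | set(" "),    # embedded space (as above)
    }

    def valid_in(state, plate):
        return all(ch in ALLOWED[state] for ch in plate)

    def naive_normalize(plate):
        # What a careless out-of-state system might do: keep only the
        # alphanumerics. Distinct legal plates now collide.
        return "".join(ch for ch in plate if ch.isalnum())

    print(valid_in("NH", "A+B1"))    # True under the assumed NH rules
    print(naive_normalize("AB 12"))  # 'AB12' -- a different NY plate
    print(naive_normalize("A+B1"))   # 'AB1'  -- ditto in NH

A database keyed on the normalized string would then happily attribute one
car's parking tickets to another.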

``More seriously, this poses all sorts of interesting RISKs issues.''
In a previous life I had occasion to work with someone who had worked on
such a project in California. The system apparently went quite far through
the development life-cycle, but then, at the very end, was dumped without
being deployed. Such a system could be used to lower fire risks by shutting
down natural gas and power distribution networks, to protect computer
systems by retracting disk heads, to start a controlled shutdown of factory
processes, to divert aircraft, etc.
What happened instead was that many of the people responsible for
performing these vital functions took advantage of the early warning to
leave work to be with and protect their families. Thus, the system ended at
"proof of concept", due to the significant risks associated with loss of
key personnel at exactly the worst possible time.
Incidentally, the system apparently did use a network of sensors, but took
advantage of the fact that the shock wave moves relatively slowly (45 - 60 mph
comes to mind, but it has been a few years).
Floyd Ferguson floydf@iphase.com
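
The arithmetic that makes such a system plausible is simply distance over
propagation speed, minus detection and notification overhead. A sketch in
Python: the 50 mph figure is the author's recalled (and explicitly
uncertain) one, and the distance and overhead values are invented for
illustration:

    # Warning time = shock-wave travel time to the protected site, minus
    # the time needed to detect the event and push the alert out.

    def warning_seconds(distance_miles, wave_mph, overhead_s=5.0):
        travel_s = distance_miles / wave_mph * 3600.0
        return travel_s - overhead_s

    # Sensors 30 miles from the epicenter, wave at the recalled ~50 mph:
    print(round(warning_seconds(30, 50)))        # 2155 s, about 36 minutes

    # At textbook seismic speeds (a few miles per *second*) the same
    # geometry leaves only seconds of warning -- which is why responses
    # like valve closure and head retraction must be fully automatic.
    print(round(warning_seconds(30, 2 * 3600)))  # 10 s at 2 mi/s

Either way, the window is fixed by geography and physics; the project's
undoing was what people chose to do with it.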

Phil Agre (pagre@weber.ucsd.edu) provides some welcome warnings about
misinterpretations of risk perception research. I share his concern that
findings that lay people evaluate risks differently than experts are often
viewed as evidence that ``ordinary people are irrational.'' There are usually
several explanations for the discrepancy between lay and expert judgments and
the data are rarely conclusive as to which explanation is best. Premature
attributions of irrationality are a significant risk in risk perception
research because, as Agre suggests, attributing irrational judgment to ordinary
people can make them seem responsible for the morbidity and mortality they
suffer.
This said, Agre's diagnosis of a ``hidden agenda inside the notion of
`risk''' was inaccurate. Agre says that ``The whole rhetoric of `risk'
started out as corporate PR,'' specifically the well-known advertisements by
Mobil Oil. The
concept of risk in the sense used in risk perception studies dates (at least)
from the beginnings of epidemiology and from the integration of probability
into the theory of insurance in the 18th century. Psychological research on
risk perception and probability judgments was well established when Mobil ran
its ads. Agre believes that it is a conclusion of risk perception research
that ``ordinary people are unwilling to accept any risk at all.'' I have never
seen a statement like this in the risk perception literature and I wonder if
Agre can find one. Agre says that ``talk about `levels of risk' and the like
erases the distinction between the experts' assessments of risk and the
assessments that ordinary people are in a position to make.'' The point of
this field is to understand how one aspect of our position in the world --
our cognitive limitations and our limited access to information -- forces
us to construct simplified models of the world. All of us need to make
decisions without the benefit of professional knowledge: how do we cope?
Risk assessment research _begins_ with a distinction between the cognitive
positions of the expert and the lay person; it doesn't erase it. By the
way, it isn't just ordinary
people who construct simplified models: there are many studies showing that
experts also have great difficulty in judging probabilities and coping with
uncertainty.
Agre describes risk perception research as ``ideology, made into a
profession.'' I hope he sees that there are also significant empirical
phenomena that need explanations, and quickly if possible. For example, it
appears that adolescent gay males have not adopted the safe sex norms accepted
by older gay male cohorts. If so, why not? Health psychologists working with
these young men think that these kids believe (inaccurately!) that HIV
infection risks apply only to older gay men. This is readily understandable:
the long incubation period of HIV infection means that an adolescent will
rarely encounter a peer with AIDS, and therefore does not perceive himself to
be at risk. This explanation is an example of the availability heuristic, the
idea that probability judgments are affected by our ability to recall vivid
exemplars of the risk in question. Is this really why these kids engage in
risk taking? I don't know: it is hard to design a study that can powerfully
discriminate among many competing plausible explanations. Agre says that the
findings of discrepancies between expert and lay judgments are ``easily
explained''. But if he wants us to believe his explanations, as opposed to the
others on offer, he will need some data that show why they are better.
Agre oversimplifies when he reduces the political implication of risk
perception research to ``corporate PR''. Many risk perception researchers
share his desire for a ``socially responsible'' technology in which people are
``told the truth, ...able to find the world intelligible and sane, [are]
consulted about things that change their lives, [are not] subjected to hazards
without their consent, and generally [are] able to participate in collective
decisions about issues of technology and social change''. All of these goals
will require that technical information be communicated to people who are not
specialists in the relevant technologies. If risk perception research can
clarify how non-specialists understand risk information, we may get an idea
about how to communicate the information more clearly.
William Gardner, Law & Psychiatry Research, Department of Psychiatry,
University of Pittsburgh School of Medicine (wpg1@unix.cis.pitt.edu)

Phil Agre's posting reminded me of a table in "The Mission Profile," in
_IEEE Spectrum_, October 1981. It describes "consumer" expectations for
various systems. I summarize (the original table had more words and a
column for availability):
System               Representative Failure Rates       Useful Life of System
------------------------------------------------------------------------------
Automatic Teller     1 per 18 mo.                       >15 years
Telephone            3 min/yr                           >15 years
Chemical Plant       Less than 3%                       >15 years
Electric Power sys.  12 min/mo. during excessive        >15 years
                     demand or storms
Television Set       3-10% during warranty period;      7-10 years
                     may continue with degraded perf.   (based on use)
Auto: engine         1% during warranty                 life of car
control
Air Traffic          2.9 unsched. interrupts per        >15 years
Control              month lasting >1 min.
Minuteman III        1 per 1.9 billion part-hours in    up to time missile is
missile              system with 8000 critical parts    capable of striking a
                                                        prescribed target
Pacemaker            1 per month among 170,000          8-15 years depending
                     devices                            on type of pacemaker
                                                        [I'm tempted to say
                                                        "lifetime" but that
                                                        would probably be
                                                        crude--CHS]
Operating System     1/hr to 1/mo                       runtime of program
I think these figures, although subjective and somewhat dated, illustrate the
range of acceptance of failure for various systems. They are not necessarily
rational or related to any more objective ratings, such as the number of deaths
caused per year by each system (a figure hard to interpret for a Minuteman
III). But, isn't *acceptance* of risk by *definition* a social phenomenon
rather than a scientific one? Death is not the only metric.
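
One way to see how far apart these acceptance levels sit is to push a few
rows onto a common scale. A sketch in Python using the table's own numbers,
under naive assumptions (constant failure rates, independent critical
parts):

    # Putting three rows of the Spectrum table on a common footing.
    # Assumptions: constant failure rates and independent critical parts.

    HOURS_PER_YEAR = 8766.0

    # Telephone: 3 minutes of outage per year, as an availability.
    phone_avail = 1 - (3 / 60.0) / HOURS_PER_YEAR
    print(f"telephone availability ~ {phone_avail:.6f}")       # ~0.999994

    # Minuteman III: 1 failure per 1.9e9 part-hours, 8000 critical parts.
    # System failure rate = parts * per-part rate; MTBF is its inverse.
    system_rate = 8000 / 1.9e9                # failures per system-hour
    mtbf_years = (1.0 / system_rate) / HOURS_PER_YEAR
    print(f"missile system MTBF    ~ {mtbf_years:.0f} years")   # ~27 years

    # Pacemaker: 1 failure per month across 170,000 devices.
    per_device_per_year = 12.0 / 170_000
    print(f"pacemaker failure rate ~ {per_device_per_year:.1e} /device-yr")

Even on this crude common footing the tolerated rates span orders of
magnitude, which supports the point above: acceptance tracks the social
role of the system, not any uniform engineering threshold.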
The corporate PR firms that started advertising based on risk reduction
believed that safety was marketable. Wouldn't our jobs be much easier if more
people believed that risk reduction was worth paying for?
Craig Seidel, SRI International

DIAC-92 CALL FOR PAPERS AND PARTICIPATION

<douglas@atc.boeing.com>  Fri, 30 Aug 91 13:08:42 PDT

Call for Papers and Proposals
DIRECTIONS AND IMPLICATIONS OF ADVANCED COMPUTING
DIAC-92 Berkeley, California May 2 - 3, 1992
Computer technology significantly affects most activities in society, including
schooling, health care, military practice, work, communication, and laws and
law enforcement. The DIAC conference considers the implications of technical
advancements on society in a broad social context that encompasses ethics,
economics, and politics. The conference seeks to address the relationship
between technology and society. Papers that address directly the relationship
between technology and policy, and papers on the ethics and values of computing
are especially desired. Papers and workshop proposals that build on previous
DIAC presentations are encouraged. Reports on work in progress or suggestions
for future work as well as appropriate surveys and applications will also be
considered. The following topics should be regarded as general guidelines for
paper or workshop topics:
RESEARCH DIRECTIONS
 + Research Funding
 + Software Development Methodologies
 + Professional Responsibility

DEFENSE APPLICATIONS
 + AI & Neural Net Applications
 + Autonomous Weapons Systems
 + Virtual Reality
 + Uses of Models & Simulations

COMPUTING IN A DEMOCRATIC SOCIETY
 + Community Access
 + Computerized Voting
 + Civil Liberties
 + Computing & the Law
 + Computing & the Workplace

COMPUTERS IN THE PUBLIC INTEREST
 + Computing for the Disabled
 + Computers and the Environment
 + Arbitration & Conflict Resolution
 + Computing in Education
 + Software Safety
Submissions will be read by members of the program committee, with the
assistance of outside referees. The program committee includes David Bellin
(consultant), Eric Gutstein (U. WI), Batya Friedman (Mills College), Jonathan
Jacky (U. WA), Deborah Johnson (Rensselaer Polytechnic Inst.), Richard Ladner
(U. WA), Dianne Martin (George Washington U.), Judith Perrolle (Northeastern
U.), Marc Rotenberg (CPSR), Douglas Schuler (Boeing Computer Services), Barbara
Simons (IBM), Lucy Suchman (Xerox), Karen Wieckert (U. CA. Irvine), and Terry
Winograd (Stanford).
Accepted papers will be presented on May 2. Accepted workshops will be
conducted on May 3. Complete papers should include an abstract and should not
exceed 6000 words. Proposals for workshops should include title, purpose,
intended agenda, and references. Workshops will be two hours in length.
Submissions will be judged on significance, clarity, insight, and originality.
Papers and/or proposals (4 copies) are due by November 1, 1991. Notices of
acceptance or rejection will be mailed by January 15, 1992. Camera ready copy
is due by March 1, 1992. Send papers to Douglas Schuler, Boeing Computer
Services, MS 7L-64, P.O. Box 24346, Seattle, WA 98124-0346. For more
information contact Doug Schuler (206-632-1659 (H), 206-865-3832 (W), or
dschuler@june.cs.washington.edu).
Proceedings will be distributed at the symposium, and will be available by
mail. The DIAC-87, DIAC-88, and DIAC-90 proceedings are published by Ablex
Publishing Company. Publishing the DIAC-92 proceedings is also planned.
Sponsored by Computer Professionals for Social Responsibility
P.O. Box 717, Palo Alto, CA 94301
DIAC-92 is co-sponsored by the American Association for Artificial
Intelligence, and the Boston Computer Society Social Impact Group, in
cooperation with ACM SIGCHI and ACM SIGCAS.