I'm going to talk today about
metaphors. Everybody here has heard and used the word
metaphor plenty in the course of your educational experience (and
the amount of educational experience in this room is pretty staggering,
so that makes lots of uses of the word). To quote a famous sage, "It's
a common word, something you use every day." It generally gets stored in
memory with all the other stuff you learn in literature classes, like
simile, plot, characters, rhyme, meter, and so on. And then it gets
forgotten, or at least not looked at often, until and unless you do
something literary. I'm here today to suggest that in fact there is a
human phenomenon (which I will call metaphor, though what
name you give it doesn't really matter much) that is much more important
to everybody than all this would imply.

In particular, it's a very important
thing for anyone professionally associated with computing to bear in
mind. That makes this a sort of practical lecture. I think it's also an
interesting topic in its own right, so I hope it will be an
intellectually stimulating lecture as well. We'll see, I suppose.

Why I'm giving this lecture

I'm a linguist, which is to say I
study language -- not so much to learn to speak lots of languages, but
rather to try to figure out just how language works in the mysterious
process of human communication. One of my special areas of linguistic
research is Semantics, the study of meaning, and within
that area I tend to specialize in Metaphor. That's not really saying
much, because what I've discovered (and I'm far from the first to
discover it) is that metaphor is such a pervasive phenomenon that
specializing in it is not a very restrictive act. Metaphor is involved
in practically everything.

I'm also a computer fan. I
started computing 25 years ago, when I was in graduate school, and I've
been thinking about computers for most of the intervening years, even when
I didn't have one of my own. Nowadays, when even someone on a professor's
salary can afford to have a computer that's bigger and better than the
monster that took up the basement of the Engineering School building at
the University of Washington in 1967, I do something
more than think about computing. But I haven't stopped thinking.

When I got my first computer, I plunged in and learned as much as I
could about it. I found that some of the things I'd figured out myself
in the decade or so I was away from the computer world were right -- and
that quite a few of those things were ridiculously wrong. I also noted
that many of the things I was learning seemed familiar somehow, even
though I had really never heard of them before. This puzzled me enough
to make me pay attention to how I was learning about computing. What I
found was that I was using my linguistic knowledge to structure and
understand (and store and access) the computing information I was
learning. I was treating this as just another language to learn. That
made it easy, since I already knew how to do that. In fact, computing
was orders of magnitude easier to learn and understand than a natural
language. This in turn made me wonder how I was doing this -- I mean,
you're not really supposed to be able to pretend black is white and get
away with it, let alone make it pay.

What I was doing was using a
metaphor. It turns out that that particular metaphor (COMPUTING
IS LANGUAGE) is by no means the only useful one around, though it
is a bit specialized in this culture, since American society is rather
naïve about language. Since then, I've found lots more, and I've also
found that people can often understand things better if they use a
metaphor and know that they're using it. In fact, the more metaphors you
use, the better. I'm giving this lecture, therefore, to encourage you all
to:

think about the metaphors you use in talking about
computers, and to

consider broadening your repertoire.

What I'm assuming about my audience

I'm assuming that you are English
speakers (native English speakers for the most part) and that you're
interested in computers (this is not a very risky assumption). I'm
assuming that you're professionally concerned with the problems of
communicating about computers -- you're involved, after all, in
educational activities that center around computing. I'm assuming
you've all experienced (probably first in yourself and then in others)
the particular species of confusion, irritation, and despair that we
often call "computer anxiety". And I'm assuming that you've also
experienced the difficulties that attend attempts to educate people
about computing.

I mention these assumptions because
I'm going to cheat a bit in this talk. I'm not going to cite ergonomic
or psychological studies to prove my contentions about how people think
about computing. Instead, I'm going to appeal to your experience (or
what I fondly believe is your experience): both your own personal
history of computing and your experience in educating others in
computing. This, you see, is how metaphors work -- we all believe,
despite overwhelming evidence to the contrary, that we share some ideas
and experiences, and it is by means of reference to those concepts that
we manage to communicate. So I'm going to both talk about metaphors and
use them today. I have no choice in the matter. Nobody does.

Some Questions About Metaphors

What's A Metaphor?

You have to distinguish (or at least I find that I have to) between
several different meanings of the word metaphor. To begin
with, there is the human cognitive capacity for metaphor itself, which is
what the lecture is all about. This phenomenon is real, but it's hard to
get at -- very abstract -- and can't really be investigated by itself.
To find out how it works, you have to look at more concrete phenomena. The
most concrete is another use of the word metaphor, in which a word
(say, spend) which is defined with respect to one kind of thing (in
this case, money) is used in context with a completely different
kind of thing (say, time, as in I spent two hours on that
report.) I call this an instantiation of a metaphor.
Note that it's linguistic in nature, and has to do with what words
we use.

The abstract phenomenon of metaphor and the instantiation of a metaphor
are the abstract and concrete poles of the uses of the word
metaphor. In between them is one other sense of the word, one
that's very important because it represents the cognitive mapping that we
use when we use metaphors, and because it controls or
licenses the actual instantiations. I call this in-between
level the Metaphor Theme, and it is this that is frequently
meant when people talk about metaphor. In the case of the instantiation
above, the metaphor theme is TIME IS MONEY. This just means
that the cognitive/semantic area (frame) of TIME is treated
semantically as if it were the same as (there are actually
differences, but never mind that now) the area or frame of
MONEY. This is like a semantic equation (and there are
differences there, too).

Using a metaphor theme means that we can use words that are defined with
respect to one frame in talking about concepts and words defined with
respect to another. In effect, we enlarge the target frame. Thus (to
continue with the example), we can talk about saving, losing,
budgeting, wasting, and just plain having time, even though
time is not really (literally) the kind of thing for which
these predicates are defined.

To summarize, the three levels are:

Metaphor as a human cognitive phenomenon.

Individual Metaphor Themes (e.g., TIME IS MONEY).

Instantiations of metaphor themes (e.g., He spent an hour on that).

What we are interested in here, then, is what metaphor themes exist that
start off COMPUTERS ARE ... (or COMPUTING IS,
etc.). We need to solve, in other words, a metaphoric equation; and, at
that, a Diophantine one -- one that has a number of possible solutions.
As is usually the case with such equations, not all of the solutions are
equally useful.

Why Are Metaphors So Important?

It's not difficult to see that much
of what we say about almost any topic is in fact metaphoric. That is,
once we get attuned to metaphors -- get used to noticing when the
language being used in some discourse is not literally defined with
respect to the topic under discussion -- we begin to find them
everywhere. This is one of those cognitive discoveries that can burst
on someone as a satori-like experience, and in fact most experiential
philosophies like
Buddhism or Taoism have some fairly pungent things to
say about metaphor, and its peculiar problems. I'll content myself here
with repeating something from
Chuang Tzu ...

"Snares are used to catch game; once you have

the rabbit, you can dispense with the snare.

"Fish-traps are used to catch fish; once you have

the fish, you don't need the fish-trap anymore.

"Words are used to catch ideas; once you have

the idea, you can throw the word away.

"Oh, how I wish I knew someone who had thrown all

his words away, so I could talk to him about ideas!"

... which sums up very neatly the problems of dealing with metaphor
while using language. Words and phrases have to be defined in terms of
some semantic frame, and the structure of that frame pretty much
delimits what we can say and how we can view any topic. That makes it
very, very difficult to talk about metaphor, and equally important to do
so. In particular, choosing the right metaphor can make a terrific
difference in how (or even whether) our attempts at communication are
perceived.

How Do You Find Out How They Work?

There are many ways, but the one I
use is linguistic in nature, since I'm a linguist. As they say in
Linguistics, pick a language at random, say, English. And pick a topic,
say, computing. Collect samples of discourse in that language about that
topic. Then look at the words and phrases used, and seek out their
literal senses (i.e., discover the frames where they are defined). Then
posit metaphor themes that license such uses of the words and phrases.
Then test these themes to see if you're right. We'll be seeing a number
of themes today. I'm not going to try to categorize them; inventing
categories for metaphors is a fun game, but ultimately frustrating,
since they're always just more metaphors. We'll just stroll about and
see what we find.
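To make that procedure a little more concrete, here is a toy sketch in Python (my own illustration, not any real linguistic toolkit) of its very first step: flag the words in some computing talk whose literal frame is something other than computing. The little lexicon of literal frames is invented for the example, and real work would of course need lemmatization and a far larger dictionary.

    # Flag words in computing discourse whose literal ("home") frame is
    # something other than computing -- candidates for metaphor themes.
    # The lexicon below is invented for the illustration.
    LITERAL_FRAME = {
        "spent": "MONEY", "save": "MONEY", "waste": "MONEY", "budget": "MONEY",
        "run": "RACING/ANIMALS", "crashed": "VEHICLES", "froze": "WEATHER",
        "folder": "OFFICE", "desktop": "OFFICE",
        "virus": "MEDICINE", "infected": "MEDICINE",
    }

    def metaphor_candidates(sentence, topic_frame="COMPUTING"):
        """Return (word, literal frame) pairs whose home frame isn't the topic."""
        words = [w.strip(".,;!?'\"").lower() for w in sentence.split()]
        return [(w, LITERAL_FRAME[w]) for w in words
                if w in LITERAL_FRAME and LITERAL_FRAME[w] != topic_frame]

    samples = [
        "My program crashed and the screen froze.",
        "I spent an hour trying to run the backup.",
        "Don't open that folder; it may be infected with a virus.",
    ]
    for s in samples:
        print(s, "->", metaphor_candidates(s))

The flagged pairs are only candidates, of course; positing the themes that license them, and testing those themes, is still the human part of the job.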

Metaphors About Computers

Novelty and the role of metaphor

It's a truism that new things are
hard to talk about -- our experience moves much faster than our language
does -- and few things are newer than computers. Just 60 years ago,
there were NO computers in the world. Anywhere. There
was talk about them in recondite academic technical circles, but the
number of people who had even heard such talk (let alone understood it)
was extremely small. I don't have to tell you about the results of the
ensuing half-century; they're all around us. Naturally, we have had to
cope, and the major way we have done so is by using metaphors.

These metaphors have typically been
invented spontaneously by people who understood some aspect of computing
in order to communicate with others, to be able to tell them about
something they didn't already understand. This might be a case of one
hacker telling another one about a neat new algorithm, or it might be a
case of somebody writing a users' manual. You can think of lots of
other cases. The difference between the two cases I mention here is in
the presuppositions that are assumed by the speaker (or writer) to be
part of what the listener (or reader) believes. I say both
speaker and writer because metaphors are by
no means a matter exclusively of the written language; however, since
writings are more permanent than speakings and therefore tend to
dominate the evidence, we sometimes lose sight of the relative
proportions of language use -- millions of words spoken for each one
written -- and I want to emphasize that I'm talking about all language
use here.

Technical language is full of
metaphors, but they are pretty impenetrable to outsiders. Think about
what's meant by (say) signing on to a computer system. In real
life, one signs on to a ship's company, or to a project, or to
some other group; the metaphor is one of joining an enterprise and
identifying with it by pledging with your signature, but if you weren't
familiar with the details, you wouldn't have a prayer of understanding
what's meant. Metaphors in this case are a part of the cultural context
and serve as much to mark in-group status as to serve a more idealistic
communicational function. And as with all such group-marking phenomena,
there are dialects: in some computing environments you don't sign
on, you log on. Or occasionally in. But that's
sociolinguistics rather than semantics.

My concern here, however, is not so
much with these fascinating twists and turns of jargon as with the metaphors
used with intent to communicate to people who can't be expected to know
the details. Yet. This is a really serious problem, since it's
not clear just what they can be expected to know, and any writing that
is done without having in mind a clearly defined audience with clearly
defined background knowledge is practically impossible to bring off
successfully. As you all know.

Old myths and new

One thing we can take for
granted about our audience when we write or teach is that they are
members of our culture. This isn't, of course, true any more about
audiences in some media, such as the Net, but these are rapidly evolving
their own cultures so they can play the same games. As such, they are
parties to a number of communal jokes we play on one another for various
purposes. One word for such jokes is Myth. A myth is a
species of metaphor that is:

(1) Widely, even universally, known and used in a culture or subculture;

(2) Largely unconscious in nature [possibly because of (1)];

(3) Literally false, or even ludicrous, when spelled out.

Myths can concern anything at all, and I don't really want to go
into the subject too deeply here; as a colleague of mine once suggested,
the grammar of mythology is a bloody business. Let's get back to
computers.

Computers are the subject of plenty
of myths. They are new and therefore scary. Scary things need
explanations; when we have an explanation, a label, we can put the
scariness into a box and feel in control of it. This is a silly way to
behave, of course, but it's pretty human. If computers hadn't been so
damned useful, this wouldn't be a problem; on the other hand, we
wouldn't be here discussing it, either. Since they have had such an
effect on everyone's life, we need to take a look at the metaphors about
them that have taken on the status of myth and to see what effects
they've had.

These myths fall into a number of categories:

Deus Ex Machina

The basic idea here is that
computers, being powerful, mysterious, and omnipresent (and therefore
very threatening), take on some or even all of the classical aspects of
gods or demons. It's sort of a contrary of the well-known explanation
for why dinosaurs are so popular -- they're said to be "Big, Dangerous,
and Extinct", and thus a safe subject to fantasize about. Computers are
big and dangerous, but very far from extinct, so many people feel
threatened by them.

This one had a lot going for it
during the first part of the last half-century, since early computers
were very large and remote, understood by only a few, controllable only
by secret rituals, and ministered to by specially-trained (and -gowned)
people who were already admitted to the mysteries, and governed the
admission of others.

For those in on the secrets, of
course, this is a rather convenient view to encourage in others, and one
often encounters it in large Data Processing enterprises. Likewise, for
those on the outside, this myth makes computing look like a very
dehumanizing activity, and their resistance to computing can take on
religious overtones. Needless to say, all this makes life much more
difficult for those whose job it is to de-mystify computers, since they
can wind up coming across either like heretics or like soulless minions
of the Devil. In short, this myth is not one we should encourage, not
that I think anyone does.

Mathematical Machines

A related issue is the fact that
American culture has never been fond of mathematics; 'Rithmetic is the
last and least of the 3 R's, and the overwhelming majority of our
compatriots have neither interest in nor understanding of anything that
even looks mathematical. It is therefore a very simple matter to
predict what the social response will be to any innovation that is
billed as being primarily mathematical in nature.

To add to this, there is a fairly
common distrust of The Machine in our culture. Some
machines we have to accept, just to get along -- the automobile, the
telephone; some, like TV, seem to be insidiously dangerous; some, like
nuclear weapons, are dangerous in ways that are far worse than
insidious. Despite (or perhaps because of) our dependence on machines,
American culture is resentful of them and often views them as
dehumanizing.

Put these two mythic viewpoints
together, stir in some of the feelings of Deism mentioned above, and you
get a real sense of what computer anxiety is all about. Since computers
really are machines, after all, we don't have much choice about
this. Of course, there are machines and there are machines.

The Pathetic Fallacy

Many of you will be familiar with
this phrase as the classical name for what we now call anthropomorphic
language, that is, the attribution of human qualities to non-human
things. All languages have a vast repertoire of terms that refer only
to humans and their activities, traits, feelings, appearance,
intentions, etc. As a species, we are very narcissistic; human terms
probably constitute at least half of the vocabulary of every language
and are by any standards our favorite topics for discussion and writing.
It's not surprising that we see them everywhere, or that we attribute
them to something like a computer.

It's the social role of a human that
the computer (more correctly, the software on the computer) is expected
to take on. There are lots of varieties of this myth because there are
so many roles for humans: servants, confidantes, secretaries, bosses,
friends, enemies, therapists, etc.

Nevertheless, it's pretty obvious
that expecting a computer to act or react like a human would is asking
for trouble. Not that Eliza or Bob ever shied away from trouble.

Some examples

In this section I want to give you
a short tour of some of the uses of metaphor in computing. Each of the
following is one example (where there are probably hundreds) of views of
computing that are current in American culture. Each of the views (i.e.,
metaphor themes) licenses particular kinds of language to talk about
computing and has particular consequences in the culture. Each has its
problems, each its opportunities.

The Servant Problem [THE COMPUTER IS A SERVANT]

Few Americans have ever had
servants; otherwise, this would be a very commonly-remarked phenomenon.
As it happens, having servants is a mixed blessing. To begin with,
servants belong to a different social class from their employers, and in
most areas of the world, that means they speak a different language, or
at least a dialect that can verge on mutual unintelligibility. Nor are
they usually well-educated, nor do they always subscribe to the same
cultural goals and standards as their employers. As a result of all
this, one must often spend at least as much time and effort supervising
a servant doing a task as one would spend doing it oneself. Many times
it's much simpler to do it yourself.

Now put this into a computer
metaphor. Everybody would love a program that was a good servant; but
like even the best of servants, such a program must be instructed on
what to do and how to do it. And if you're still not satisfied with how
the program or the servant does it, too bad. There is only so much any
servant (or any program) can be expected to know or to learn. And now
we come to the kicker -- computer programs are much more limited than
humans, and can typically only do one kind of thing. To do
another, you need a different program (or specialized servant),
and that one can't speak the same language as your other one(s), and
can't learn anything they already know. Therefore you must
painstakingly learn yet another language, and yet another set of
personal (programmatic) idiosyncrasies in order to make it work for you.
I won't even mention the effects produced by the American phobia for
languages.

This variety of anthropomorphic
metaphor theme (THE COMPUTER IS A SERVANT) is reinforced
by (among other things) software that uses first person pronouns, by
interfaces that make the user learn a recondite and unchangeable set of
terms for what the software can do, and by overly cute documentation
that personalizes the name of the program.

Running Fast [THE COMPUTER IS A RACE]

Computers appear animated; that is,
things seem to move about and responses to user stimuli can be noticed.
Of the classic criteria of animateness (growth, ingestion, excretion,
and irritability), good analogs exist for computing. In particular, the
speed of computer response is an especially gratifying and very
important phenomenon. I think it is no mistake that we use the
transitive verb run to refer to the execution of computer
programs.

I imagine many of you here are
familiar with the original title of the magazine Dr. Dobb's
Journal; it used to be "Dr. Dobb's Journal of Computer
Calisthenics and Orthodontia", with the epigraph "Running Light
Without Overbyte" for those who weren't in on the joke. The English
verb run is intransitive; that is, it doesn't take a direct
object. It can be made into a transitive verb, though, and then it is
causative, i.e., it means cause to run. One of its most common
uses (celebrated in Dr. Dobb's athletic reference) is in referring to
racing animals, or in idioms like run me ragged, where what is
connoted is the causing of very rapid performance.

In this case, it is control of the
(rapid, animated) behavior of the computer that we're talking about. To
use run is to be a speed freak like those who constantly try to
make everything go faster. Take a moment to reflect on how short a
second is in normal human activities, but how l o n g it is when
you're waiting for a computer response. As computer users, we've become
addicted to instantaneity.

But, to quote one of the maxims from
Kernighan and Plauger's The Elements of Programming Style, it's
important to "Make it right before you make it faster".

Software Tools [THE COMPUTER IS A TOOL]

Another Kernighan and Plauger book,
Software Tools, is said to
have
started a revolution in software
design. Whether true or not, it certainly was a clear case of a
metaphor being used consciously. A MACHINE is a form of
TOOL, and a TOOL is an extension of our manipulative ability
-- functionally, something you use in your hand. Tools extend our
ability to apply energy. There are, as we've discovered in the last
couple of millennia, two kinds of energy involved in tools.

One is the obvious physical sense of
power, exemplified in the harder blows of a hammer as compared with a
fist; the other is more subtle, and really refers to information. With
a handsaw, for instance, a user supplies both types of energy
simultaneously; with a power saw, on the other hand, the user supplies
only the controlling energy, the information, while the motor provides
the power; and with an automated drilling machine, both types are
separately powered.

You can probably see where I'm going
from here. It's only a small step from a machine where one kind of
energy is powered as information to one where the metaphor is turned
back on itself and two kinds of information are powered separately.
This leads directly to the distinction between algorithms and data, the
abstraction of which is the core of computer science.
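Here is a minimal sketch, in Python, of one way that separation shows up in everyday programming: a single algorithm (sorting), written once, powered separately by the data it works on and by the controlling information that tells it how to compare items. The inventory and prices are made up for the example.

    # One algorithm, written once ...
    def sort_records(records, key):
        """Sort any records by any rule; the procedure itself never changes."""
        return sorted(records, key=key)

    # ... powered separately by the data ...
    inventory = [
        {"name": "wrench",  "price": 8.00},
        {"name": "handsaw", "price": 24.00},
        {"name": "drill",   "price": 89.99},
    ]

    # ... and by the controlling information (how items are to be compared).
    by_name  = sort_records(inventory, key=lambda r: r["name"])
    by_price = sort_records(inventory, key=lambda r: r["price"])

    print([r["name"] for r in by_name])    # ['drill', 'handsaw', 'wrench']
    print([r["name"] for r in by_price])   # ['wrench', 'handsaw', 'drill']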

For a technician, this is probably
the most useful and productive metaphor available for computing. For
anyone else, however, there are problems. Awareness of technical details
is a plus for a technician; it's pretty often a hindrance to those
interested only in using the machine for their own purposes. Something
else has to be provided, yet, paradoxically, the technician is almost
certainly the wrong person to decide what it should be.

Car and Driver [THE COMPUTER IS A MACHINE]

As I mentioned, allied to the
TOOL metaphor is the MACHINE metaphor. I've
mentioned the latent Luddite tendencies of many of our fellows; it's
also true, though, that quite a few of us are very fond of machines. The
best (and most locally relevant) example of this is, of course, the
automobile. Our culture has not just accepted automobiles -- we've
embraced them wholeheartedly. So it's tempting to use the metaphor of
driving a car to refer to using a computer. There are some
benefits to this; however, there are even more problems.

The automobile is a machine that you
don't need to understand in order to use. While there are plenty of
people who enjoy tinkering with cars, many more want nothing to do with
such activity. They want to use the car. This they can do,
because functionally a car is simply an extension of a natural activity
(movement) whose operation can be transferred from other learned
activities. In short, you learn a high-level skill associated with the
hardware -- you learn to drive a car. What you do with it is then open
to your own intentions.

Computers don't really work like
this. Learning to run a computer doesn't help you at all in using it;
in fact, it's not clear just what learning to run a computer
might mean. Computers, unlike cars, have software, and the software is
what you wind up using. The appropriate analogy to having a car would
be if you had a programmable car. You would have a program that took
you to the grocery store, another that took you to work, a third one --
that you customized yourself -- that took you to Grandmother's house,
and so on. To go anywhere else (or to take a different route) you'd
have to get a different program or change some installation constant.
And it goes without saying that each program would behave differently,
use different controls, display different messages, and so on. If cars
really were the same kind of machines as computers, we'd never use them.

The Desktop [THE COMPUTER IS A WORKPLACE]

We're all familiar with the
Macintosh Desktop and its origins in the Xerox Star and its copies in
Windows, etc. And we've all had lots of discussions about how great an
advance it is in user interface design (whether we believe that or not,
there are enough folks who do to involve us in such discussions almost
endlessly). You have to admire that kind of enthusiasm, and the
products that evoke it. Nevertheless, we've not yet arrived at the
perfect user interface.

I'm not going to get involved in a
debate on this here; I just want to point out a few things that I think
aren't quite perfectly realized yet and see what you think. To begin
with, the designers of the Mac were quite correct that many people seem
to think visually and would welcome a visually-oriented interface. If
the graphics quality isn't quite high enough to distinguish some of
those icons, well, it'll get there, right? But there are others who
don't think quite so visually, and who find the specificity of icons
somewhat confining. The single fact that is true of all computer users
(as of all human beings) is that they're all different.

Even (perhaps especially) when
dealing with people who are highly visually-oriented, there is the
problem of information accessibility. I, for instance, happen to be a
person who likes to have information resources visible; so I clutter my
desk with things I refer to often, leave books I'm reading in places
where I'll see them, post notices to myself, and so on. On any visual
interface, you don't really get to see the information; you get to see
the labels, and you must remember what's what. My Mac or Windows
desktop doesn't really look like my real desktop, and it isn't nearly as
useful. I doubt I'm the only person for whom this is true.

What's really at issue here is what
I call density. Not the physical concept of the same
name, but a metaphorical one (what else?) dealing with access to
information. I happen to like information to feel dense, like there's a
lot of stuff in there (wherever there may be in the metaphor, not
to mention stuff), like I can just reach in and grab it; which is
why I like to see a lot of things at the same time, and have them
interconnected if at all possible, whether or not they're logically or
conventionally linked together. To a certain extent, this reflects how I
think my mind works. Others prefer their information less dense, with
fewer high-level nodes and less clutter overall, which probably reflects
how they think their minds work. And for still others, this
doesn't seem to be an issue at all. But there certainly is an enormous
variability in how people deal with information density and access, and
with how they externalize this in their interactions with computers.

I don't really see a great deal that
can be done about it, in fact, beyond making user interfaces as
customizable and flexible as possible, and using a lot of
synonyms when designing them. The point I want to make here is that
diversity in personal styles of information management is not yet a
well-known or -handled part of user interface design. There's always a
big problem with adaptation; either you have to adapt yourself to the
design of the computer (and you may not be able to do so usefully), or
you have to adapt the computer to your own strategies (and this is a
very difficult task at best). Mostly we try to do both, with quite
variable degrees of success.

File Systems [THE COMPUTER IS A FILING CABINET]

The name file, given to the
most commonly used artifact of software, is another thing that people
have to get used to. The metaphor here is that of a business office,
with a filing cabinet full of folders, each containing some kind of
information, each with some kind of label.

This is very misleading, though it's
too late to do much about it, since the term is too firmly entrenched in
technical jargon. The problem is that real files all hold the same kind
of thing (legible papers), while computer files can hold anything at
all, much of which isn't legible at all, at least by humans. The idea
of putting executable code (for example) into a file is obvious enough,
once you know something about computing; however, it's anything but
obvious at first.

Even after you get over this
hassle, though, you have to learn (usually the hard way) about file
formats, and about the hassles of trying to get information from one
kind to another. The filing cabinet metaphor gets stretched too thin to
be of use here; in fact, when I started looking for examples, I came on
this one with a shock of recognition -- it's been ages since I thought
about computer files as having anything to do with filing cabinets.

Fun and Games [THE COMPUTER IS A TOY]

The phenomenon of computer games
is an unlooked-for one, and in my opinion one of the most sanguine
examples of the serendipity we all expect to find in the information
revolution. The first really successful personal computer company was
Atari; of course, the Pong boxes weren't as neat as the Apple ][, but they
paved the way for it, and significantly influenced its design. Not to
mention the design of its successors. What would the Mac be like (would
there even be a Mac, or an Apple?) if there had been no computer games?
Looked at in this light, you can see the development of SpaceWar
(the first visual computer game) at the MIT AI Lab as, in fact, a
monumental advance in computer science.

The reason I'm so optimistic about
games and their ilk is that (as we all know) games are fun. Now we also
know that computers are fun, but this is for some people a difficult
proposition to swallow; even these folks, however, know that games are
different. By definition, they're fun. And fun is precisely what we
all need; by which subversive remark I mean that the protean promise of
computing will never be kept unless the kind of enthusiasm and
creativity we're willing to put into fun activities like games is
routinely harnessed.

We have a sort of problem here. The
market forces driving the development of hardware and software are
oriented to the world-view of business and conservative institutions,
where Things Must Be Taken Seriously. This leads to speed, which we all
welcome, of course, and sometimes efficiency, which has its place, but
only rarely to fun, and therefore only rarely to real creativity.

For instance, few of us are
interested in yet another word processor. On the other hand, I've often
wondered what it would be like to do writing -- or, more interestingly, to
learn writing -- on a word processor that had sound effects -- real
bells and whistles. Pop! when you delete a word,
Zzip! when you delete a line, and so on -- I leave the
remainder of the design as an exercise for your imagination. The point is
not that it might be more efficient, but that it might be more fun. And it
might be a good idea to encourage fun in computing, just as it is to
encourage it in education. Serious Business is good business, all right,
but for repeat customers, fun sells better.

One of the most hopeful signs I've
seen in the computer world is the sense of humor that's evident
everywhere. April 1st is the most important holiday on the calendar of
the computer culture; I hope we keep it that way for a long time.

Conclusion

I've mentioned a number of metaphor
themes that we use to approach computers as things and computing as
activity. Some of these have Mythic status -- that is, they developed
on their own, in the "cultural unconscious", and we have to deal with
them, willy-nilly. Others are more or less conscious choices, made for
particular reasons in particular contexts. There are still others that
we haven't mentioned.

I want to switch here from talking
so much about the language used about computing to return to my own
favorite metaphor theme, which could be roughly stated as
COMPUTING IS A LINGUISTIC ACTIVITY. This is the other
side of the coin, so to speak. There are plenty of things about
computers for which at least some of the metaphor themes I've mentioned
are not only appropriate but productive; for one of the most
important areas, however, I have some hopes for this one.

The area of computing that (I
think) everyone agrees needs the most work is user interface design.
Progress in this area has led to such advances as the Mac interface; but
there's more to the story. What it's all about is communication, and,
while communication is not simply a matter of language, nevertheless,
human language is the principal phenomenon of human communication. There
are things that are known about it. These have been used in such areas of
research as Natural
Language Processing and Artificial Intelligence, but I want to turn it
around and see what insights can be gained from looking at human
interaction with computers as a linguistic process.

To begin with, it is natural
enough to view it this way, since keyboards are derived from typewriters,
and those are used to produce written language. A naïve computer
user will automatically use prior experience in typing and attempt to
apply it to the task at hand. Early user interfaces that were oriented to
a command line,
in fact, explicitly attempted to use this, by making the command itself
into a structure of Imperative Verb plus Direct Object, something derived
directly from English grammar. This was an instantiation of the
SERVANT metaphor theme, and the imperative is the form used
to give orders.
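A toy command interpreter makes the grammatical structure easy to see. The command names and behavior below are invented for the illustration (they are not any real shell's commands); the point is only the Imperative Verb plus Direct Object pattern.

    import shlex

    # Hypothetical "servant" actions; each verb names an order the program obeys.
    def handle_copy(objects):   print("copying", objects[0], "to", objects[1])
    def handle_delete(objects): print("deleting", objects[0])
    def handle_list(objects):   print("listing", objects[0] if objects else ".")

    VERBS = {"copy": handle_copy, "delete": handle_delete, "list": handle_list}

    def execute(command_line):
        words = shlex.split(command_line)
        verb, objects = words[0].lower(), words[1:]  # Imperative Verb + Direct Object(s)
        VERBS[verb](objects)                         # give the order

    execute("COPY report.txt backup.txt")
    execute("DELETE old-notes.txt")
    execute("LIST")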

I could go into a lot of linguistic
detail about the grammar of such commands; other forms that can appear
are directly analogous to cases -- for instance, the DOS/Unix
redirection arrows function precisely like the ablative and dative cases
in Indo-European languages. More important, however, is the fact that,
unlike most servants, most computer programs don't have much of any
facility for what we call repair procedures -- i.e., what happens when
we misspeak or see that we are misunderstood. It's here that the most
important limitation of the principal myth about language in our culture
comes out.

This myth is called the
CONDUIT METAPHOR, and it is particularly easy to see in
distinguishing spoken from written language. In spoken language, we
appear to be understanding a person through what they say; in written
language, on the other hand, we appear to be dealing with the words
themselves, and the literal meaning (the word literal itself simply
means 'written') becomes a matter of very great importance. If we make a
mistake in conversation, we can back up, restate, ask questions, pause,
look dumb, or behave in a lot of different ways that can lead to
clarification, rather like an elaborate error-trapping routine.
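As a rough analogy, here is what even a minimal error-trapping "repair procedure" might look like in a command interpreter: instead of rejecting a garbled order outright, the program guesses the nearest verb it knows and asks for confirmation, the way a human listener says "Did you mean...?". The verb list and behavior are invented for the illustration.

    import difflib

    KNOWN_VERBS = ["copy", "delete", "list", "rename"]

    def repair(verb):
        """Accept a known verb, or propose the closest known one and ask."""
        if verb in KNOWN_VERBS:
            return verb
        guesses = difflib.get_close_matches(verb, KNOWN_VERBS, n=1)
        if guesses:
            answer = input("Unknown command '%s'. Did you mean '%s'? [y/n] "
                           % (verb, guesses[0]))
            if answer.lower().startswith("y"):
                return guesses[0]
        return None  # no repair possible; the conversation breaks down

    print(repair("copu"))   # proposes 'copy' and waits for a yes or no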

In written language, however, we
have much less to go on, and have consequently developed conventions for
interpreting the writer's intentions. The Conduit Metaphor, which is a
myth that is used to explain how we can communicate, even though we're
not telepathic, supports the views:

that words and meanings are physical objects of the same type;

that (literal) meanings are attached to words (the metaphoric
attachment is that the meaning is inside the words -- note the
use of such phrases as in a few words, empty words, full of meaning,
and the like);

that communication is a matter of shipping the word strings over to
the listener and having them unpack them.
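Spelled out as code, the theory looks deceptively tidy. The sketch below (a deliberate caricature, not a model anyone proposes) takes the Conduit Metaphor at its word: meaning is an object, it gets packed "inside" a string of words, shipped, and unpacked intact at the other end.

    import json

    def speak(meaning):
        """Pack the meaning 'into' the words and send them down the conduit."""
        return json.dumps(meaning)

    def listen(words):
        """Unpack exactly the same meaning at the other end -- no interpretation."""
        return json.loads(words)

    intended = {"request": "close the window", "politeness": "high"}
    received = listen(speak(intended))
    assert received == intended   # on this theory, communication never fails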

This is a pretty silly theory,
particularly when applied to natural spoken language, but it has a
certain utility in its application to writing. Written words are
physical, after all, and written communication is a matter of
exchanging strings of words. Since two of the three parts of the
Conduit Metaphor seem to work well, it's a simple matter to assume the
third -- that meaning is attached tightly to words (in fact,
inside them), and that therefore any difficulties in
understanding are due to the writer's improper use of words.

Returning to computers, we see that
the natural equation of computer interaction with written communication,
coupled with the equally natural acceptance of the Conduit Metaphor to
explain the functioning of written language, has led quite naturally to
a situation in which computer software is doomed to behavioral flaws in
its interaction with humans, since it is operating on the flawed
assumption that meaning (which would really be better termed
intention) is a literal matter. This isn't something that we can
do a great deal about, of course, given the difficulties inherent in
determining intentions and of making workable programs in the first
place.

Some solutions may be in the
offing, and thinking about them is interesting. For instance, while speech recognition is very
difficult and isn't going to become widespread soon, it offers some
possibilities for getting away from the written language metaphor. It is
startling to think that within a few decades, typing may be as rare a
skill as knapping a flint arrowhead, driving a coach-and-four, shooting a
flintlock musket, or solving an equation on a slide rule. We may be
living in what might come to be called the Written Input Era; like
the Vacuum Tube Era, one of the fascinating sidelights of
technological history. Before we all learned how language really
works.

Of course, speech recognition
doesn't provide the whole story; there is plenty of hard work to do in
determining just exactly what intentions people can have in using
computers. In our contact with new users, we've all come upon some
strange ones. What I want to suggest today is that these conceptions
aren't really as strange as we might consider them. They're normal
ideas, based on normal expectations, which happen not to be met by
currently normal hardware and software. We have little choice right now
except to try to adapt the users to the machine; but the future holds
some hope of being able to adapt the machines to the users.

Provided, that is, we don't become
wedded to our myths and embedded in our current metaphors. Some people
(me among them) are often accused of Mixing Metaphors.
This is supposed to be a bad thing. I'll admit it can be a bit
confusing, but I really think it's our only hope. The more different
views you have of something -- and the more different the views are --
the more hope you have of understanding what the thing is really like.
Of perceiving some aspect of its reality that isn't apparent in any of
the individual views.

The best metaphor I know of to
explain this is the phenomenon of binocular vision, or stereo sound. We
have two eyes and two ears, even though each one of them works fine
alone. The other one isn't just a spare, though, because using them in
parallel provides information about what is being perceived that isn't
carried in either of the separate images. We perceive depth in visual
or aural signals precisely to the degree we use separate, different
signals and succeed in integrating them into a single percept. Nobody
really understands what computers are, let alone what they can be; my
suggestion is that we leave it that way, and continue coping with
imperfect metaphors. The more the better. And of course, continue to
have fun.