Phoebe Sengers responds in turn

Whether CTPs should walk on three legs or two; how the robotic
artwork Petit Mal is "interpretationally plastic"; what cultural
assumptions we build into machines: these are just some of the
topics taken up in this response.

(To Lucy Suchman)

Yes, life is not a story, and therefore, by extension, the
Patient is not alive. The lesson of the critiques from
anti-psychiatry and narrative psychology for Artificial
Intelligence is the danger of assuming that life can be cleanly
understood, represented, and implemented by, as Lucy Suchman says,
an "omniscient and regulatory 'designer'-creator." This is not to
say we cannot tell stories of life, but that those stories must
always be traceable to the one who is doing the telling, who is
accountable for what those stories say. In the nightmare world of
the Patient, therefore, all trappings of the omniscient nature
documentary have been removed; the Patient is a cartoon, the
Patient is fantastic, the Patient was clearly built by someone with
a goal in its telling, and the Industrial Graveyard and its
user-unfriendly interface are designed to make this clear. The
Patient is, in the tradition of literature and film, enacting a
story; it is, in fact, my story [Sengers, 1995], which may throw a
new light on the confusion that Suchman feels!

At the same time, one must ask whether the construction of the
Patient fully allows for the flexibility, multiplicity, and
negotiability of narrative. The answer to that is no. One of the
major shortcomings of the system is that I serve up prepackaged
narrative without much leeway for audience interpretation. I decide
on all the behaviors and transitions ahead of time, and then the
goal of the system is simply to make sure that those decisions make
it across the yawning divide to the user intact.
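To make concrete what "prepackaged" means here, a minimal sketch of the kind of fixed behavior-and-transition structure being described may help; the behavior names, events, and transition table below are hypothetical illustrations, not the Patient's actual code.

```python
# Hypothetical sketch of a prepackaged behavior/transition structure:
# every behavior and every transition is fixed by the author in advance,
# so the audience's interpretation cannot feed back into what happens next.

BEHAVIORS = {
    "sulk":     "The Patient withdraws to a corner of the Industrial Graveyard.",
    "plead":    "The Patient gestures toward the user for attention.",
    "collapse": "The Patient falls over and twitches.",
}

# Author-specified transitions: (current behavior, observed event) -> next behavior
TRANSITIONS = {
    ("sulk", "user_approaches"): "plead",
    ("plead", "user_ignores"):   "collapse",
    ("collapse", "timeout"):     "sulk",
}

def next_behavior(current: str, event: str) -> str:
    """Follow the prepackaged story; stay in the current behavior otherwise."""
    return TRANSITIONS.get((current, event), current)

if __name__ == "__main__":
    state = "sulk"
    for event in ["user_approaches", "user_ignores", "timeout"]:
        state = next_behavior(state, event)
        print(event, "->", state, "|", BEHAVIORS[state])
```

In a structure like this, the audience can interpret what they see, but their interpretation has no purchase on the system: the story proceeds along the author's fixed table regardless.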

A different approach that is friendlier to the value of
negotiability is that taken by Simon Penny in his robotic artwork,
Petit Mal [Penny 2000]. The design
of Petit Mal explores the extent to
which people can attribute meaningful behavior to autonomous
robots. Petit Mal is set up, not to
elicit any particular behavioral interpretation, but to allow for
many possible behavioral interpretations. Far from trying to impose
particular interpretations on the user, Penny uses Petit Mal as a blank screen onto which many
possible interpretations can be projected. Petit Mal is interpretationally plastic, and
never exhausted by the onlooker's musings; this gives its dynamics
a degree of liveliness that the Patient lacks.

The difficulty with this plasticity is that it is relatively
low-level. At the internal level, Petit
Mal does some simple navigation and obstacle avoidance
(which is, of course, regularly interpreted as much more complex
behavior). It is not clear how much more complex the behaviors
constructed for Petit Mal could become without
simultaneously greatly constraining the interpretational space. In
this sense, Petit Mal and the
Patient occupy more or less opposite ends of a spectrum running
between interpretational negotiability on the one hand and
understandable complexity on the other. If this is so, it might be
interesting now to try working towards something in the middle.
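To illustrate how little machinery can underlie behavior that onlookers read as rich, here is a minimal, hypothetical reactive obstacle-avoidance loop of the general kind described above; the sensor and control functions are placeholders for illustration, not Petit Mal's actual control code.

```python
# Hypothetical sketch of simple reactive obstacle avoidance: steer away from
# whichever side reports the nearer obstacle. Onlookers often read such
# minimal reactive loops as curiosity, shyness, or play.

import random

def read_sonar():
    """Placeholder sensor: distances (in metres) to the left and right."""
    return random.uniform(0.1, 2.0), random.uniform(0.1, 2.0)

def control_step(threshold: float = 0.5) -> str:
    left, right = read_sonar()
    if left < threshold and left <= right:
        return "turn_right"   # obstacle closer on the left
    if right < threshold:
        return "turn_left"    # obstacle closer on the right
    return "go_forward"       # nothing near: wander onward

if __name__ == "__main__":
    for _ in range(5):
        print(control_step())
```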

Still, there is something deeper behind the strain which Suchman
identifies: it is a strain between the disciplines, the question of
whether intellectual labor in the field of cultural critique can be
made meaningful within the tradition of agent-building practice and
vice versa. Following Suchman's admonition might mean driving the
project largely by the cultural critique involved, taking an
outsider perspective on agent-building, and suggesting that current
agent-building traditions carry so much negative cultural baggage
that the only hope is a radically new practice and a complete break
with tradition. Although such critique can be useful, I believe it
is essential, at least some of the time, to be willing to enter the
looking-glass world of agent-building with one's cultural armory in
tow, and to be able to relate the insights of cultural analysis to
"that world's characters, problems, projects and prospects."
Without this willingness, one cannot expect those involved in
agent-building traditions to listen to and learn from the critical
perspective. The cost of this intellectual border-crossing is that
there is not one neat, coherent perspective from which the world
can be viewed. This costs the reader effort, as Suchman has noted.
Integrating two such widely divergent worldviews and conversations
is likely possible only on a contingent basis, driven by particular
problems and projects. Nevertheless, I think it is an effort well
worth taking.

(To Michael Mateas)

Philip Agre's formulation of critical technical practices opens
a rich space of possibility by stating clearly that there is not a
single form of critical technical practice, but multiple possible
such practices. In this context, the questions Michael Mateas
raises about the relationships among different critical technical
practices - what properties they share, what they can learn from
each other, in what ways they are, perhaps, incommensurable - are
essential to ask in order to understand where this field is going.
I thank him for the opportunity to address these issues. Here, I
have organized my answers to his questions according to two
rubrics: the nature of critical technical practices in general, and
the function of socially situated AI as a specifically
agent-oriented practice in particular.

The nature of critical technical practices

In order to be able to talk about the relationship between
Socially Situated AI (SS-AI) and Expressive AI (E-AI), we need to
understand how they each relate to Agre's original notion of
critical technical practices (CTP). In Agre's formulation, in a
CTP, problems are encountered at a technical level, then understood
and addressed as philosophical problems, and finally resolved at a
technical level on the basis of philosophical insights. "The
point... is to expand technical practice in such a way that the
relevance of philosophical critique becomes evident as a technical matter" (Agre, p. xiii).
Important to note here is the primacy of the technical over the
critical in a critical technical practice. One must start
with a technical problem; then one can take a critical or
philosophical approach, through which one arrives at a technical
solution.

This is, in fact, true of SS-AI, at least the work on expressive
agent architecture which I describe in First Person. It is not, however, true of
Expressive AI. In Expressive AI, the opposite situation holds: the
technical problems that the artist chooses to tackle are a
consequence of the artist's vision of what it is he or she would
like to communicate. It is, of course, tempting at this point to
crow about the superior authenticity of SS-AI in this respect, but,
in fact, Mateas's work is a good example of why, with apologies to
Agre, I believe it is unfruitful to take the primacy of technical
work as essential for CTPs as currently practiced.

Although Agre's work in the area derived from work in AI
practice, and he sees critical technical practices as first and
foremost a variation on technical practice, what seems
constitutive of a CTP is not that the problems originally come from
technical work, but that work proceeds simultaneously or
alternately from a technical and a philosophical perspective. If,
indeed, in a CTP technical and philosophical problems are seen as
two sides of the same coin, then there is no a priori reason why the technical problem
should come first; one can instead start with a philosophical
problem, which then becomes instantiated as a technical problem.
Wardrip-Fruin and Moss make a similar
point. And with nominally digital art practices (such as
those of Simon Penny and Chris Csikszentmihalyi) and nominally
design practices (such as those of Bill Gaver and Tony Dunne)
asymptotically approaching the work of more officially technical
practices (such as those of Mateas and myself), it seems
counterproductive to stick with the primacy of technical work as a
defining characteristic. The important point from my perspective is
that a CTP makes coherent sense as one practice both from a
technical and from a philosophical or critical perspective.

In addition to this requirement, Mateas argues that there need
to be at least three disciplines involved in a CTP. I am not sure
whether I disagree about the principle or about the definition of
what it takes to be a discipline. My own practice as described here may
stand on the three legs of AI, cultural theory, and narrative
psychology, but if that is the case it is a very wobbly practice,
as the leg of narrative psychology is much shorter than the other
two! I see narrative psychology as simply one part of my cultural
theory practice; one can see this truly as three legs only if the
critical perspective is separated from the rest of the cultural
studies content, which seems artificial, but might be useful for
thinking about how CTPs function in general. Instead, I propose the
following formulation: that a CTP can be the synthesis of a
technical and any other form of practice, where the second
practice, like art or cultural studies, has a critical perspective
as an essential component, and where that critical perspective is
brought to bear on the technical work (this is not intended to rule
out three-pronged structures, or a minimalist CTP with only a
technical and 'pure' critical perspective at work, though it does
seem likely that critiquing technical approaches without having
another discipline to draw on for a concrete alternative would be
difficult).

The primacy of agency

As Mateas points out, SS-AI is limited in its definition to
agents - perhaps a better name would be "Socially Situated Agent
Design." "Agent" is not intended here as a catch-all term for all
of AI. I do not see the focus on agents as an inherent limitation;
rather, I see SS-AI as a case study in a particular domain, from
which analogies could be drawn to other areas of AI. At the same
time, agency is a particularly important case study for the
principles of SS-AI, because it is arguably the part of AI where
practitioners are most likely to believe the postulates do not
hold. Agents, unlike other AI and non-AI systems, are
theorized as fully autonomous, i.e. as existing independently of
their creators and audience. As such, agents form an extreme case;
it follows that other AI systems might be more easily socially
contextualized.

Yes, the postulates of SS-AI hold if we replace "agent" with "AI
System." In fact, the postulates still hold if we replace "agent"
with "computational system" - in which case we end up with the
postulates of Human-Computer Interaction. In this sense, SS-AI is
simply stating that what HCI practitioners have known all along
should also be the perspective of agent builders.

SS-AI is one example of a CTP, not a definition of what a CTP in
AI must look like. In my more recent work, I have been adapting my
experience with SS-AI to other domains. My work in the area of
avatars, or agents which are intended to represent human users, has
similarly been based on recontextualizing the nature of autonomy in
the human-avatar relationship [Penny et al.]. My current work in
the area of Ecological Media, or smart appliances which support
awareness of the environment in day-to-day activity, is not focused
on agents at all, but rather on changes in perception that
interactive media can bring about. In working in these areas, the
following postulates generalized from SS-AI have proven repeatedly
useful:

1. Systems need to be evaluated not only
within the technical frame within which they have been defined, but
also with respect to the social and cultural environment that
shapes the system and which the system affects.

2. Systems encode culturally specific
assumptions. These assumptions have material effects on the
behavior of the system and are experienced by the users of the
system.

3. Because of this, responsible system
design involves careful selection of the cultural assumptions built
into the machine.

Wardrip-Fruin, Noah, and Brion Moss. "The Impermanence Agent:
Project and Context." Cybertext Yearbook 2001, ed. Markku Eskelinen
and Raine Koskimaa. Saarijärvi: Publications of the Research Centre
for Contemporary Culture, University of Jyväskylä, to appear.