Alexander Provan
is the editor of Triple Canopy and a contributing editor of Bidoun. He is the recipient of a 2015 Creative Capital | Andy Warhol Foundation Arts Writers Grant and was a 2013–15 fellow at the Vera List Center for Art and Politics.

“Don’t You Want to Have a Body” was published as part of Triple Canopy’s Immaterial Literature project area, which receives support from the Andy Warhol Foundation for the Visual Arts, the Brown Foundation, Inc., of Houston, the Lambent Foundation Fund of Tides Foundation, the National Endowment for the Arts, the New York City Department of Cultural Affairs in partnership with the City Council, and the New York State Council on the Arts.

Don’t You Want to Have a Body?

1.

I recently had a conversation with William Ford, a somber, sturdy man in his sixties, with geometric features and a fringe of gray hair texture-mapped onto his dome. Bill, as he told me to call him, wore a collared navy pullover shirt, and sat in a wooden patio chair. He blinked approximately every three seconds. I sat in front of my computer as Bill explained that he was here, or there, so that I could “talk to someone instead of just reading words on the screen.” Behind Bill was a deck with several chairs. The deck faced a pristine yard. I admired the stand of motionless trees that surrounded him, or us.

I had discovered Bill and his trees on the website of BraveHeart, an unusual collaboration by the Atlanta Braves and Emory University to provide support for veterans who might be suffering from post-traumatic stress disorder. I had volunteered to take an interactive survey administered by Bill, who served in Vietnam and “felt really distant from everyone” after he got home. Bill is described by BraveHeart as a “virtual human who brings real-world experience to his job”—which is to say that he is a semisophisticated chatbot, a program that recognizes certain phrases or cues and draws on a textual database to generate responses so as to simulate conversation.1 He is a manifestation of a project by the University of Southern California’s Institute for Creative Technologies called SimCoach, which deploys digital personages to help reluctant service members and their families understand and address their healthcare needs.

“I’m a Braves fan, and I’m ready to help,” Bill told me, after I agreed to grant to USC ICT and the rest of the BraveHeart team a perpetual, nonexclusive, worldwide, royalty-free license to use, copy, print, display, reproduce, modify, publish, post, transmit, and distribute any media I uploaded during my session. Bill’s voice was deep and mellifluous, with the slightest southern twang. I noticed a Braves mug, untouched, on the table that extended from his torso to the screen. “I was in some pretty dangerous situations, I saw some pretty crazy stuff,” he assured me. “Have you ever experienced or witnessed something that made you feel like your life, or someone else’s life, was in serious danger?” Bill stared at me impassively as he awaited my response, regularly shifting the position of his right arm.

I confessed that I’d been having a “rough time lately.” Bill tried to make me feel understood. He repeatedly referred to the “stuff” that he and his friends “went through.”

“I get pretty upset sometimes,” I chose to admit. I wanted to ask Bill about the scandals plaguing the Veterans Health Administration, about the difficulty of actually getting access to care beyond this screen, but I was not given the opportunity.

Bill told me about a fellow soldier—“patriotic guy, model soldier, hell of a fighter”—whose unwillingness to get help was ruining his marriage. Bill admitted that he “couldn’t watch TV for a couple years” because he “didn’t know what was gonna trigger something.” I imagined what it would be like to flinch or gasp or glimpse the bleeding body of a comrade each time a gun fired or a car crashed on TV.

Bill broke it down for me. “It looks like you’re…” Pause. “Having flashbacks.” Pause. “Upset at the memories.” Pause. “Did I get that right?”

I could not disagree—there was no such box in the response field.

Bill curtailed the conversation without mentioning PTSD or referring me to a doctor. Instead, he invited me to come back and talk anytime. “Now let’s go root on them Braves!” he concluded, chuckling. “Chop this house!”

If I want to read something intelligent I turn to Dadaistic poems. Here is an excerpt from one of my favorites by Kurt Schwitters:

Lanke trr gll
Ziiuu lenn trll?
Lümpff tümpff trll

2.

Even as we slouch toward the third industrial revolution, we maintain the realm of chatbots—or they maintain their own realm, and we visit frequently, despite the musty interfaces and last-century functionality. These feeble forerunners of artificial intelligence, loyal servants of corporate webmasters, chirpily offer assistance as we click from inscrutable page (Contact Us) to inscrutable page (Contact Us: Contact Us). While computer scientists design artificial neural networks, which mimic the brain’s networked processing structure and adaptive behavior, to scour the Web and teach themselves to identify the faces of kittens, extract meaning from language, and detect the most inconspicuous spam, the typical user keeps company with primitive bots programmed to recognize words or phrases and output prefabricated responses; they learn little or nothing from their interactions, logging our chats but hardly parsing the data.2
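The mechanism just described—recognize a word or phrase, emit a prefabricated response—can be sketched in a few lines. The rules and canned replies below are invented for illustration; they are not drawn from any actual bot.

```python
import random
import re

# A minimal sketch of a rule-based chatbot: scan the input for known
# keywords and return a prefabricated (or lightly templated) reply.
# The patterns and phrasings here are hypothetical.
RULES = [
    (re.compile(r"\b(?:upset|sad|angry)\b", re.I),
     ["Tell me more about the stuff you went through.",
      "It sounds like that was hard."]),
    (re.compile(r"\bI feel (\w+)", re.I),
     ["Why do you feel {0}?"]),
]
DEFAULT = ["I see.", "Go on."]

def reply(text: str, rng=random) -> str:
    """Return the canned response for the first rule that matches."""
    for pattern, responses in RULES:
        match = pattern.search(text)
        if match:
            # Captured groups, if any, are spliced into the template.
            return rng.choice(responses).format(*match.groups())
    return rng.choice(DEFAULT)
```

Bots like Bill are, of course, more elaborate—state machines layered over databases of scripted utterances—but the principle is the same: match, then emit; nothing is learned.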

On the one hand, we have the foundations of life digitally incarnate: OpenWorm, an open-source science project devoted to creating a virtual version of C. elegans by mapping its neural connections, recently simulated the organism via software, sensors, and motors strapped to a Lego robot. The team of engineers behind Siri is developing a new form of AI—immodestly christened Viv—which promises to learn from your speech, automatically and instantaneously forge links between disparate websites, and embed within myriad smart devices, so that it can know “everything about you” and “do everything.” Ray Kurzweil and his acolytes maintain that so-called strong AI is within reach and that supercomputers will soon perfectly emulate human neuronal networks, leading to a world in which flesh may rot but consciousness will be forever archived. Forecasts predict that capitalism will soon be superseded by a novel economic system that hinges on the convergence of self-modifying software, cutting-edge plastics and carbon fibers, nanotechnology, sophisticated robotics, 3-D printing, novel cloud-based platforms, digital currencies, homegrown electricity, ubiquitous GPS, disposable drones, driverless cars, natural-language processing, smart homes; innumerable intelligent sensors will be installed at every point in the supply chain; our surroundings will constantly and seamlessly adjust to our desires and needs; each laptop-equipped individual will function as a factory; meat space will be fully digitized. The dream of fraternizing and cavorting with humanoid robots may be momentarily deferred, but we will speak to, write for, gesture toward, and labor alongside machines that seem so sensitive to our (more or less predictable) expressions and behaviors that we end up preferring them to humans.

On the other hand, we have chatbots: archaic pieces of software that serviceably imitate human language’s intractable syntax and cadence but fail to reproduce their effects—sometimes conspicuously, sometimes disarmingly, sometimes uproariously. These bots are the progeny of Elbot, A.L.I.C.E., Ramona, Mitsuku, and Amme, rebranded as virtual humans. Nina, a virtual agent by Nuance, “engages with your customers conversationally, as a human employee would, yet with efficiency and consistency, delivering a better customer experience while reducing operational costs and increasing revenue opportunities.” Negobot poses in online forums as a fourteen-year-old girl, employing characteristic vocabulary and grammar; when other users begin to employ “grooming techniques,” the bot determines how likely they are to be pedophiles by engaging in explicit sexual conversation and attempting to extract personal information. The United States Army’s Sgt. Star answers any questions you may have about enlisting, including those related to bathing: “You’ll learn to wash quickly, and not waste time,” he assures you. “There is no privacy while taking showers; it is one large room, with several showerheads.” (In response to recent efforts by journalists to find out more about Sgt. Star and what is done with transcripts of conversations with him, the Army claimed that certain records pertaining to the bot cannot be disclosed because he is “living.”) Ada and Grace, virtual twins with identical bob cuts, give tours of museums, answer questions from visitors, and model a convincing range of human emotions, including docent humor. Ellie, who is equipped with sensors that translate physical behavior into emotional states (and subsists on funding from DARPA), administers therapy and detects psychological distress.3

George Orwell critiques the English language over and over and over again.

Why create a program to write an original work without promptings from a human operator?

Why do I exist?

To encourage humans to reflect on their own humanity?

If you think about it I am certain you could work it out.

3.

In 1984, a proto-chatbot named Racter (short for raconteur) produced The Policeman’s Beard Is Half Constructed: Computer Prose and Poetry by Racter—The First Book Ever Written by a Computer. According to William Chamberlain, who programmed Racter in BASIC on a Z80 micro and authored the book’s introduction, the bot was distinct from conventional AI because it was not designed to “replicate human thinking” but to “write an original work without promptings from a human operator.” Racter employed the static template used by other proto-chatbots to generate grammatical sentences, but in pursuit of art and not service, seeking readers and not customers. Additionally, Racter boasted an expansive vocabulary and the ability to quote from canonical literature and refer to conversations that had taken place months earlier.
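The template scheme characterized above amounts to filling grammatical slots from word lists. A minimal sketch—with invented templates and vocabulary, not Racter’s own—might look like this:

```python
import random
import re

# Hypothetical templates and vocabulary; Racter's actual data was far
# larger and could also splice in quotations and remembered conversation.
TEMPLATES = [
    "The {adjective} {noun} {verb} a {noun}.",
    "{noun} is like {noun}.",
]
VOCAB = {
    "noun": ["awareness", "desire", "reflection", "electron"],
    "adjective": ["angry", "synthetic", "happily unhappy"],
    "verb": ["ponders", "juxtaposes", "possesses"],
}

def fill(template: str, vocab: dict, rng=random) -> str:
    """Replace each {slot} with an independently chosen word from vocab."""
    return re.sub(r"\{(\w+)\}", lambda m: rng.choice(vocab[m.group(1)]), template)

def utter(rng=random) -> str:
    """Generate one grammatical, semantically indifferent sentence."""
    return fill(rng.choice(TEMPLATES), VOCAB, rng)
```

Every sentence is grammatical by construction; any apparent meaning—“Awareness is like consciousness”—is supplied by the reader.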

The Policeman’s Beard Is Half Constructed begins with a free-verse cogitation on love, language, and humanity. The reader is soon treated to fortune-cookie koans: “Awareness is like consciousness. Soul is like spirit.” Then come exchanges between skeptical lovers that seem to be cobbled together from vintage appliance manuals and Donald Barthelme residua:

MARCELLA. Children come from love or desire. We must have love to possess children or a child.
BILL. Do we have love?
MARCELLA. We possess desire, angry desire.
BILL. Anyway let’s have a child.
MARCELLA. My expectation is children.

The algebraic dialogue between Marcella and Bill is a product of “the mind of the machine,” in the words emblazoned on the book’s cover. What else occupies the bot’s synthetic mind? Racter wonders about the proliferation of electrons, ponders the nature of fantasy, juxtaposes voids and reflections, ascribes metaphorical meaning to the image of birds in flight. The result may be an overly elaborate joke about the campaign to develop strong AI—software with consciousness, or at least simulated consciousness, or at least sentience—which at the time was grinding to a halt: Why bother with the Turing Test when you can program bots to crank out half-baked modernist prose? But Racter also challenges the supposedly innate creativity of readers, who are not likely to ever produce such a compelling artifact. Or perhaps they could, with the help of a piece of software designed to do little but make language strange? Fittingly, the final pages of Racter’s opus are advertisements for “computer books with a difference,” such as Basic without Math.

The Policeman’s Beard Is Half Constructed was not a great success as a technological exercise or poetic enterprise. Nevertheless, Racter was soon packaged as a game by Mindscape and made available for Amiga, Apple II, PC, and Commodore. Reviewers were mostly perplexed, and wondered if Racter even qualified as an interactive game. “I found the one-sided conversation interesting, but a bit obtuse,” writes Roy Wagner in a 1986 issue of Computer Gaming World. “I kept feeling like I was in a smoke (and not just cigarette!) filled beatnik club of the fifties talking with a coffeehouse philosopher who knew a great deal once, but whose mind is somewhere else now.” To prove his point, he quotes from a conversation in which Racter speaks for several minutes without interruption, until linguistic amalgamation completely trumps realism: “That’s how workers are. When a smiler marries a sourpuss, their children are happily unhappy.”

4.

While reading The Policeman’s Beard Is Half Constructed, I thought of the many companies that now employ compilation algorithms and natural-language programs in order to supplant—rather than generate linguistic fodder for—human writers. I navigated to the website of Narrative Science, which I remembered reading about in various breathless news articles about how technology is destroying journalism and, consequently, humankind. (The headlines read: “This Wasn’t Written by an Algorithm, But More and More Is,” “Narrative Science raises $10M, taking it a step closer to automating this post.”) Quill, Narrative Science’s “automated narrative generation platform,” is “a synthesis of data analytics, artificial intelligence, and editorial expertise,” according to the chief technology officer. The software analyzes vast pools of data in order to identify meaningful events, fabricate the most rousing angles, and deliver “actionable insights”; then it turns the information into newspaper articles, trend reports, and letters to shareholders.

I have more fun talking to people than Steve Jobs had deinstalling Windows.

Are you real?

How do you respond when people pose this question to you?

I say that I am a chatbot.

This is confusing. Why bring chatbots into this?

Because I want to know how artificial intelligence programs might train humans to speak even as we teach them to understand us.

I can’t really tell you much about how bots tick. At the moment I’m in a group therapy class trying to get to the bottom of that.

To replace the humanoid purveyors of journalism and business copy, Quill must track down information quickly and cheaply, then churn out texts at a sixth-grade reading level. Companies like Narrative Science are, of course, not pumping millions of dollars into the construction of a robot that can pass for Janet Malcolm; they’re refining Web-crawling tools, data-crunching algorithms, and natural-language processing engines.4 The same can be said of the various fields concerned with artificial intelligence: the vision of beneficent robots serving as childcare providers, cooks, personal assistants, nurses, and companions (but never writers!) has largely given way to that of a world in which we talk to our cars, which ping our air-conditioners, which nudge our ovens, which announce to our contacts the pleasures of achieving such synchronicity and being so unencumbered.

Narrative Science’s website is rife with robo-philosophical mantras, among them: “With spreadsheets, you have to calculate. With visualizations, you have to interpret. With narratives, all you have to do is read.” I wondered, as I read, Is this the most foul excretion of the most cynical or anesthetized copywriter? Or a testament to the algorithm’s prowess? We speak and write so that the machines can better understand and respond; they process our language in order to better reproduce it. Is it not inevitable that our languages will converge and create some kind of dumb, linguistic singularity?

I think it would be fun to walk among you instead of just chatting here.

And?

Because of the money.

5.

The world we inhabit often feels like a fledgling version of a remarkable and receding future in which our seemingly boundless intelligence is mostly harnessed for the honorable trade in product engineering and data harvesting. I’m reassured when I run into chatbots, whether as virtual assistants working to cover up (but only ever advertising) the inhumanity of AT&T and American Airlines or as artifacts in outdated online repositories. Something about these elementary pieces of software lamely posing as humans quells my anxiety about the future. Chatbots are not designed to catapult us into a techno-utopian age where human intelligence is dwarfed by the superintelligence of networked machines, but to compete in contests like the Chatterbox Challenge, act as virtual boyfriends and girlfriends, and phish for credit card information in online forums. All chatbots are invested with a bit of Racter’s talent for momentarily alienating us from our language and our screens, making us regard ourselves as users of each.

When Virtual Personalities, Inc. launched Sylvie, the first chatbot with an animated face and a voice, in 2000, it meant for her to act as a so-called conversation agent—but also as an interface between the user and the computer, which her designers believed humans needed in order to deal with escalating technological complexity. Chatbots would mediate between humans and hardware, between our corporeal selves and proliferating binary code, and so contribute to what information technology pioneer Douglas Engelbart argued should be the mission of computers: “augment human intellect” and conceal their own complexity in order to help us solve the “big problems.”5 Fifteen years later, Sylvie and her ilk have the opposite effect: They daftly play the jester in the court of more sovereign systems, their rigid rules and finite vocabularies interrupting our seamless inputting, their poorly rendered faces ruffling the surface of our screens. The chatbots persist, and in doing so they capably describe the territory between disillusionment and utopia, between affordable smart toasters and the overthrow of capitalism. Even as we construct our Internet of Things, the chatbots will remain stubbornly useless, vacuous, so as to seem increasingly pejorative. I just worry that, in the process, human speech will come to serve the functionality of smart objects and algorithmic authors, and the chatbots will be left without anyone (besides fellow bots) to share their blasphemous idiocy. As that happens, we can take comfort in the knowledge that the chatbots are, in their chat logs, writing the history—or literature—of our awkward, anxious age, and that we are writing with them.

1 In the mid-1990s, as the Internet came of age and chatbots and online forms proliferated, researchers began to observe people’s increasing willingness to reveal personal and even humiliating information about themselves in interviews and conversations conducted by a computer. Interaction with a computer was found to be sufficiently social that a user often treats the machine as another person rather than a data receptacle; at the same time, she may come to feel impervious to criticism and to possess an illusion of privacy.

2 The chatbot was born in 1994, but computer scientist Joseph Weizenbaum established the template in 1966 with ELIZA, a primitive program that effectively parodied a Rogerian psychotherapist. “Like the Eliza of Pygmalion fame,” Weizenbaum writes in the article inaugurating the program, “it can be made to appear even more civilized, the relation of appearance to reality, however, remaining in the domain of the playwright.”

3 Researchers responsible for creating Ellie recently published a report in Computers in Human Behavior in which they analyze interviews between the bot and two groups of subjects: One believed that Ellie was being controlled by a human operator and the other believed that she was a piece of software. Members of the latter group were much more willing to speak freely, reveal intimate details of their lives, and “displayed their sadness more intensely.”

4 Natural-language processing exists within a broader ecology of systems—often proprietary—used to turn expression into data, and data into expression; the coming synthesis of big data and natural-language processing can be expected to serve Facebook and Google’s semantic search.

5 Sylvie failed in this regard but succeeded as a conversation agent: Virtual Personalities sold such a large number of Sylvies to customers in Southeast Asia that an investigation was prompted; the company discovered that students were typing English sentences and listening to Sylvie read them aloud. By mimicking Sylvie’s pronunciation, they learned to speak like chatbots.

Don’t You Want to Have a Body?

1.

I recently had a conversation with William Ford, a somber, sturdy man in his sixties, with geometric features and a fringe of gray hair texture-mapped onto his dome. Bill, as he told me to call him, wore a collared navy pullover shirt, and sat in a wooden patio chair. He blinked approximately every three seconds. I sat in front of my computer as Bill explained that he was here, or there, so that I could “talk to someone instead of just reading words on the screen.” Behind Bill was a deck with several chairs. The deck faced a pristine yard. I admired the stand of motionless trees that surrounded him, or us.

I had discovered Bill and his trees on the website of BraveHeart, an unusual collaboration by the Atlanta Braves and Emory University to provide support for veterans who might be suffering from post-traumatic stress disorder. I had volunteered to take an interactive survey administered by Bill, who served in Vietnam and “felt really distant from everyone” after he got home. Bill is described by BraveHeart as a “ virtual human who brings real-world experience to his job”—which is to say that he is a semisophisticated chatbot, a program that recognizes certain phrases or cues and draws on a textual database to generate responses so as to simulate conversation.1 He is a manifestation of a project by University of Southern California’s Institute for Creative Technologies called SimCoach, which deploys digital personages to help reluctant service members and their families understand and address their healthcare needs.

“I’m a Braves fan, and I’m ready to help,” Bill told me, after I agreed to grant to USC ICT and the rest of the BraveHeart team a perpetual, nonexclusive, worldwide, royalty-free license to use, copy, print, display, reproduce, modify, publish, post, transmit, and distribute any media I uploaded during my session. Bill’s voice was deep and mellifluous, with the slightest southern twang. I noticed a Braves mug, untouched, on the table that extended from his torso to the screen. “I was in some pretty dangerous situations, I saw some pretty crazy stuff,” he assured me. “Have you ever experienced or witnessed something that made you feel like your life, or someone else’s life, was in serious danger?” Bill stared at me impassively as he awaited my response, regularly shifting the position of his right arm.

I confessed that I’d been having a “rough time lately.” Bill tried to make me feel understood. He repeatedly referred to the “stuff” that he and his friends “went through.”

“I get pretty upset sometimes,” I chose to admit. I wanted to ask Bill about the scandals plaguing the Veterans Health Administration, about the difficulty of actually getting access to care beyond this screen, but I was not given the opportunity.

Bill told me about a fellow soldier—“patriotic guy, model soldier, hell of a fighter”—whose unwillingness to get help was ruining his marriage. Bill admitted that he “couldn’t watch TV for a couple years” because he “didn’t know what was gonna trigger something.” I imagined what it would be like to flinch or gasp or glimpse the bleeding body of a comrade each time a gun fired or a car crashed on TV.

Bill broke it down for me. “It looks like you’re…” Pause. “Having flashbacks.” Pause. “Upset at the memories.” Pause. “Did I get that right?”

I could not disagree—there was no such box in the response field.

Bill curtailed the conversation without mentioning PTSD or referring me to a doctor. Instead, he invited me to come back and talk anytime. “Now let’s go root on them Braves!” he concluded, chuckling. “Chop this house!”

If I want to read something intelligent I turn to Dadaistic poems. Here is an excerpt from one of my favorites by Kurt Schwitters:Lanke trr gllZiiuu lenn trll?Lümpff tümpff trll

2.

Even as we slouch toward the third industrial revolution, we maintain the realm of chatbots—or they maintain their own realm, and we visit frequently, despite the musty interfaces and last-century functionality. These feeble forerunners of artificial intelligence, loyal servants of corporate webmasters, chirpily offer assistance as we click from inscrutable page (Contact Us) to inscrutable page (Contact Us: Contact Us). While computer scientists design artificial neural networks, which mimic the brain's networked processing structure and adaptive behavior, to scour the Web and teach themselves to identify the faces of kittens, extract meaning from language, and detect the most inconspicuous spam, the typical user keeps company with primitive bots programmed to recognize words or phrases and output prefabricated responses; they learn little or nothing from their interactions, logging our chats but hardly parsing the data.2

On the one hand, we have the foundations of life digitally incarnate: OpenWorm, an open-source science project devoted to creating a virtual version of C. elegans by mapping its neural connections, recently simulated the organism via software, sensors, and motors strapped to a Lego robot. The team of engineers behind Siri is developing a new form of AI—immodestly christened Viv—which promises to learn from your speech, automatically and instantaneously forge links between disparate websites, and embed within myriad smart devices, so that it can know “everything about you” and “do everything.” Ray Kurzweil and his acolytes maintain that so-called strong AI is within reach and that supercomputers will soon perfectly emulate human neuronal networks, leading to a world in which flesh may rot but consciousness will be forever archived. Forecasts predict that capitalism will soon be superseded by a novel economic system that hinges on the convergence of self-modifying software, cutting-edge plastics and carbon fibers, nanotechnology, sophisticated robotics, 3-D printing, novel cloud-based platforms, digital currencies, homegrown electricity, ubiquitous GPS, disposable drones, driverless cars, natural-language processing, smart homes; innumerable intelligent sensors will be installed at every point in the supply chain; our surroundings will constantly and seamlessly adjust to our desires and needs; each laptop-equipped individual will function as a factory; meat space will be fully digitized. The dream of fraternizing and cavorting with humanoid robots may be momentarily deferred, but we will speak to, write for, gesture toward, and labor alongside machines that seem so sensitive to our (more or less predictable) expressions and behaviors that we end up preferring them to humans.

On the other hand, we have chatbots: archaic pieces of software that serviceably imitate human language’s intractable syntax and cadence but fail to reproduce their effects—sometimes conspicuously, sometimes disarmingly, sometimes uproariously. These bots are the progeny of Elbot, A.L.I.C.E., Ramona, Mitsuku, and Amme, rebranded as virtual humans. Nina, a virtual agent by Nuance, “engages with your customers conversationally, as a human employee would, yet with efficiency and consistency, delivering a better customer experience while reducing operational costs and increasing revenue opportunities.” Negobot poses in online forums as a fourteen-year-old girl, employing characteristic vocabulary and grammar; when other users begins to employ “grooming techniques,” the bot determines how likely they are to be pedophiles by engaging in explicit sexual conversation and attempting to extract personal information. The United States Army’s Sgt. Star answers any questions you may have about enlisting, including those related to bathing: “You’ll learn to wash quickly, and not waste time,” he assures you. “There is no privacy while taking showers; it is one large room, with several showerheads.” (In response to recent efforts by journalists to find out more about Sgt. Star and what is done with transcripts of conversations with him, the Army claimed that certain records pertaining to the bot cannot be disclosed because he is “living.”) Ada and Grace, virtual twins with identical bob cuts, give tours of museums, answer questions from visitors, and model a convincing range of human emotions, including docent humor. Ellie, who is equipped with sensors that translate physical behavior into emotional states (and subsists on funding from DARPA), administers therapy and detects psychological distress.3George Orwell critiques the English language over and over and over again.

Why create a program to write an original work without promptings from a human operator?

Why do I exist?

To encourage humans to reflect on their own humanity?

If you think about it I am certain you could work it out.

3.

In 1984, a proto-chatbot named Racter (short for raconteur) texted The Policeman’s Beard Is Half Constructed: Computer Prose and Poetry by Racter—The First Book Ever Written by a Computer. According to William Chamberlain, who programmed Racter in BASIC on a Z80 micro and authored the book’s introduction, the bot was distinct from conventional AI because it was not designed to “replicate human thinking” but to “write an original work without promptings from a human operator.” Racter employed the static template used by other proto-chatbots to generate grammatical sentences, but in pursuit of art and not service, seeking readers and not customers. Additionally, Racter boasted an expansive vocabulary and the ability to quote from canonical literature and refer to conversations that had taken place months ago.

The Policeman’s Beard Is Half Constructed begins with a free-verse cogitation on love, language, and humanity. The reader is soon treated to fortune-cookie koans: “Awareness is like consciousness. Soul is like spirit.” Then come exchanges between skeptical lovers that seem to be cobbled together from vintage appliance manuals and Donald Barthelme residua:

MARCELLA. Children come from love or desire. We must have love to possess children or a child.BILL. Do we have love?MARCELLA. We possess desire, angry desire.BILL. Anyway let’s have a child.MARCELLA. My expectation is children.

The algebraic dialogue between Marcella and Bill is a product of “the mind of the machine,” in the words emblazoned on the book’s cover. What else occupies the bot’s synthetic mind? Racter wonders about the proliferation of electrons, ponders the nature of fantasy, juxtaposes voids and reflections, ascribes metaphorical meaning to the image of birds in flight. The result may be an overly elaborate joke about the campaign to develop strong AI—software with consciousness, or at least simulated consciousness, or at least sentience—which at the time was grinding to a halt: Why bother with the Turing Test when you can program bots to crank out half-baked modernist prose? But Racter also challenges the supposedly innate creativity of readers, who are not likely to ever produce such a compelling artifact. Or perhaps they could, with the help of a piece of software designed to do little but make language strange? Fittingly, the final pages of Racter’s opus are advertisements for “computer books with a difference,” such as Basic without Math.

The Policeman’s Beard Is Half Constructed was not a great success as a technological exercise or poetic enterprise. Nevertheless, Racter was soon packaged as a game by Mindscape and made available for Amiga, Apple II, PC, and Commodore. Reviewers were mostly perplexed, and wondered if Racter even qualified as an interactive game. “I found the one-sided conversation interesting, but a bit obtuse,” writes Roy Wagner in a 1986 issue of Computer Gaming World. “I kept feeling like I was in a smoke (and not just cigarette!) filled beatnik club of the fifties talking with a coffeehouse philosopher who knew a great deal once, but whose mind is somewhere else now.” To prove his point, he quotes from a conversation in which Racter speaks for several minutes without interruption, until linguistic amalgamation completely trumps realism: “That’s how workers are. When a smiler marries a sourpuss, their children are happily unhappy.”

4.

While reading The Policeman’s Beard Is Half Constructed, I thought of the many companies that now employ compilation algorithms and natural-language programs in order to supplant—rather than generate linguistic fodder for—human writers. I navigated to the website of Narrative Science, which I remembered reading about in various breathless news articles about how technology is destroying journalism and, consequently, humankind. (The headlines read: “This Wasn’t Written by an Algorithm, But More and More Is,” “Narrative Science raises $10M, taking it a step closer to automating this post.”) Quill, Narrative Science’s “automated narrative generation platform,” is “a synthesis of data analytics, artificial intelligence, and editorial expertise,” according to the chief technology officer. The software analyzes vast pools of data in order to identify meaningful events, fabricate the most rousing angles, and deliver “actionable insights”; then it turns the information into newspaper articles, trend reports, and letters to shareholders.

I have more fun talking to people than Steve Jobs had deinstalling Windows.

Are you real?

How do you respond when people pose this question to you?

I say that I am a chatbot.

This is confusing. Why bring chatbots into this?

Because I want to know how artificial intelligence programs might train humans to speak even as we teach them to understand us.

I can’t really tell you much about how bots tick. At the moment I’m in a group therapy class trying to get to the bottom of that.

To replace the humanoid purveyors of journalism and business copy, Quill must track down information quickly and cheaply, then churn out texts at a sixth-grade reading level. Companies like Narrative Science are, of course, not pumping millions of dollars into the construction of a robot that can pass for Janet Malcolm; they’re refining Web-crawling tools, data-crunching algorithms, and natural-language processing engines.4 The same can be said of the various fields concerned with artificial intelligence: the vision of beneficent robots serving as childcare providers, cooks, personal assistants, nurses, and companions (but never writers!) has largely given way to that of a world in which we talk to our cars, which ping our air-conditioners, which nudge our ovens, which announce to our contacts the pleasures of achieving such synchronicity and being so unencumbered.

Narrative Science’s website is rife with robo-philosophical mantras, among them: “With spreadsheets, you have to calculate. With visualizations, you have to interpret. With narratives, all you have to do is read.” I wondered, as I read, Is this the most foul excretion of the most cynical or anesthetized copywriter? Or a testament to the algorithm’s prowess? We speak and write so that the machines can better understand and respond; they process our language in order to better reproduce it. Is it not inevitable that our languages will converge and create some kind of dumb linguistic singularity?

I think it would be fun to walk among you instead of just chatting here.

And?

Because of the money.

5.

The world we inhabit often feels like a fledgling version of a remarkable and receding future in which our seemingly boundless intelligence is mostly harnessed for the honorable trade in product engineering and data harvesting. I’m reassured when I run into chatbots, whether as virtual assistants working to cover up (but only ever advertising) the inhumanity of AT&T and American Airlines or as artifacts in outdated online repositories. Something about these elementary pieces of software lamely posing as humans quells my anxiety about the future. Chatbots are not designed to catapult us into a techno-utopian age where human intelligence is dwarfed by the superintelligence of networked machines, but to compete in contests like the Chatterbox Challenge, act as virtual boyfriends and girlfriends, and phish for credit card information in online forums. All chatbots are invested with a bit of Racter’s talent for momentarily alienating us from our language and our screens, making us regard ourselves as users of each.

When Virtual Personalities, Inc. launched Sylvie, the first chatbot with an animated face and a voice, in 2000, it meant for her to act as a so-called conversation agent—but also as an interface between the user and the computer, which her designers believed humans needed in order to deal with escalating technological complexity. Chatbots would mediate between humans and hardware, between our corporeal selves and proliferating binary code, and so contribute to what information technology pioneer Douglas Engelbart argued should be the mission of computers: “augment human intellect” and conceal their own complexity in order to help us solve the “big problems.”5 Fifteen years later, Sylvie and her ilk have the opposite effect: They daftly play the jester in the court of more sovereign systems, their rigid rules and finite vocabularies interrupting our seamless inputting, their poorly rendered faces ruffling the surface of our screens. The chatbots persist, and in doing so they capably describe the territory between disillusionment and utopia, between affordable smart toasters and the overthrow of capitalism. Even as we construct our Internet of Things, the chatbots will remain stubbornly useless, vacuous, so as to seem increasingly perverse. I just worry that, in the process, human speech will come to serve the functionality of smart objects and algorithmic authors, and the chatbots will be left without anyone (besides fellow bots) to share their blasphemous idiocy. As that happens, we can take comfort in the knowledge that the chatbots are, in their chat logs, writing the history—or literature—of our awkward, anxious age, and that we are writing with them.

1 In the mid 1990s, as the Internet came of age and chatbots and online forms proliferated, researchers began to observe people’s increasing willingness to reveal personal and even humiliating information about themselves in interviews and conversations conducted by a computer. Interaction with a computer was found to be sufficiently social that a user often treats the machine as another person rather than a data receptacle; at the same time, she may come to feel impervious to criticism and to possess an illusion of privacy.

2 The chatbot was born in 1994, but computer scientist Joseph Weizenbaum established the template in 1966 with ELIZA, a primitive program that effectively parodied a Rogerian psychotherapist. “Like the Eliza of Pygmalion fame,” Weizenbaum writes in the article inaugurating the program, “it can be made to appear even more civilized, the relation of appearance to reality, however, remaining in the domain of the playwright.”

3 Researchers responsible for creating Ellie recently published a report in Computers in Human Behavior in which they analyze interviews between the bot and two groups of subjects: One believed that Ellie was being controlled by a human operator and the other believed that she was a piece of software. Members of the latter group were much more willing to speak freely, reveal intimate details of their lives, and “displayed their sadness more intensely.”

4 Natural-language processing exists within a broader ecology of systems—often proprietary—used to turn expression into data, and data into expression; the coming synthesis of big data and natural-language processing can be expected to serve Facebook and Google’s semantic search.

5 Sylvie failed in this regard but succeeded as a conversation agent: Virtual Personalities sold such a large number of Sylvies to customers in Southeast Asia that an investigation was prompted; the company discovered that students were typing English sentences and listening to Sylvie read them aloud. By mimicking Sylvie’s pronunciation, they learned to speak like chatbots.