I often argue that this technological moment is one of tension, of a working through the possibilities and limitations of new technologies while also engaging with the lessons and legacies of antecedent technologies. The utopian me hopes that we’ll push through this tension, arriving at a space of openness and access and opportunity. The dystopian me isn’t so sure. There are a number of factors and funding sources driving innovation, and the trajectory is–as it always was–compelling and unknowable.

But Laurel’s talk of immersion, of sense and synesthesia, is very interesting to me.

I think, for example, of the current push toward 3D televisions and films. Right now, our entertainment industries are very sight-centric, pushing 3D as the major monetization opportunity. Similarly, as we’ve all experienced with our iPads and Xooms, many interface questions connect to touch. Just yesterday, Bret Victor wrote a brief but fascinating must-read piece on interaction design, arguing that our current technologies are “pictures under glass,” that “they sacrifice all the tactile richness of working with our hands, offering instead a hokey visual facade,” that they “[deny] our hands what they do best,” and that we need to transition out of this interface design as soon as possible. (Seriously, read the piece.) I agree. Our technologies are impressively single-minded at the moment. Which isn’t to say that they’re bad; I love the affordances and visions of many of the applications I use on a daily basis. But I worry about the trajectory.

So, reading Laurel, I was struck by the mention of smell. Smell! What happened to smell in our technology? Laurel mentions Morton Heilig’s Sensorama, which seemed both beautiful and bizarre. I had to go to YouTube and investigate.

In that video, Morton Heilig states that there isn’t a modern technology comparable to his vision of the Sensorama. And I’ll extend that argument, asserting that the notion of smell in computing seems at once absurd and retro–as evidenced by Professor Farnsworth’s Smell-O-Scope in the TV show Futurama. (Sensorama, Futurama–these are associative links!) In one episode, Farnsworth invents a telescope that can comically trace smells across the galaxy. Likewise, Futurama often invokes gags regarding the long-forgotten Smell-O-Vision. These jokes work because smell is so completely divorced from any of our entertainment or computing platforms; smell seems irrelevant (and, in the case of Futurama, irreverent) in a world of Word documents and iPhone apps. And while I don’t necessarily want fumes of smoke wafting in my face as I watch an episode of Mad Men, I do think that within these gags and early visions of “the future” we can see some semblance of a forgotten potential. And like Bret Victor, I worry about this moment of touch–about its ableist tendencies and the paradigms it’s/we’re building. Just as Aristotle offers Laurel a means of (re)thinking through HCI, I think that the Sensorama might (bizarrely) remind us that there are significant implications to the sensory technologies upon which we teach, read, and produce.

I recently read Thomas Bass’s The Eudaemonic Pie, and I was struck by a particular passage which connects to much of what we’ve read. Early in the book, Bass cites Joseph Weizenbaum, who wrote:

“The computer in its modern form was born from the womb of the military. As with so much other modern technology of the same parentage, almost every technological advance in the computer field, including those motivated by the demands of the military, has had its residual payoff–fallout–in the civilian sector. Still, computers were first constructed in order to enable efficient calculations of how most precisely and effectively to drop artillery shells in order to kill people. It is probably a fair guess, although no one could possibly know, that a very considerable fraction of computers devoted to a single purpose today are still those dedicated to cheaper, more nearly certain ways to kill even larger numbers of human beings” (65).

In our readings, this has become a familiar refrain, and it’s something that I see echoed throughout the study of new media. As these scientists build machines, they wrestle with the guilt of the past and the potential misuse of this new technology. With each additional reading, it seems that the extended metaphors and very specific usage plans are a sort of prescription, a way of communicating an ideal use and means of situating technology productively.

Today, those metaphors extend well into the nuance of interface. Kay and Goldberg were hugely interested in the potential of the screen and of a GUI, and we can easily see the impact of that effort. Now, however, much of that focus has shifted. I found it striking that today, shortly after finishing Kay and Goldberg, a blog post titled “The Metaphors Breaking The Future” appeared in my Twitter stream. If you’ve used the latest incarnation of Apple’s operating system, you’ve no doubt seen a host of visual metaphors–many of which do little to advance the technology or rethink interface. Apple’s iCal, for example, uses the metaphor of a desk calendar, which is helpful in reminding us that iCal is, well, a calendar app. But the metaphor also hinders the technology: By recreating the paper artifact, it forces us to think in terms of a specific chronology (the paper representation of a month). Sure, this metaphor allows for ease of use, but across 10, 20, or 30 years, the inertia of metaphor begins to limit us. And, as Jon Gold mentions in his blog post, those metaphors are often extended into other spaces/applications, limiting the ways we can think of and use technology. This is exactly the kind of thing that Nelson seemed to be pushing against.

There’s a great moment in “Personal Dynamic Media” where Kay and Goldberg write that “If the ‘medium is the message’, then the message of low-bandwidth time-sharing is ‘blah’” (394). Could we today argue that the metaphor is the message? And if so, how do we push our technologies past the tenor and vehicle to find a digital space that might accommodate both Nelson’s vision and Kay/Goldberg’s familiar interface?

And it’s a striking contrast: There is, at moments, an infectious optimism, a thought that we’re entering a brilliant new era. Both Wiener and Licklider see the potential of the computing machine, whether through pathology or symbiosis, to dramatically shift the ways in which we work. But both are also working within the shadow of war: the atom bomb still singes memory, the Korean War so recent, the Cold War blooming. We know of and speak of the military/industrial complex, but it’s compelling to see it enacted here, to see these scientists wrestling with the promise of technology in the age of destruction. In particular, Wiener’s allegorical conclusion is absolutely haunting.

The ways in which software has evolved from this moment are equally compelling. I was struck by Licklider’s vision of the computer as something beyond mere computing:

“The question is not, ‘What is the answer?’ The question is, ‘What is the question?’” One of the main aims of man-computer symbiosis is to bring the computing machine effectively into the formulative parts of technical problems. (75)

And, as a writer, I find this assertion compelling because it–and the concept of symbiosis as a whole–looks beyond the normal metaphors for computing, an area where, I think, our writing technologies still haven’t fully moved. If we look at our writing software, much of it remains embedded in the typewriter metaphor, in a linear and vertical movement down the page. And within this metaphor are a number of assumptions about how we think, how we write, and the role of our software(s) in those processes.

Now, this isn’t to say that there aren’t many types of software that work with the traditional word processor to add complexity to the writing process. Mind mapping applications, for example, offer a different way to work through invention. But still, as it’s conceived, academic discourse typically returns to the safe space of the page-based metaphor.

So maybe this is one way I circle back–from Licklider and Wiener–to pedagogy. Maybe this sort of vision requires us to reevaluate our comfortable relationships with technology, to assert that the standard word processor really is too much a typewriter metaphor and is stifling symbiosis. Maybe this is the sort of thing that should challenge us to rethink our technologies and our (pedagogical) relationships to them.

Or maybe it’s deeper than that. Maybe mobile computing, as an example, is symbiosis realized. Maybe through mobile technologies we move past the notion that the computer is a desk-bound (or power-outlet-bound) tool that helps us complete a single task (working toward the answer); mobile technologies instead situate us within multiple technologies in multiple settings–reframing and influencing the ways we ask questions.

And still, I come back to questions, and the contextual concern for these two pieces is, in many ways, this: How do we move beyond the shroud of war?

Each summer, I take part in a few friendly poker games with a group of interdisciplinary colleagues. I was invited to these games by a mathematician friend who is, by all measures, eloquent and brilliant. He also teaches probability and game theory, which means I wasn’t keen to wager money against him at the poker table. When he first invited me, I declined.

“I think you’re assuming too much,” he replied. “I’ll play poker against a group of mathematicians any day. But writers? Writers worry me. A mathematician might know how to read the numbers, but a writer knows how to read people.”

And those words echoed for me when I came across two particular sentences in “As We May Think”:

If scientific reasoning were limited to the logical processes of arithmetic, we should not get far in our understanding of the physical world. One might as well attempt to grasp the game of poker entirely by the use of the mathematics of probability. (42)

Because, as I read it, Bush’s piece is about the promise of computation and the relationship between those technologies and the core concerns of knowledge work. Some of the struggles he mentions–“our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose” (37)–can still be heard in many of our fields today. Likewise, his points about the abundance of documented knowledge–and the difficulties in working through it–remain just as relevant. The arrival of digital space has produced this wealth of information (and its economy of attention), and yet we’re still working through its cataloging technologies: search engines, tagging, curating, etc. These are the same concerns of print, of the library. The answer to this problem, I might posit, was never tied up in a particular or potential technology. Rather, it’s in the ways we might use these tools to index, to inform, to create associative meaning.

And association. It’s one of the ideas at which Bush arrives: the potential of the memex to have two juxtaposed panes that facilitate connections. We could take this back to Aristotle’s topoi–his strategies for argumentation–and particularly those of similarity and difference. That advice remains a major component of writing instruction today, and when Bush taps into it (“This is the essential feature of the memex. The process of tying two items together is the important thing.”), he’s connecting his vision of technology to the classic problems of researching and of communicating. It’s striking how these concerns essentially bookend his piece.

Janet Murray speaks to this same concern in her introduction, and I found her presentation of the rhizome especially helpful:

[Deleuze and Guattari] suggested a new model of textual organization to replace the ideologically suspect hierarchies of the print-based world. The new ideal of form was the rhizome–an erudite word for a very down to earth thing: a potato root system. It was as if Deleuze and Guattari had dug beneath the forking path garden of Borges (which after all was still a hierarchy of sorts) and come up with an even more profound labyrinth, but one that offers the hope of knowability and a metaphor of healthy growth. (9)

She’s speaking to the poststructuralist tendency in the humanities, but the message is one of generative work:

The humanist project of shredding culture had found a radical new pattern of meaning, a root system that offered a metaphor of growth and connection rather than rot and disassembly. (9)

This shift is powerful, but it speaks to the ways that technology (and I use that term broadly, as the rhizome, a concept, is a technology itself) can shift our thinking and our practice… if we’re willing to engage the discomfort often associated with it. Within Bush’s vision, we should see a challenge worth rallying to–but also a reminder that the new concerns are much like the old concerns. Our questions today are the questions of old: How we index, arrange, and record material. How we make meaning. How we communicate. But these questions are also generative and purposeful. They are questions that don’t and can’t have answers. And it’s within them that we see not just our technologies–but ourselves. We are all humanists.

I also want to note a couple of problems with Bush’s work, in particular the narrative of technological progress (the war is over, the future awaits, and the way forward requires a capital investment in these technologies!) and the ways that technologies reinforce social structures and values. In each of his examples, the scientist is a he and the person most replaceable by technology is a “girl”–one who “strokes its keys languidly” and looks about with “a disquieting gaze” (40). Acknowledging that this was written in 1945, I think it’s important to address the fact that these values become reinforced with the technologies. The machines, and the institutions producing them, reinscribe our worldviews.