The New Digital Storytelling Series: D. Fox Harrell

In the penultimate part of Filmmaker and the MIT Open Documentary Lab’s interview project with prominent transmedia figures, D. Fox Harrell, Ph.D., Associate Professor of Digital Media in the Comparative Media Studies Program and the Computer Science and Artificial Intelligence Laboratory at MIT, answers our questions. Harrell’s research explores the relationship between imaginative cognition and computation. He develops new forms of social media, gaming, computational narrative, and related computational media systems based in computer science, cognitive science, and digital media arts. The National Science Foundation has recognized Harrell with an NSF CAREER Award for his project “Computing for Advanced Identity Representation.” He has worked as an interactive television producer and as a game designer. His recent book, Phantasmal Media: An Approach to Imagination, Computation, and Expression, is a Fall 2013 publication with the MIT Press.

For an introduction to this entire series, and links to all the installments so far, check out “Should Filmmakers Learn to Code,” by MIT Open Documentary Lab’s Sarah Wolozin.

MIT OpenDocLab: How did you become a digital storyteller?

Harrell: I began my career using artificial intelligence (AI) approaches for artistic aims, such as communicating subjective experiences and reflecting on social issues, rather than the rationalistic or utilitarian aims that AI usually serves. I began by working on narrative, since this is an area of AI research that can be harnessed for artistic expression. I’ve always strived to have facility in multiple disciplines and practices. I have a background in art, computer science, and cognitive science; I value coherence and have always sought to understand the synergies and relations between these, making sure to look at how those areas support and reflect one another. For example, constructing AI data structures can be a way of engaging in meaning production, not just a technical activity. It’s a way of structuring how people understand the world, then externalizing that understanding in the system’s structure. The way that people use the system also impacts issues of social meaning. So it’s all quite integrated, in a way.

MIT OpenDocLab: What are the most useful skills for an interactive storyteller? What are the tools of the trade?

Harrell: First of all, I think that social, cultural, and critical awareness and sensitivity are key. You cannot get anywhere without addressing meaning and the world around you. Sensitivity to the human condition comes first, but then you need to express it using an interactive system. Toward this end, I think that computational literacy is quite important. Let’s think about this using film as an example: clearly, you can create films without traditional cinematic literacy. For example, think of Stan Brakhage just dropping moth wings onto film stock, right? You can do a lot of different things; the lack of that particular form of understanding doesn’t preclude someone from attempting to make works in the field. But if you want to do work that is in dialogue with some of the affordances of the computer, then computational literacy is important because it gives us ways of thinking that are useful. I’m not just talking about abstract data-structuring or coding procedures; I am talking about mental frameworks for thinking through how information can be structured and operated on in systematic ways more generally. That’s computational literacy. And computational literacy has to be accompanied by other, broader literacies of image production as well.

It’s one thing just to have a web repository of linear clips, for example, compared with something where, from the bottom up, you’ve thought about the interesting ways to navigate the material. So, beyond coding, authors need fluency with different notions of navigating materials. For example, the media assets in a database and those same assets in the order in which they are played are two different things: what Espen Aarseth calls the “textons” and “scriptons,” respectively. Thinking about that difference is one useful way to consider some of these kinds of works. That way of thinking doesn’t rely on coding or specific tools, but it might come more naturally to people who think computationally – who are computationally literate. It’s a type of metacognition of computing that allows people to look at both the output and the computational structures and procedures that let the data live and breathe.
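Aarseth’s distinction can be sketched in a few lines of code. In this toy example (the clip names and traversal rule are invented for illustration), the stored media assets play the role of textons, and any particular ordered run presented to a viewer is a sequence of scriptons:

```python
import random

# Textons: the underlying database of media assets, with no fixed order.
textons = {
    "intro": "Opening shot of the city at dawn",
    "interview_a": "First subject describes the flood",
    "interview_b": "Second subject recalls the evacuation",
    "archive": "Archival newsreel footage",
    "closing": "Aerial shot receding from the coastline",
}

def traverse(seed):
    """One possible traversal rule: fix the opening and closing clips,
    shuffle the middle. The ordered list returned is the scripton
    sequence -- what the audience actually sees on a given run."""
    middle = ["interview_a", "interview_b", "archive"]
    random.Random(seed).shuffle(middle)
    return ["intro"] + middle + ["closing"]

# Two runs over the same textons can yield different scripton orders.
print(traverse(seed=1))
print(traverse(seed=2))
```

The point of the sketch is only that the database and the traversal are separate design decisions: the same textons support many different scripton sequences, which is exactly the space an interactive author must think through.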

D. Fox Harrell

MIT OpenDocLab: When you start an interactive project, how do you put together a team?

Harrell: Each project is unique and begins with its initial conceptualization. My work is interdisciplinary and builds on dialogue, exchange, and group synergy. When constructing any kind of interdisciplinary team, you have to have dialogue, you have to really build into the project the ability to understand the worldview of the other practitioners.

I am quite interested in this kind of dialogue actually and recently I was the Principal Investigator on a joint workshop of the National Science Foundation and the National Endowment for the Arts, gathering an international group of 50 thought leaders in information technology and the arts: computer scientists, artists, cognitive scientists, learning scientists, and others, to address the question of what art/science hybrid research looks like in the future. Some useful models emerged from that workshop.

Putting together an interdisciplinary team doesn’t mean you have to have complete expertise in every area. A researcher at the workshop named Gerhard Fischer described a model where you have a main brain and a little brain on the side with knowledge in another area. It seems like this model could be quite useful for interactive filmmaking because sometimes it’s not about the details or thinking in one mode of media production, or even the interactions the system enables, but rather it’s about how people frame what the system does.

One of the things we found in these workshops is that too often there are projects in which you have the technologists and then bring in the artists afterward; it’s seen as outreach, or just bringing in people to put a shiny veneer on a project. The same was found in reverse: projects that start with artists and then bring in technologists just to implement the artists’ vision.

Through shared computational literacy, an interdisciplinary team can come together with an understanding of the characteristics of the medium of expression. And that’s not to deny the role of amateur production, either. I don’t want to say categorically that everyone should have the same literacies, but I do want to say that it’s an important factor to consider for projects. I think it’s fine to put together teams, and it’s best if they’re complementary, but if you want that shared vision, it should be based on some kind of mutual understanding.

MIT OpenDocLab: Are there any systems that have inspired you? How does the role of a director change in such systems? What are the creative challenges?

Harrell: Let’s begin with an example: Steffi Domike, Michael Mateas, and Paul Vanouse created an artificial intelligence-based documentary system called Terminal Time at Carnegie Mellon University in the 1990s. Terminal Time produces documentaries of the last millennium of history, so it’s a very broad documentary! It was a collaboration between a documentary filmmaker, a computer scientist, and an artist, but each of them was really an interdisciplinary practitioner with experience in multiple fields. What the system did was periodically ask the audience multiple-choice questions. The audience would applaud in response, and a volume meter was used to associate that applause with one of the choices. So, the system might ask controversial questions like, “What’s the biggest problem within the last millennium? Technologies taking over, religion, people’s social perspectives, or social roles destabilized by feminism?” They’re quite politically charged points of view! Based on what the audience chooses, the AI system takes their choice and uses it to create a narrative that rhetorically exaggerates their point of view. This means that when the system is performed, it doesn’t just give you what you want; it gives you what you want in a form that might be unpalatable. So let’s say you indicate you want a history that lacks all diversity: the system might push that even further, so that the documentary becomes very racist, like a White Supremacist kind of history. And if you say technology is the problem of the millennium, then it becomes a complete Luddite kind of history.
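The applause-meter voting Harrell describes can be sketched simply: measure a volume level for each option while it is announced, and let the loudest one steer the narrative. This is only an illustration of the idea, not Terminal Time’s actual code, and the volume readings are invented numbers:

```python
def tally_applause(options, volume_readings):
    """Return the option whose measured applause volume is highest.

    volume_readings maps each option to a peak volume level (say, in dB)
    captured while that option was being announced to the audience.
    """
    return max(options, key=lambda opt: volume_readings[opt])

options = [
    "technologies taking over",
    "religion",
    "social roles destabilized by feminism",
]

# Hypothetical peak levels from the applause meter for each option.
readings = {options[0]: 72.5, options[1]: 64.0, options[2]: 81.3}

winner = tally_applause(options, readings)
print(winner)  # the loudest-applauded choice seeds the generated narrative
```

The interesting design move in Terminal Time happens after this step: the winning choice is not simply honored, but rhetorically exaggerated by the narrative generator.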

The idea is that they’re critiquing that kind of authoritative, voiceover-driven work, saying that there’s no ultimate authority: these stories, these documentaries, are based just on the rhetorical spin that you create. One thing the authors emphasize is that understanding the system’s functionality matters, because if people don’t grasp that the system is doing this complex AI processing to construct the narrative, rather than playing back something canned, then they get a lot less out of it. Understanding that it is an AI project, that there’s some mechanism behind it, and perhaps even what that mechanism is, helps the audience to understand the authors’ ideological perspectives as well.

So, to make a long story short, who’s the director in Terminal Time? The story is co-directed by audience, system, and author in a very technical and meaningful sense. The authors are more like curators – or authors of a jazz score that the audience and system fill in by directing content selection and by interpretation.

MIT OpenDocLab: How does the role of the audience change?

Harrell: I’ll answer that by giving another example. I’ve developed a platform called the GRIOT system for building AI-generated multimedia narratives and poetry, and I’ve learned that people interpret the same material differently depending on whether they think a human has generated it or a computational system has. Often people have gasped, quite surprised at the quality of the system’s output. The poetry can have a fixed narrative structure, for example, but each time you run it the poem is generated differently, with completely new metaphors, tone, and other forms of figurative language. The poems I created with GRIOT engage the subjective nuances of the themes they address. However, in one case an author using GRIOT implemented poems generated to have periodic repetition of certain lines, like the refrain “Nevermore” in “The Raven” by Poe, right? Repetition is something you would naturally want to avoid, having too many repeats. In this case, it was built in purposely, yet people read it as perhaps a flaw in the system, as if the author hadn’t checked the system’s output carefully enough. They assumed it was computational, since computers tend to do things in a procedural, repetitive way, but actually it was something the human, the artist, wanted. So just the framing of the work as a computational system changed the way it was read. So, the audience changes because the audience looks not just at the content, but at the computational, or interactive, nature of the content. They look at whether it is meaningful and what the role of the computer was in its creation.
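The idea of a fixed narrative structure whose figurative content changes on every run can be illustrated with a toy generator. This is not GRIOT’s actual algorithm (GRIOT is built on much richer semantic machinery); the skeleton and phrase lists below are invented purely for illustration:

```python
import random

# A fixed narrative skeleton: the structure never changes across runs.
skeleton = [
    "At dawn, {subject} was {metaphor_a}.",
    "By noon, every memory had become {metaphor_b}.",
    "At dusk, {subject} let go, {metaphor_c}.",
]

# Figurative material to vary on each run (all phrases are invented).
lexicon = {
    "subject": ["the harbor", "her voice", "the old house"],
    "metaphor_a": ["a closed fist", "a map with no roads"],
    "metaphor_b": ["salt on the tongue", "a window painted shut"],
    "metaphor_c": ["like rain returning to the sea",
                   "like a key dropped down a well"],
}

def generate_poem(seed):
    """Fill the fixed skeleton with randomly chosen figurative phrases.

    The narrative structure is constant; the metaphors differ per seed."""
    rng = random.Random(seed)
    choices = {slot: rng.choice(words) for slot, words in lexicon.items()}
    return "\n".join(line.format(**choices) for line in skeleton)

# Same structure, different figurative language on each run.
print(generate_poem(seed=3))
print()
print(generate_poem(seed=7))
```

Note how a deliberate repetition, like reusing `{subject}` in the first and last lines, is a structural choice by the author, yet a reader who assumes machine authorship might misread it as the generator failing to vary its output, which is exactly the framing effect Harrell describes.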

And all of this goes beyond computational literacy. We need to understand the values that go into the computational structures of systems as well. There might be some production techniques that are recognized as cliché, or seen as passé in some way, if you’re literate within a field. It’s entirely a subjective cultural matter to determine what’s vital and what’s passé in domains like filmmaking. Say you have some effects from the ’70s: you see them in one film and they are considered kitsch, but in another they are retro-cool. You know it because you can tell, but how do you really differentiate? There’s a similar sensibility that we might call computational-cultural literacy: a sensibility that gives practitioners insight into how the system is going to be read by an increasingly culturally and computationally literate audience as well.

MIT OpenDocLab: How do you find funding for digital interactive storytelling projects?

Harrell: My funding has primarily come from foundations such as the National Science Foundation and the National Endowment for the Humanities, along with funding from universities. This means that there need to be contributions from the work to areas in the sciences and humanities, but also that the arts can play roles in such fields and vice versa.

MIT OpenDocLab: Where do you see the future of this field going?

Harrell: I’ve just written a book called Phantasmal Media: An Approach to Imagination, Computation, and Expression (MIT Press) that speaks to this question. In Phantasmal Media, I argue that the great expressive potential of computational media comes from their ability to construct and reveal phantasms: blends of cultural ideas and sensory imagination. I think the next frontier is prompting phantasms, the kind of combined mental imagery and cultural worldviews that art forms such as literature and cinema evoke, but in uniquely computational ways. That is, interactive stories are one means by which computers can be subjective, cultural, and critical systems that help us to better understand our lives and each other.

Online social networks and video games are prevalent in today’s society, and both video game characters and social networking profiles can potentially help people better understand others’ experiences, delivering meaningful experiences that enable critical reflection on one’s own identity and on others’ experiences related to identity. However, merely customizing graphical representations and text fields is insufficient to convey the richness of our real-world identities.

The Living Liberia Fabric, produced in affiliation with the Truth and Reconciliation Commission (TRC) of Liberia, is an interactive, web-based narrative supporting the goal of lasting peace after years of civil war (1979-2003). It links concerns for liberation, dignity, and the future with needs for cultural foundations, human rights, truth, and reconciliation.

Loss, Undersea is an interactive narrative/multimedia semantics project by Fox Harrell in which a character moving through a standard workday encounters a world submerging into the depths — a double-scope story of banal life blended with a fantastic Atlantean metaphor. As a user selects emotion-driven actions for the character to perform, the character transforms — sea creature extensions protrude and calcify around him — and poetic text narrating his loss of humanity and the human world undersea ensues.

The Generative Visual Renku project presents a new form of concrete, polymorphic poetry inspired by Japanese renku poetry, the iconicity of Chinese character forms, and generative models from contemporary art. Calligraphic iconic illustrations are composed by the system, under both visual and conceptual constraints, in response to user actions, forming a fanciful topography that articulates the nuanced interplay between organic (natural or hand-created) and modular (mass-produced or consumerist) artifacts that saturate our lives.