Saturday, August 16, 2008

Expanded cinema isn’t a movie at all: like life, it’s a process of becoming, man’s ongoing historical drive to manifest his consciousness outside of his mind, in front of his eye. One can no longer specialize in a single discipline and hope to truthfully express a clear picture of its relationship in the environment.

— Gene Youngblood, Expanded Cinema

INTRODUCTION

"Means and Meditations" is an 11-minute experimental video, a journey through three abstract worlds: subterranean, terrestrial and atmospheric. The project, which has consumed the better part of the past year, is both a burden and a joy. Mostly, it is an education. Some of the lessons are purely technical — discovering the potential and limitations of video editing software, for example, or the properties of acrylic paint. Other lessons are intellectual — deciphering the mysteries of narrative that Roland Barthes describes in "Image-Music-Text," for instance, or immersing myself in the poetry of Stan Brakhage’s visual language. Creating the video is an evolving process, beginning with the idea, organizing it into a coherent narrative structure, inventing the imagery to express that narrative, and assembling it into a finished product. Each step relates to the next, and all are equally important.

While working on my video, I simultaneously keep track of all the material experiments and intellectual exercises made along the way. I rigorously document the process of its evolution, acknowledging that product cannot exist without process. I better understand how I operate as an artist by scrutinizing the choices I make. Creativity requires patience — sometimes, taking the day off and not working is more fruitful than sitting all day in front of the computer, endlessly making and remaking the same image. It requires curiosity to explore new media when old media fail and to seek inspiration in the works of others. And it requires the discipline to examine your work with a critical eye, to be your own harshest critic. I believe Means and Meditations reflects all these things. It is by far the most visually complex, most deliberately constructed, most intellectually and emotionally compelling project I have created to date. This report chronicles its conception, evolution and completion.

PROJECT DESCRIPTION

"Means and Meditations" is conceived as a progression through and transcendence of three environments, beginning in the subterranean, surfacing on the terrestrial, and then ascending into the atmospheric. Ultimately, the imagery evaporates, suggesting another plane, a kind of cosmic ether. It should be pointed out that this description of the video is my own interpretation. I do not intend it to be the only valid means of interpretation. I fully expect, rather, that each viewer will draw his or her own inferences, which may or may not coincide with my own.

The video opens in blackness, out of which a crackle escapes, pricking the ear. In the distance, a white form emerges and expands, hovering and quivering like a spectral light at the far end of a cave. The form recedes; the blackness consumes the frame again. Another form emerges, smaller, more tentative, and slips away with a squeak. A third form appears, pulsing with life as it expands and contracts, its form gradually shifting. The quivering form continues to divide and grow, under the pull of some mysterious mitosis, driven by a rhythmic, mechanical pulse. Bioluminescent cells emerge and multiply until the entire screen is filled, swirling faster and faster as they become a maddened swarm. Color seeps in, first green, then blue. Bacteria? Viruses? Protozoa? Something not of this world? The visual texture and rhythm accommodate new elements that emerge — a spring, a geyser, a riverbed, a channel — while white noise fills the soundtrack. Swirls of white bubble to the surface, while the soundtrack crackles with kinetic energy. There is a transcendence of strata.

The frenetic imagery and pace recede, dissolving into new imagery: rich, painted surfaces reminiscent of Landsat photography. A lush, blue and green world gently wavers below as it glides past, while the sound of water dripping echoes in the distance. Slowly, it slips away, dissolving into another landscape, still with blues and greens, but also browns and yellows. Layers materialize, adding depth; the dripping dissipates. The landscape palette becomes muted. More layers appear, their surfaces textured, their contours sharp. They follow their own trajectories, some mirroring the base layers, others traveling in opposition, still others following diagonal paths. New sonic effects emerge, suggesting winds whistling aloft and distant short-wave radio transmissions. Yet the overall pace remains languid, tranquil; there is time to linger and pick out interesting elements, taking pleasure in the colors and textures on the screen as the imagery shifts from cool blues and greens to warmer reds, oranges and earth tones. Darker, jagged forms gradually congeal, moving in opposition to the landscapes below. Distressed and dirty imagery — reed-like scratches, swirls and rays — appears, urgent and unsettled. Colors become murky; forms crowd the frame, competing for dominance. The soundtrack vibrates with unidentifiable digital noise; velocity increases.

The trajectory of the imagery shifts from horizontal to vertical, suggesting ascent. Streaks of yellowish green pulse and glow like shafts of light, while layers of color beneath shift from red and orange to blue and green. Low-frequency tones resonate. Cell-like forms appear, reminiscent of those in the first section of the video, as the upward trajectory accelerates. Softer patches of color seep into the frame, hover and then slowly dissolve. The palette shifts from warmer tones to cooler ones. The velocity slows as the forms begin to draw closer. A wider cosmos materializes, a multi-colored nebula, or perhaps a star-filled galaxy, maybe another universe entirely. It pans and expands, enveloping and consuming. The low-frequency tones give way to a rush of white noise as the picture evaporates into darkness, revealing another void. It is an end, and a beginning.

CONCEPTUALIZATION AND ORGANIZATION

The video’s genesis begins as any artistic endeavor might, with an idea or theme. In this case, the idea is of a journey between strata or worlds. Like many visual artists, my task is compounded by the fact that I will not be relating this concept with words but with abstract symbols. In order to convey this, I must craft a new vocabulary, a strictly visual one, to describe the environment that my video (and those who see it) will inhabit. I begin as any writer would, taking an idea, describing it as fully as possible, then using those descriptors to define a series of actions and characteristics that will drive the narrative from one point to another. This process will guide me as I create my filmstrips and serve as a basic organizational principle once I begin assembling the video itself.

The language of the image

In his book "Expanded Cinema," Gene Youngblood describes the artist as a design scientist who creates a new visual language in order for the viewer to experience and understand what is seen. I liken the design scientist’s role to that of an architect. The video, like the physical structure, must be designed to fulfill a function (in the case of the video, to contain and convey the narrative), as well as evoke thought and emotion through material choice and construction (whether that material is warm, textured brick versus cold steel and glass used in the structure, or washes of cool blue and green acrylic paint versus a dirty, mottled photocopied pattern on the filmstrip). As a design scientist, my first responsibility is to define and codify that visual language for myself. It is a months-long process that begins before a single frame of film has been created. I begin by describing the film — first informally to friends, then formally in a written proposal — and sketching out a three-part narrative. Although this is a classic structure, I do not intend for the video to adhere to convention by culminating in a climax and then neatly resolving itself. The video begins in one stratum, lingers briefly, transcends to the second, reveals that layer, and then ascends to the third. The conclusion, deliberately ambiguous and open-ended, suggests another beginning as much as a resolution.

Already, a language for the video is emerging: strata, worldly, cosmic, transcendence, ascending. I push at this. These words suggest an upward mobility, a purposeful trajectory. But, if the cosmos — the ultimate outside — is the destination, where does the journey begin? From within. Or in a worldly sense, from beneath, from the fiery core of the earth to the surface of the land. This subterranean world does not have to be literally fiery or hot, but it does suggest that from out of the blackness, an energy emerges, rises to the terrestrial surface, and from there, ascends to the cosmos.

During the initial image-making process, I adhere loosely to these themes. I am more interested in generating content (and experimenting with various media) than I am in adhering to prescribed categories. Once it is time to assemble the images into a video, I face a quandary. How am I to get from A to Z, much less decide which filmstrips and scans are subterranean, terrestrial or atmospheric (to say nothing of where each might reside on the video timeline)? I solve this problem by constructing a storyboard, something traditional animators and filmmakers have used for decades as an organizational tool for narrative construction. For each of the scanned images, I capture a digital still image, which I then print as a 3-inch-by-2-inch color cutout. For the filmstrips transferred with the Steenbeck, I capture stills at points where the imagery is particularly appealing and print these as well. On a large 2-foot-by-4-foot board, I draw three large, intersecting rectangles, one atop the other, each representing one of the narrative strata. I consider each printed still, asking of each: which subsection does the image belong to, and where, hierarchically, does each image exist on the storyboard in relation to the other stills?

Certain images readily lend themselves to specific positions on the storyboard. For example, the applied paper filmstrip, manipulated digitally to resemble a spectral white orb that throbs and grows, suggests a mysterious energy that emerges from the darkness of the subterranean world. It becomes the initial image of the video. A painted 35mm filmstrip, scanned and animated, bears an uncanny resemblance to a cosmic nebula. I use it as the concluding atmospheric image. Other images, however, are more troublesome. Is a particular painted, scanned image of filmstrip terrestrial or atmospheric? What strata should the photocopied and applied paper images occupy?

Organizing the timeline ultimately takes five attempts before I am satisfied with the overall visual order. It also provides an initial weeding-out of visual elements. In each of the five attempts at organizing the storyboard, I discover that certain visual elements simply will not fit within the overall visual construct. Sometimes, this is because certain images are too similar to elements I have already used; other times, the imagery is visually appealing on its own, but too dominant to integrate into the overall narrative. The narrative diverges in the terrestrial section between strips painted with warmer colors like browns and reds, and strips painted with cooler blues and greens. These two distinct visual threads converge at the transition to atmospheric, then part again. Here, the imagery is divided between painted strips and those with photocopied or paper imagery. This will pose a problem once I begin assembling the visual elements in the computer, since I intend to follow only a single narrative thread. But at least I now have a roadmap to guide the assembly process.

Assembling the narrative

Having created this map on paper, I begin constructing the video in Final Cut Pro. The initial assembly involves a simple laying end-to-end of each visual element; where the narrative diverges on paper, I create a separate, tandem layer, which I will later attempt to reconcile. I determine a basic animation behavior for each element (horizontal or vertical pans, zooms) but leave the detailed effects (transitions, velocity, layers) until later. A complete, if rough, video slowly emerges. As it does, extraneous visual elements — particularly in those areas where the narrative diverges on paper — begin to fall away. The editing process in this second stage is, in many ways, intuitive. Some elements I cannot reconcile with others. Other visual components are redundant.

Once the basic linear narrative structure is complete, the video requires further refinement and articulation. To this point, I have established the visual symbols of representation without endowing each element with its own set of characteristics governing motion (directional and temporal), appearance (focused or blurry, large or small), and so on. The video still lacks transitions that will allow for seamless transcendence between individual elements and strata — and most importantly, cannot yet be fully “read.” I need to expand my mental image sets if I am to develop more visually articulate imagery.

With the video playing, I compile a list of descriptors — adjectives, mostly, but also nouns, verbs and the occasional phrase — that the imagery inspires as it progresses from beginning to end. I repeat the process a second, and then a third time, until I have compiled a list of 55 words and phrases. From this, a vocabulary emerges, as do sub-narratives (back stories) concerning certain elements within the video (for example, a bank of “clouds,” a recurring motif during transitions between terrestrial landscapes, harbors spirits or specters watching over the viewers). The characteristics that each element possesses fit almost seamlessly within the larger narrative already established. The terrestrial movement advances through distinct biomes, from temperate to arid: a rainforest gives way to deciduous forest, and savannas give way to deserts. Even sub-narratives that I eventually discard (the cloud specters) are useful in that they force me to think of the various visual elements as living, breathing characters, rather than flat, inanimate forms.

Returning to the construction of the video, I question the visual elements a second time. The first painted and scanned element of the terrestrial section reminds me of algae and photosynthesis; it is primordial, tropical. Given these characteristics, I ask: how does algae behave? How does photosynthesis work? How might this element move, form layers, etc.? If the narrative structure is a three-part journey and the elements are its vocabulary, then these specific characteristics are its grammar — all of them crucial elements not only in constructing the video, but in its interpretation as well. For example, algae in a pond only move when stirred by a stiff breeze that causes ripples in the water. Although it teems with life on the microscopic level, it appears languid, lazy, simple and staid when seen with the naked eye. This first terrestrial image, I decide, ought to reflect this by moving slowly across the screen. I scrutinize each image in similar fashion, and as I do, the video comes to life.

IMAGE-MAKING

To create the imagery in the video, I use three techniques: an applied paper process, photocopying onto clear 16mm leader and painting on 16mm and 35mm leader. While I rely upon many conventional cinematic effects to assemble the video, I do not employ a film or video camera to generate any actual imagery.

Applied Paper Process

One of the primary techniques I use (which I used first in my prethesis video, "Floating in the Ether") involves lifting printed images from newspapers and magazines using 16mm splicing tape. I refer to this technique as the “applied paper process.” The method is unconventional, if not wholly unique. I am aware of one filmmaker, David Gatten, based in Ithaca, New York, who used Scotch tape to similar ends in producing "Moxon's Mechanick Exercises," but interviews with Gatten offer scant information on the specifics of his process. With my technique, I apply the tape to a page (typically from a newspaper or glossy stock magazine), and then soak the page briefly in warm water to saturate the paper and loosen the adhesive. Next, I pull the tape gently from the page and rub off whatever paper still clings to the adhesive. A thin layer of ink from the printed image is left behind. The process reminds me of the way Silly Putty can pull an image away from newsprint. Results vary, depending upon the weight and grade of paper and the variable quality of the adhesive on the splicing tape. The process generally leaves the image intact upon the surface of the tape and sufficient adhesive for the next step: adhering the tape to clear 16mm leader.

In collecting source material from magazines, I am attracted primarily to image dichotomies: shots of nature unspoiled by man (the meandering shoreline of a lake / the cracked, windswept desert floor) and pictures of the man-made (the steel girders of a suspension bridge / a winding rural highway). I look for two things: 1) interesting shapes and patterns which, when extracted from the whole, become arresting visual fragments, and 2) specific images (such as a face, a clock, a pressure gauge) that, when lifted, may remain intact but are not readily identifiable when set in motion. I avoid iconic images, because I want my subject matter to be suggestive or evocative without being readily connotative so that viewers have the most leeway possible to interpret the imagery. Likewise, I don’t agonize too long over a photograph; I quickly flip through the pages of a magazine, pulling what appeals to me, leaving what doesn’t. I limit myself to black-and-white photographs. In past projects, I have worked with both monochromatic and color imagery. While black-and-white reproduces well, experiments using color have been disappointing, tending to appear dull or washed out.

Once the tape has been applied and images lifted, I assemble the strips of splicing tape end-to-end on the clear leader, applying them one after another in a linear fashion. Aspects I consider when assembling the strips of tape include the presence or absence of residual paper fibers; the lightness or darkness of an image, as evidenced by contrast; the general opacity or transparency of the image; and geometric patterns, lines, and positive or negative space. What unifies the imagery is the dominance of visual texture, and above all, the screen-like ink pattern left by the printing process, which when projected creates a vibrating surface. The filmstrip becomes a scroll of sorts, with each strip of applied and image-laden splicing tape functioning like a phrase in a paragraph. A dialogue unfolds within and between the strips of tape as I assemble them. Each preceding strip informs the strip that follows. In doing so, I create an overarching visual narrative for myself, one that contains particular points of articulation: complementary imagery (the structure of a tree trunk / the spans of a steel girder bridge) and oppositional imagery (the organic folds of an elephant’s hide / the cracks in a desert floor).

Despite the rich imagery and texture contained within each filmstrip, very little of it will ever be seen in an unadulterated form. It is transformed during the transfer process and again in the assembly environment. This is not to say that the original strips that make up the film cannot be seen. Several of Stan Brakhage’s handpainted filmstrips either are framed or reprinted in publications. Bruce Conner frames some of his films in their entirety, notably "Cosmic Ray" and "Ten Second Film." I, too, display painted filmstrips of past projects like "Hurricane" and "Surfacing."

The relationship between spectator and strip is fundamentally changed from an ephemeral, visual experience when the filmstrip is projected, to a tactile, tangible object when it is framed. There is no longer the sense of perpetual present tense that a moving image implies; the still image suggests instead a static past tense. Each frame is as it was when it was created. Each exists independent of the frame before and the frame after. The strip also can be viewed as a canvas, rather than a collection of discrete frames, since each strip is painted as a whole, rather than frame-by-frame. The strip in a frame can be touched and carefully inspected, rather than merely beheld on a screen. Details that would never be seen if the strip were projected are readily apparent. Color, line and texture are represented as they would be in a painting. In short, framing the filmstrip treats it as static visual art, rather than animated visual entertainment.

Photocopying on film

A second technique, similar to the applied paper process, involves photocopying directly onto the 16mm leader. This mode of transfer requires an inexpensive, desktop photocopying machine (high-end, professional machines tend to jam). The process is relatively straightforward. I first affix strips of clear 16mm leader to a sheet of paper, emulsion side up. My source materials again are black-and-white images printed in newspapers and magazines. I next lay the original image on the photocopier glass and place the sheet with the attached leader into the manual-feed paper tray. I activate the photocopier, and a portion of the image is transferred onto the leader.

As with the applied paper process, photocopying distills image fragments from a whole, although in a less precise way. With the paper process, I apply tape to the precise portion of a photograph I wish to isolate. With photocopying, when I affix the leader that will receive the image to a piece of paper, I can only approximate where the source image will fall. Some of it ends up on the paper, some of it ends up on the leader. The resulting images on the leader are always different from what I expect them to be. They appear to have been sketched with charcoal — the photocopied image looks mottled and uneven. When projected, the image is lighter and has less contrast. It feels grittier, dirtier than images lifted with splicing tape. This grittiness becomes the unifying characteristic when I assemble the sequences of photocopied strips.

Painting on film

The third technique I use in Means and Meditations is painting on film, a technique favored by numerous experimental filmmaking pioneers, including Stan Brakhage, Harry Smith and Norman McLaren. Like these artists, and like those who paint on canvas or paper, I must make a number of choices before I begin painting. I have to consider pigment type (acrylics, watercolors or inks), brush type (sable or synthetic, size and style) and other marking and application media (sponges, foam brushes, cotton swabs, Sharpie pens, etc.). Equally important is the selection of the color palette itself: Do I limit myself to the primary colors or more subtle hues? Do I choose from the warmer end of the color spectrum, such as reds, oranges and yellows? Do I choose cooler colors, such as blues and greens? Perhaps earthier tones? Often as not, I rely on some kind of developed instinct, one Henri Matisse referred to in "Notes of a Painter": “The chief aim of color should be to serve expression as well as possible. I put down my colors without a preconceived plan. If at the first steps and perhaps without my being conscious of it one tone has particularly pleased me … when the picture is finished I will notice that I have respected this tone while I have progressively altered and transformed the others. I discover the quality of colors in a purely instinctive way.” Matisse’s instinct clearly was finely tuned and not at all naïve, as his statement may suggest. My instinct, too, becomes more discriminating as time passes. My first choice or two is instinctive, yes, but I make subsequent decisions based on my assessment of those initial choices — acrylic paint, India inks and watercolor have unique characteristics, and not all have an affinity for celluloid. It becomes a cumulative, informed experience.

But why use colored paints at all, when I reject the use of color imagery in the applied paper process? Simply, I want color — vivid color — in my video. I want to endow the imagery with a vitality that I feel black-and-white alone cannot convey. Past experiments with color imagery in the applied paper process have not produced the vibrant kinds of color that I find paints and inks can give.

I begin painting with India ink, which Brakhage favored in his work. India ink binds well with the celluloid, is translucent, is fairly versatile in its application (it creates fine lines when applied with an ink pen or Rothko-esque washes of color if applied with a foam brush); most importantly, it allows light to pass through when projected. India ink has its limitations, too. The ready-made colors come in a fairly limited palette and they do not blend easily to create new colors. Given their fluidity, they do not lend themselves to layering in the way a thicker medium like acrylic does, and my ability to control opacity is limited. Initially, I use a color wheel to guide my selection of inks, pairing complementary colors, for example. But after a few early experiments, where I explore how the ink behaves as a medium, I begin, like Matisse, to gravitate instinctually toward a range of colors. Blues and greens emerge as the dominant colors of my palette early in the process. After I generate a considerable amount of imagery with these colors, and as I develop the strata and consider how to represent each one (especially the terrestrial realm), earth tones begin to dominate.

I also use acrylic paint and ink. These media offer me many things India ink does not, namely a wider color palette, the ability to create textured effects with the stroke of a brush or the dab of a sponge, and more control over opacity and transparency. Acrylic has a tendency to betray the gesture of the brush stroke in a way India ink does not. A swipe with a flat brush leaves a textured corduroy-like pattern in its wake which, when scanned or projected, comes to life as a continuous, gently flowing river. When dabbed onto the celluloid with a flat brush, acrylic creates a visual texture that is more staccato; when applied with sponge or cotton swab, it leaves an uneven, mottled texture resembling stone or brick. Distressing this medium on the celluloid with a wire brush or sandpaper adds another layer of visual texture. A sharp twist of the brush leaves spirals in its wake, which dance like bubbles or balls when projected. A hatching gesture produces thin, reedy lines; sandpaper creates a worn, aged texture. Rubbing alcohol acts like a solvent; a dribble of the liquid leaves behind a beautiful trail resembling a dry riverbed.

I also experiment briefly with watercolor, which has a similar capacity for translucency as does India ink, but also acts like acrylic in that it can be opaque and receive brush texture well. However, watercolor does not bind easily with the celluloid, and once dry tends to crack and peel, making it an unsuitable medium for painting on film. The few filmstrips I do paint with watercolor I must scan as soon as they dry, and even then, the paint quickly chips and flakes off. I spray clear acrylic fixative on the strips once they dry in an attempt to prevent them from deteriorating, but the paint still peels away after about a week.

Later, I take inventory of which strips I use and which I leave out of the final video. The majority of the strips I select are painted with acrylic; I use no strips painted with India ink. This surprises me somewhat, since India ink was the dominant medium in my previous projects, but not entirely: acrylic proves to be the more versatile medium in almost every way.

THE TRANSFER AND ANIMATION PROCESS

Creating the imagery is but one step in the process. Once the strips have been assembled, they are animated, which is a two-step process. First, the imagery is captured digitally through one of two devices: the Steenbeck flatbed film editor or the desktop computer scanner. Next, the imagery is imported into Final Cut Pro, a common video editing software package. There, it is manipulated further and assembled into its final form.

The Steenbeck

Prior to the late 1980s, before digital technologies revolutionized the way films are edited, flatbed film editors such as the Steenbeck were the most common means of editing movies. The mechanics of the device, with its lens and prism, emulate the lens and shutter of a film projector, breaking the filmstrip down into a series of frames which move through at a rate of 24 frames per second, creating the illusion of motion as the image is projected on a small screen. As film entered the video age in the ’80s, some Steenbecks (including the one I use) were modified with internal video cameras so film could be transferred to videotape.

The device is fairly simple to operate. The film reel is placed on a motorized plate to the left, threaded through the projection system, and attached to a take-up reel atop another motorized plate on the right. A shuttle controls the movement of the film through the projector, which can be run at a fixed 24 fps, or at a range of speeds from slow to fast. Detail suffers in the transfer; the Steenbeck cannot match a high-quality professional telecine process. The video camera’s lens tends to be somewhat nearsighted — the image is soft and slightly blurry, and fine details are lost. The camera “sees” everything through a filter of sorts — whites are not truly white, but have a yellowish cast. Some of this can be corrected, if so desired, in Final Cut Pro, but not completely. Generally, I treat this as an aesthetic characteristic of the process.
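The yellowish cast can be thought of as a per-channel gain error, and correcting it amounts to rescaling each color channel so that a sampled "white" reads as true white. A minimal sketch of that arithmetic in Python (the function name and sample values are hypothetical illustrations, not measurements from my transfers or Final Cut Pro's actual color corrector):

```python
def white_balance(pixel, measured_white):
    """Rescale RGB channels so that measured_white maps to pure white.

    pixel and measured_white are (r, g, b) tuples of 0-255 values;
    each channel is scaled by 255 / its measured white level, then
    clamped so nothing exceeds the maximum.
    """
    return tuple(
        min(255.0, channel * 255.0 / white_channel)
        for channel, white_channel in zip(pixel, measured_white)
    )

# A Steenbeck-style cast: "white" comes through strong in red and
# green but weak in blue, producing a yellowish tint (hypothetical).
cast_white = (250, 245, 200)

# The sampled white itself is pulled back to neutral:
print(white_balance((250, 245, 200), cast_white))  # (255.0, 255.0, 255.0)

# A midtone with the same cast is corrected proportionally:
print(white_balance((125, 122, 100), cast_white))
```

The correction is only partial in practice, as noted above, because clipped highlights and the camera's soft focus cannot be recovered by rescaling alone.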

The scanner

A consumer-grade computer desktop scanner, on the other hand, treats the filmstrip not as a moving image, but as a static one, akin to a photograph. I use the scanner in the same way I would if I were digitizing a picture or a document. I place a filmstrip on the scanner bed; I scan the image at a particular resolution and import the digital image into editing software like Adobe Photoshop. Detail is far sharper, and color is much truer to the original strip, than with the Steenbeck. The scanning process affects how much of this detail can be seen. For example, Final Cut Pro assumes a screen dimension of 720 pixels wide by 480 pixels high. A strip of film scanned to fit these dimensions will fill the video screen, but will not permit much magnification of the image. Scanning the strip at a greater pixel dimension than needed to fill the screen allows me to delve into the strip and explore the subtle detail of the magnified image once it is imported into Final Cut Pro.
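The relationship between scan size and zoom headroom is simple arithmetic, and can be sketched as follows (the 720-by-480 timeline size is from the text; the larger scan dimensions are hypothetical examples):

```python
# Final Cut Pro timeline frame size, as described above.
FRAME_W, FRAME_H = 720, 480

def max_zoom(scan_w, scan_h):
    """Greatest magnification at which the scan still fills the frame.

    The scan can be enlarged until its smaller relative dimension
    matches the frame; beyond that, empty edges appear on screen.
    """
    return min(scan_w / FRAME_W, scan_h / FRAME_H)

# A scan made exactly at frame size leaves no room to zoom in:
print(max_zoom(720, 480))    # 1.0

# A hypothetical scan at four times the frame dimensions allows up to
# 4x magnification while still filling every pixel of the screen:
print(max_zoom(2880, 1920))  # 4.0
```

This is why oversampling the strip at scan time pays off later: the headroom for panning and zooming is fixed the moment the scan is made.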

The process fundamentally alters the boundaries of the image, transforming the strip from being perpetually in motion, evolving and seemingly limited only by the frame of the screen, to an image with clearly defined perimeters. Navigating the image is limited to the horizontal (X) and vertical axes (Y), as well as the third dimension of depth (Z axis). The strip can be manipulated in image editing software, just as any photograph. Holes can be cut into the image which, when imported into video editing software, allow the viewer to peer through the image to layers beyond, be they other stills or a moving video image. Elements (a particular swirl or blob of paint) can be extracted from the whole, creating discrete elements that may operate either in tandem with the source image when animated, or freed to inhabit the screen of their own accord. Yet it is difficult if not impossible to escape the feeling of confinement that this photographic method imposes on the image.
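In digital terms, the hole-cutting described above is an alpha mask: wherever the top layer is cut away, the layer beneath shows through. A toy sketch, with nested lists standing in for rows of pixels (all names and values here are illustrative, not part of any editing software's API):

```python
def composite_with_holes(top, bottom, holes):
    """Return the top layer's pixels, except where a hole is cut.

    top and bottom are equal-sized grids (lists of rows); holes is a
    set of (row, col) positions removed from the top layer, through
    which the bottom layer becomes visible.
    """
    return [
        [bottom[r][c] if (r, c) in holes else top[r][c]
         for c in range(len(top[r]))]
        for r in range(len(top))
    ]

top = [[9, 9, 9],
       [9, 9, 9]]
bottom = [[1, 2, 3],
          [4, 5, 6]]

# Cutting a single hole at row 0, column 1 reveals the layer beneath:
print(composite_with_holes(top, bottom, {(0, 1)}))
# [[9, 2, 9], [9, 9, 9]]
```

The same idea extends to the extracted swirls and blobs mentioned above: an element is simply a mask inverted, keeping a region of the top layer while discarding the rest.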

Final Cut Pro

Once the filmstrips are digitized via the Steenbeck or scanner, I import them into Final Cut Pro. The software functions primarily as a video editing tool, and serves much the same function as the Steenbeck. Individual video clips can be played back and forth, edited for content and duration, and assembled on a virtual timeline to construct a complete narrative. The program also contains a number of digital filters and effects. Some of these effects (fades, for example) mimic those that can be created in film using an optical printer. Other effects govern properties unique to video (color balance, brightness and contrast, or hue and saturation). Some of these effects, like a simple fade, function as they would on celluloid, while others, such as superimposition, exhibit fundamental differences between the film and video environments.

Despite this preponderance of digital effects and filters, I limit myself to a few. One effect I frequently use is the Gaussian blur. This filter allows me to control the degree to which a layer of video is perceived as in focus. I use this filter frequently in the terrestrial portion of the video to create a sense of depth between layers, much like a telephoto lens on a camera would. Another effect I find useful is speed control. This effect allows me to control the speed at which the video clips travel, and indirectly, the mood of the piece. Slow motion may evoke a sense of tranquility, while a faster velocity may feel urgent or rushed. I also use the opacity effect frequently. This effect controls the degree of transparency a video layer has. I use this filter as I do the Gaussian blur to create a sense of depth by making some layers semi-visible. I also employ a number of video composite effects, which control how two or more layers are superimposed.
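The opacity effect in particular can be understood as a per-pixel weighted average of two layers, the standard "over" blend. The sketch below is a plausible model of what an editor's opacity slider computes, offered as an assumption about the general technique rather than a description of Final Cut Pro's documented internals.

```python
# Sketch: the standard "over" blend as a model of a video layer's
# opacity control. Channel values are 0-255; opacity is 0.0-1.0.
# This is a generic compositing formula, not Final Cut Pro's code.

def blend_over(foreground, background, opacity):
    """Blend a foreground channel value over a background one."""
    value = foreground * opacity + background * (1.0 - opacity)
    return int(round(value))

# A fully opaque layer hides the background entirely...
print(blend_over(200, 50, 1.0))  # 200
# ...while 50% opacity lets the layer beneath show through.
print(blend_over(200, 50, 0.5))  # 125
```

Lowering a layer's opacity, like blurring it, pushes it back in perceived depth: the more the background bleeds through, the further away the layer reads.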

By manipulating sharpness, brightness and contrast, what were once crisp images of figures or landscapes become the luminous blobs seen in the first section of the video. By superimposing multiple layers and adjusting how they are composited, I can submerge the paper beneath layers of painted filmstrips in the third section. While these segments are effective, functional, even hauntingly beautiful, I feel a small sense of loss for the original filmstrip. Like strata of earth or rock, the unaltered paper process images exist for a brief moment, and then are buried and forgotten.

It is difficult for me to speak warmly of the digital video portion of the production process. The computer is where the video comes to life, and it can be exhilarating to see a filmstrip move and breathe. But using a computer is primarily a cerebral endeavor. It lacks the tactile, physical pleasure I find in painting on celluloid; I can’t get my hands dirty by pressing buttons on a keyboard. Painting engages my whole body. A delicate line requires a gentle flick of a finger. A distressed swirl, a violent thrust of the wrist. A wash, the bold sweep of my entire arm. I stand. I sit. I crouch. Sitting passively in front of a computer for hours in a windowless room is a poor substitute for the kinetics of painting.

THE SOUNDTRACK

I intend from the beginning to endow the video with a soundtrack. Especially for people of my generation and younger, who were raised on a steady diet of MTV, audio can vastly expand the emotional and intellectual impact of a moving picture. Responsibility for the soundtrack rests primarily with my composer and fellow graduate student Michael Vernusky, who is pursuing a Master of Music degree in composition at the University of Texas. Our conversations about this project begin during earlier collaborations, namely Surfacing, which was inspired by his short composition for guitar and digital media entitled “Selah.” What attracts me to his work is a willingness to explore the electronic realm of composition, and especially his ability to use digital media to distort and transform traditional analog instruments (such as the piano) in decidedly non-traditional ways (drawing a bow across a block of Styrofoam or the strings of an acoustic guitar). Given my own interest and experiments in using analog film techniques within the digital realm ("Floating in the Ether"), collaboration with Vernusky seems a logical step.

In constructing a soundscape, I ask two things of Vernusky: avoid recognizable instrumentation and don’t make the soundtrack melodic. I believe it essential that the soundtrack reflect the abstract nature of the visual content. As a culture, we are accustomed to music accompanying an image in nearly everything we see and use to communicate with: the cinema, television, commercials, the radio, even our cell phones and computers. When played or performed within the context of these media, music (especially melodic compositions) becomes a simple prompt. It assumes a decorative value with clichéd emotional connotations (to say nothing of the additional connotations that a recognizable singer or composer adds to the equation): the lighthearted pop song, the dark and brooding techno score, the stately classical composition. It is what differentiates non-commercial entertainment from the commercial, which Gene Youngblood criticizes in stating, “Art explains, entertainment exploits. Art is freedom from the conditions of memory; entertainment is conditional on a present that is conditioned by the past.” I feel similarly about the role of the soundtrack. An abstract score, like abstract imagery, creates a space for the mind to wander freely, drawing conclusions and making associations where it will. A traditional score offers no such freedom, describing instead a tightly circumscribed and predictable set of conditions.

We begin our collaboration in late fall with a series of informal get-togethers, as I am painting my first filmstrips. During these first meetings, I describe the general structure of the video and the characteristics of each of the three parts. Functional questions arise: shall the soundtrack mirror the imagery? Will it anticipate images? Should it react to the images? Is the conclusion ambiguous or resolute? Should the imagery inspire the soundtrack, vice versa, or somewhere in between? We decide that, in general, the imagery will inform the composition’s overall tone. Beyond describing to him how I believe certain forms should sound, I must trust that Vernusky can find the right tones and effects to convey such characteristics.

Just as the visual narrative follows a linear progression, so too does the soundscape, which Vernusky defines as “a beginning point, traveling to another point.” Unlike traditional compositions, this composition does not have a circular or verse-chorus-verse construct. Instead, each section is endowed with unique sonic characteristics. The first part consists primarily of staccato tones that occupy the mid- to upper range of the sonic spectrum: crackles, granular synthesizers, gamelans, white, pink and brown noise. The crackles heard at the very beginning of the video — produced when cables are plugged into a 30-year-old analog synthesizer — are one such example. Is the sound deliberate, or an error? What makes the sound — a needle on a record, or perhaps static electricity? Whatever the connotation, it fosters a sense of anticipation. By the third section, the soundscape shifts into the lower frequency range — deep bass tones emerge, perceived as much with the body, resonating within the chest cavity, as with the ear. As the pace of the visuals slows dramatically, the audio stretches out, and tones and effects linger.

Vernusky and I work independently of one another for the most part. Doing so offers the advantage of allowing us each to work at our own pace. The disadvantage is that we meet infrequently and end up working in a vacuum without immediate feedback from one another, which is disruptive when one of us is on a creative roll. On more than one occasion, Vernusky is compelled to stop working because I have not yet assembled sufficient visual material to accompany the audio. This comes to a head when I find myself unable to bridge the terrestrial and atmospheric sections of the video — a two-minute gap in the picture — and the composer, without visual material to guide him, ceases work for two weeks. I eventually resolve this issue to my satisfaction, although I cannot help but wonder if the idle time disrupts Vernusky’s own workflow.

These disruptions and periods of non-communication result in tension between the audio and video, moments when certain sound effects seem to come out of nowhere, or have little connection to the images on the screen. One such instance is found during the transition between subterranean and terrestrial strata, when the white swirls slowly dissolve into the blue/green strip. In this case, the composer articulates the violence of surfacing, of arriving at this new stratum, far more eloquently than I do with the painted visuals. There is, I think, a moment of discomfort onscreen as the audio and video clash ever so slightly. In other instances, the two elements integrate seamlessly. An outstanding example can be found in the sequence that follows the swirling imagery midway through the terrestrial section. During one of our meetings, I describe this image to the composer as a hot, dry desert, a calm if lonely place. Vernusky responds with a sparse soundscape of low-frequency rumbles and hollow-sounding effects that evoke images of winds blowing through empty canyons.

This tension is, I believe, exacerbated further by the fact that at no point do Vernusky and I rigidly synchronize the audio and video. Instead, Vernusky composes by watching the video and loosely timing his composition to corresponding imagery. During occasional get-togethers, we analyze how the two elements function in tandem. Sometimes, certain tones and effects he employs fit the video to a T; at other times, they are ill placed, but easily integrated elsewhere. At other points, the instrumentation doesn’t work at all, and is promptly removed.

In general, I am quite satisfied with the soundtrack, although I question how equitable a process the collaboration has been. Looking back on the experience, I acknowledge relying less on Vernusky’s score to inspire and motivate the imagery than vice versa. It is to his credit that he is able to subvert his own artistic ego in deference to mine in dictating the overall tone of the video. While he has admitted on occasion that this is a minor frustration, he has never once let it dampen his enthusiasm for the project. And therein lies, I believe, the larger tension between film or video and audio. In a typical narrative, the music serves a decorative purpose; in a music video, the case is often the reverse. To expect that sound and image can inspire one another equally is a tall order indeed. I do not think such interdependence is impossible, but given my experience with "Means and Meditations," I believe it requires working much more closely than Vernusky and I were able to do.

INTERPRETATION

The video does not exist in a vacuum; it exists in a public sphere, where it offers a discourse to those who see it. The video is a “text” in the sense that it must be “read” in order to be understood, its codes deciphered, its messages received. In turn, these interpretations construct a larger language that shapes the video’s identity in this public sphere.

Messages and codes

In "Image - Music - Text," Roland Barthes argues that the motion picture and other visual media such as painting carry two distinct messages: denotative and connotative. As with the photograph, the denotative messages (a woman, a house, a knife, and so on) are straightforward. Unlike the photograph — a singular moment in time, forever frozen — the motion picture moves; the singular images interact with one another and take on symbolic meaning. The woman (in this example, Maya Deren in "Meshes of the Afternoon") confronts her doppelganger in a house; the knife becomes a weapon she uses to slay her double. To interpret and comprehend coded visual communication like this film, Barthes might say, the viewer is compelled to draw upon certain knowledge and experiences, both individual and collective. Indeed, he concedes, even the representative photograph carries an embedded code that the viewer must unravel in order to fully comprehend it.

But what of the abstract moving image? What are its codes and messages, and might a viewer be able to comprehend an abstraction using the methodology Barthes describes for photography and traditional cinema? The code embedded within my video is the visual vocabulary compiled in the process of creating the video and its accompanying imagery. Only I know it precisely; it is a code of organization (as opposed to interpretation). The message (or theme) is of a journey, or transition, between three planes. But what of the viewer, who is not privy to such knowledge (and highly unlikely to encounter this report prior to viewing the video)?

In traditional cinema, Barthes writes, imagery is bound with uncertainty concerning the meaning of objects. This uncertainty is resolved by drawing upon knowledge and experience. There exists a generally accepted code of cinematic communication that most film employs and most viewers understand. Elements of this code may govern structure, such as the Kuleshov effect, which states that shot A, when combined with shot B, will produce a third meaning, C. Or they may govern narrative and genre, such as the timeless dramatic conventions of man versus man, man versus nature, man versus himself, etc. Traditional cinema may play with these codes (the non-linear structure of Quentin Tarantino’s "Pulp Fiction" or the symbolism of Maya Deren’s "Meshes of the Afternoon"). But many viewers, at least those familiar with Western cinema, will expect a (reasonably) satisfying narrative conclusion, a musical soundtrack designed to codify their emotional experience, and imagery containing individuals, famous or otherwise, who behave with a prescribed set of characteristics we can all identify with.

With abstract imagery, this uncertainty is exacerbated. There is no representational imagery, few familiar figures, no dialogue or text (apart, perhaps, from the title); even a melodic soundtrack is absent. In other words, the audience is denied many of the traditional cues of cinematic comprehension. Without these elements to answer the question “What is it?” — much less provide the viewer with what Barthes refers to as a “correct level of perception” (in traditional cinema, a clearly identified protagonist, for example) — the responsibility for interpretation and comprehension is the viewer’s burden. They must draw on their own knowledge (experiential, cultural, historical and, most importantly, aesthetic) to invest in images like the painted 35mm filmstrip resembling a cosmic nebula. This in turn evokes emotions, triggers memories and conjures internal narratives with each subsequent viewing. What sort of knowledge might assist a viewer in watching and interpreting my video? An understanding of abstract art helps, even if it is only the ability to recognize it as a particular style or movement (something not particularly difficult in an age when posters of Mondrian’s and Picasso’s paintings are sold at the local mall). An appreciation of experimental film and video helps, too (again, not difficult in an age when film festivals are held in nearly every city). People generally acknowledge that there are many ways of comprehending a narrative beyond that of conventional filmmaking.

In doing so, each person creates a personal lexicon. This lexicon functions as a narrative architecture, an interpretive template to superimpose on the apparently unarticulated lexicon of the filmmaker. But like anything grounded in the textual world, any given lexicon is not the finite expression of the language of the image. While some elements can be readily articulated (“cosmic”), other elements may elude description, or those descriptions may be poor substitutes for the immediate experience itself. Likewise, there is no single lexicon that can (or even should) be developed for this video; any given individual can generate multiple readings of a single entity. Furthermore, as Barthes notes, a complete language of the image includes the sum total of these lexia as they accumulate over time, and allows for random chance to inform the language as well.

The image as text

It is the abstract image’s limitless capacity to convey meaning and receive interpretation that makes it so beautiful to work with and to appreciate. “An absence of meaning full of all meanings,” as Barthes put it. But he questioned whether purely “naïve” (literal) imagery could exist, at least in the realm of photography. Can such “naïve” images be found in the abstract? Doubtful. In contemporary American society, the abstract image is nearly as ubiquitous as the advertising photograph. Abstractions can be found in the graphics that ESPN superimposes on the sporting events it televises, and in the logos of Fortune 500 companies like Lucent Technologies (as well as the advertising these companies generate). Such imagery may not possess the clear articulation of the advertising photograph, but it harbors intentions nevertheless. Barthes, I believe, would argue that the abstract image is no different than the advertising photograph — both contain denotative and connotative meaning that can be received and understood, and both are subject to the dictates of the text, figurative and literal.

There is, first, the literal text, embodied by the title itself. Even placed at the video’s conclusion, the title is loaded with meaning in its own right. Beyond that (especially in the worlds of the film festival and gallery show) lie the printed synopsis, the artist’s statement, the curator’s remarks, the critic’s assessment. The mere mention of the artist’s name in relation to a given work can conjure up associations and expectations. All of these point the viewer toward certain signifieds and away from others, intended or not. This repressiveness of the text, as Barthes describes it, places the video in a straitjacket, no matter how much I may want the abstract imagery to exist in a space devoid of implied or suggested meanings.

What does the title "Means and Meditations" imply? Using the dictionary as a guide, the word “means” can be defined in several ways: it can convey or denote; it can signify or represent; it can have as a purpose or an intention; it can destine for a certain purpose or end; it can have as a consequence, bring about; or it can imply value. In this case, it is primarily expressing a purpose, that of meditation. What of “meditations”? Again, the dictionary offers ground rules: first, it suggests repeated acts of meditating. What is meditating? A devotional exercise of or leading to contemplation; it is also a contemplative discourse, usually on a religious or philosophical subject. In this case, it functions as the former. The title, then, serves as an anchor, linking the image to meaning. Given the abstract nature of the imagery, the title likely will be questioned as the video is interpreted. Does one meditate upon the images upon the screen? Who is doing the meditating? What if one does not, or cannot, meditate? Ultimately, each person will bring their own connotations to the video, which in turn will generate the questions they must ask if they are willing to interpret and understand what they see.

There exists figurative text beyond the literal. These are image and sound, and they can be similarly read. Because both are abstract, they lack a clearly denotative message. There is no woman, no knife (as with "Meshes of the Afternoon," for instance) to readily identify in my video, only color and form. Despite being an apparent blank slate, there exists a general cultural knowledge that can inform interpretation. For example, we live collectively in what once was referred to quaintly as “the space age” — satellites, rockets, and the moon landing are no longer the stuff of science fiction. As such, most of us have at some point or another seen a Landsat satellite image of the earth, or a picture of a distant galaxy taken by a long-range telescope. There is, given this general cultural familiarity, a good chance that the portion of the video I refer to as a “nebula” may evoke similar cosmic connotations for someone else.

The soundtrack offers clues as well. We live in a mechanized society, surrounded by the noise of machines. Powerful internal combustion and rocket engines emit deep bass rumbles. Computers and other electronic devices hum and whine. The crackle heard at the very beginning of the video may evoke the nostalgic image of a record player, or perhaps the static of a faraway radio station. Sounds such as these can evoke stronger associations when coupled with images. The low-frequency tones at the end of the video, in tandem with the cosmic imagery, may suggest a rocket ship traveling through the deepest reaches of space (something anyone who has seen a science fiction film can relate to). The tones heard at the beginning of the blue-green terrestrial imagery (which contain digitally altered recordings of a faucet dripping into a sink), might imply wetness or moisture. Like the imagery, these audio clues will be mediated by the viewer’s individual and collective knowledge and experience.

Although I am conveying a narrative through the use of abstract imagery, the imagery constitutes a language nevertheless. It is a familiar language, too, one that has infiltrated the popular culture via the paintings of Jackson Pollock and the psychologist’s Rorschach ink blots (to cite but two examples). It is because of this cultural familiarity with abstraction that "Means and Meditations" can be read and understood as a more representational film or photograph might be. It is abstraction’s affinity to convey and receive any number of meanings and interpretations that makes it so personally enriching.

INFLUENCES

Every artist, no matter how original, is both influenced by and reacts against precedents established by artists before him. My initial film education at the University of Colorado exposed me to a canon of classic postwar American experimental films by Maya Deren, Stan Brakhage, Kenneth Anger, Bruce Connor and Hollis Frampton, to name but a few. These filmmakers exerted a profound influence on my earliest work and taught me to view the ordinary in extraordinary ways. At the University of Texas, I continued to learn about experimental filmmakers, among them Jordan Belson, James and John Whitney, Scott Bartlett and Dan Sandin.

Brakhage has the greatest influence on me. His handpainted films have long appealed to me as both a maker and aficionado of experimental film. When I was a student at Colorado, Brakhage was a larger-than-life presence among the faculty. I had the opportunity more than once to watch as he optically printed his painted filmstrips. My initial efforts at painting on film are attempts to emulate the techniques Brakhage used. Other artists, particularly Bartlett and Sandin, exert an indirect influence on my work. Bartlett’s 1967 film "Off / On" remains a landmark for its merging of filmed and videotaped footage into a seamless abstract narrative. It possesses a visual richness and energy that I have yet to encounter in other films; it moves me in a way few others can. Sandin, who worked exclusively with videotape and analog video processors in the 1970s and ’80s, remains one of the few artists to fully exploit the merger of video and computer technology, demonstrating how the two in tandem could fundamentally alter everyday imagery. Seeing their work encourages my own experiments in combining analog film and digital video technologies.

Today, experimental film and video artists continue to work with abstract imagery, carrying on the traditions established 30, 40, even 50 years ago. Some, like Barbel Neubauer of Germany ("Moonlight") and Richard Reeves of Canada ("Linear Dreams"), scratch or paint directly on the filmstrip, much like Brakhage, Norman McLaren and Harry Smith did in their work. Dutch filmmaker Joost Rekveld uses film to explore the abstract qualities of light itself ("#23.3," "Book of Mirrors"), while Austrian filmmaker Anna Krautgasser ("Rewind") works strictly in the digital realm, creating computer-generated abstractions set to electronic music composed by DJs. At the same time, these contemporary artists are moving out of the film festival circuit and gallery space and into newer venues, including raves and digital media forums.

Local institutions such as the Austin Museum of Digital Art and international organizations such as the iota Center in Los Angeles and ARS Electronica in Germany guarantee that abstract experimental film and video will continue to have a home in the future.

Where does my work fit within this vast historical and contemporary body of work? I am clearly influenced by historical precedent and incorporate some historically documented techniques, yet my work differs dramatically in other aspects. Although some of my source material relies on the filmstrip as its base medium, not a single frame is manipulated through rephotography as in the case of Brakhage. I also do not rely solely on the filmstrip as image-receiver as do McLaren and Reeves. My work is not generated entirely within the computer, as is the work of digital pioneer John Whitney and contemporary artist Krautgasser.

My work evolves within and between these analog and digital worlds. I am using existing video technology in unconventional ways to manipulate imagery that has been inserted into a digital environment. While Final Cut contains many of the same properties and effects of more traditional video editing software, it is not generally regarded or even marketed for such capabilities. Dan Sandin’s work of the 1970s and ’80s is a forebear, too, made using bulky analog video processors — which relied on a tangle of patch cables, knobs and buttons to process the video signal — and primitive digital computers that have less processing power than a cell phone or Palm Pilot today. Likewise, the miniDV and DVD technology on which these consumer-grade video editing software packages are based has a far greater resolution and fidelity than the analog video technology of his day. It is in occupying this unique middle ground that my video stands apart from my predecessors and contemporaries. It simultaneously takes its cues from the past and the present.

CONCLUSION

Documenting the creation of this video has given me a newfound appreciation for the process of making and how it informs the finished product. I have become more critical of my artistic choices, which in turn I believe makes me a more conscientious creator. Treating each element as a precious commodity and critically assessing its worth has given this video a focus that has been lacking in many previous projects. It is my most deliberate, methodical and articulate work to date. I cannot speak to whether or not the video will enjoy any measure of critical or popular success, but in some ways, it has already succeeded by surviving the long gestation period and emerging whole in the world. This is, in itself, an accomplishment.

Thursday, August 14, 2008

In "Human Cognition and Social Agent Technology," ed. Kerstin Dautenhahn, John Benjamins Publishing Company.

1. Introduction

My intention in this essay is to discuss agent building from the perspective of the visual arts. I will argue for the value of artistic methodologies to agent design. I will not advance some futuristic version of the romantic bohemian artist, agonising over an expressionistic agent in his garret. Nor will I propose the harnessing of artistic minds to the industrial machine. I want to advance another argument which is pertinent specifically to the building of Social Agents. I propose that there are aspects of artistic methodology which are highly pertinent to agent design, and which seem to offer a corrective for elisions generated by the often hermetic culture of scientific research.

When one mentions the uses and functions of art in a scientific context, the understanding is often of superficial manipulation of visual `aesthetic' characteristics in the pursuit of `beauty' or a cool-looking demo. A more sophisticated approach recognises that the holistic and open ended experimental process of artistic practice allows for expansive inventive thinking, which can usefully be harnessed to technical problem solving (this has been the MIT Media Lab position). This approach tacitly recognises that certain types of artistic problem solving compensate for the `tunnel vision' characteristic of certain types of scientific and technical practice.[1]

I have observed previously that the approach to the production of artworks by the scientifically trained tends to be markedly different from the approach of those trained in the visual arts. A case example is the comparison of two works which I included in the Machine Culture exhibition at SIGGRAPH 93 [2]. The Edge of Intention project by Joseph Bates and the Oz group at Carnegie Mellon University was an attempt to construct a knowledge base of plot structure and character development by distilling English literature and drama. Although the project had been in progress for several years, the developers admitted that it was still in its infancy. The audience experience at present was somewhat simplistic: the user (incarnated as one of the agents) could play childlike games (chasing and hiding, etc) with a group of cartoon entities which resembled moody jelly beans. The goal of the group was not to produce agents which were simulations of people, but which were `believable' in their own terms. This `believability' implies an abstraction of what we perceive to be intelligent behavior.

Luc Courchesne's Family Portrait, on the other hand, was comparatively low-tech. It consisted of four stations: four laserdiscs, four monitors and four Macintosh Classics, each with a simple HyperCard stack. The users stood and chatted with interactive video images. Although the interface consisted of using a trackball to choose responses to questions posed by the characters on the screen, the simulation of human interaction was uncanny. The artist has great finesse at simulating human interaction in the social space of the interface, a skill I have called interactive dramaturgy. A particularly effective trick was that the four virtual characters would occasionally break their conversation with the visitors, and turn to interrupt or contradict each other. This illusion of `real people' was aided by the handling of the hardware. The computer and AV hardware was hidden, even the monitor was hidden; the images were reflected in oblique sheets of glass in the darkened space, and seemed to float. Though low tech, Family Portrait was dramatically persuasive in a way that Edge of Intention was not.

The difference in approach of these projects illustrates my argument. One might generalise in this way (with apologies to both groups): artists will kluge together any kind of mess of technology behind the scenes because the coherence of the experience of the user is their first priority. Scientists wish for formal elegance at an abstract level and do not emphasise, or do not have the training to be conscious of, inconsistencies in the representational schemes of the interface. Arising from the tradition of Artificial Intelligence, the Edge of Intention project seeks to create general tools for an interactive literature by analysing the basic components of (rudimentary) social interactions, and building a system for their coordination. The focus of the effort was to build an elegant and general internal system. The interface seemed to be a necessary but secondary aspect, like the experimental demonstration of a proof. The average user, however, will never gain access to the hermetic territory of the architecture of the code, and remains frustrated by the unsatisfying and incomplete nature of the representation of the system in the interface. Courchesne, on the other hand, does not attempt to build a general purpose system, but presents a seamless and persuasive experience for the user. Artists are trained to understand the subtle connotations of images, textures, materials, sounds, and the way various combinations of these might conjure meaning in the mind of the viewer. Artists must be concerned with the adequate communication of (often subtle) ideas through visual cues. They understand the complexity of images and the complexity of cultural context. Of course, the artistic solutions are often highly contingent and specific to a certain scenario, and may not generalise to principles for a class of scenarios. This is not their goal.

While more academic disciplines valorise and reward a `hands-off' approach, rewarding the more purely theoretic, artists are taught to integrate the artisanal and the conceptual (Penny, S. 1997). Artistic practice is the shortest route to the physical manifestation of ideas. According to the traditional view, properly trained, the manual skill of the artist becomes an automatic conduit for the expression of abstract thought. Purely perceptuo-motor and abstract conceptual processes are combined. Artists are judged on the perceived performance of a physically instantiated work, not on the coherence of a theory which may be demonstrated, perhaps obscurely. The criteria for a successful work are based almost solely on its influence on the viewer. An artwork must motivate the viewer to engage intellectually and emotionally with the work. In a good work, the `interface' is finely honed, and engagement should develop over the long term.

This condition of engagement is a paradigmatic case of what Jonathan Crary calls the `techniques of the observer' (Crary, J. 1992). In the book of the same name, Crary argues that pictures would remain meaningless and mute without the unconscious and uncelebrated training of observers, as a cultural group. We are all trained in how we look at and appreciate pictures.[3] The meaning of a work is negotiated by the observer in the moment of looking. Meaning is construed entirely as a result of the observers' cultural training.

A salutary example of the cultural specificity of this training is the history of depiction of `new lands' by colonising peoples. Take for instance the depiction by the British colonists of Australia in the closing years of the 18th century and later. Almost invariably in these pictures, aboriginals look negroid, eucalypts look like elms, kangaroos look like giant pudgy mice and the Australian bush looks like rolling English countryside. It took over 100 years until painters captured the quality of the Australian light. This example demonstrates that what we see depends to a great extent on what we have been trained to see. We extrapolate from our previous experience to explain our new perceptions.

Over the past decade, my artistic practice has developed from the construction of sensor driven interactive installations to systems with at least rudimentary forms of agency. My focus of interest has been for several years what I call the 'aesthetic of behavior', a new aesthetic field opened up by the possibility of cultural interaction with machine systems. I have the luxury of being able to experiment with the modalities of systems, without being constrained by an externally specified task for the system. A secondary interest arising from the first is the potential application of various `Alife' techniques as artistic tools, producing artworks which demonstrate behaviors which go beyond a `locked-down' state machine model. This combination of interests leads me inevitably into agent design. My background in art predisposes me to integrated, holistic, situated and embodied practice (both by the maker and in the agent).

In my own practice I tend to define the envelope of the problem first: the system has to do this on these occasions in this way, it has these physical constraints, this power limitation, etc. From these specifications I work slowly inward, from desired behavior to physical structure to specifics of sensing and actuation, often specifying hardware first, eventually arriving at the set of constraints within which the code must function. Contrarily, computer scientists have a tendency to look briefly at the surface level, identify a `problem' that might respond to a rule-based solution, then dive deep into the abstractions of code at the most conceptual level, building the ramifications of a conceptual design up through the more abstract to the more `mechanical' aspects of the code, finally surfacing to look back at the interface and see if it works. This approach results in fragmentary and inconsistent interfaces.

These are some of the values which I bring into my robotic and agent practice. These positions bring me close to many already established in Cybernetics and in critiques of traditional AI which concern themselves with groundedness, embodiment, situated cognition and emergent behavior, as discussed by Brooks, Cariani, Dreyfus, Johnson, Varela, et al. (Brooks, R. 1991; Dreyfus, H. 1992; Johnson, M. 1987; Varela, F., Thompson, E. and Rosch, E. 1993). By the same token, my training steers me away from the sensibilities of symbolic AI approaches.[4] In the following text I will discuss four recent works as examples of the way these positions arise or are applied.

2. Petit Mal

The goal of the project Petit Mal: an autonomous robotic artwork was to produce a robotic artwork which was truly autonomous; which was nimble and had `charm'; that sensed and explored architectural space and that pursued and reacted to people; that gave the impression of intelligence and had behavior which was neither anthropomorphic nor zoomorphic, but which was unique to its physical and electronic nature (see Plates 1 and 2). Petit Mal was conceived in 1989; construction began in 1992. Since its public debut in February 1995 it has proven to be reliable and robust; it has been shown at many festivals, where it must interact with the public continuously for eight-hour days, for weeks at a time.

It was not my intention to build an artificially intelligent device, but to build a device which gave the impression of being sentient, while employing the absolute minimum of mechanical hardware, sensors, code and computational power. The research emerged from artistic practice and was thus concerned with subtle and evocative modes of communication rather than pragmatic goal based functions. My focus was on the robot as an actor in social space. Although much work has been done in the field of screen-based interactive art, the `bandwidth' of interaction in these works is confined by the limitations of the desktop computer. I am particularly interested in interaction which takes place in the space of the body, in which kinesthetic intelligences, rather than `literary-imagistic' intelligences play a major part. I conceive of artistic interaction as an ongoing conversation between system and user rather than the conventional (Pavlovian) stimulus and response model.

Acknowledging that there is no canon of autonomous interactive aesthetics, Petit Mal is an attempt to explore the aesthetic of machine behavior and interactive behavior in a real-world setting. Every attempt was made to avoid anthropomorphism, zoomorphism or biomorphism. It seemed all too easy to imply sentience by capitalising on the suggestive potential of biomorphic elements. I did not want this `free ride' on the experience of the viewer. I wanted to present the viewer with a phenomenon which was clearly sentient, while also being itself, a machine, not masquerading as a dog or a president.

I wanted to build a device whose physiognomy was determined by brutally expedient exploitation of minimal hardware. The basic requirements of navigation and interaction with humans determined the choice of sensors. The suite of sensors is absolutely minimal: three ultrasonics, three pyro-electrics, two very low resolution encoders and a low-tech accelerometer. The dicycle design offered the most expedient motor realisation for drive and steering but demanded a low center of gravity to ensure stability. This counterweight, swinging as the robot moved, would have caused the sensors to swing radically, looking first at the ceiling then at the floor, so the sensors were mounted on a (passively stabilising) second internal pendulum. In this way the structure specified the necessary extrapolations to itself: the development of the mechanical structure was not gratuitous design but a highly constrained and rigorous engineering elaboration based on the first premise of two-wheeled locomotion. The lower or outer pendulum carries the motors, motor battery and motor drive electronics; the inner pendulum carries the sensors at the top and the processor and power supplies as counterweight in the lower part. The batteries are not dead weight but in both cases also function as the major counterweights. In an analogy to the semi-circular canals of the inner ear, an accelerometer at the pivot of the inner pendulum serves as a rudimentary proprioceptive sensor: it measures relationships between parts of the robot's `body'. It was important to me that this robot was `aware' of its body.

From the outset I wanted to approach hardware and software not as separate entities but as a whole. I wanted the software to `emerge' from the hardware, from the bottom up, so to speak. The code would make maximal use of minimal sensor data input. Petit Mal has had four successive sets of code, each increasingly more subtle in its adaptation to the dynamics of the device and more effective in exploiting the minimal processor power (one 68HC11). My approach has been that a cheap solution (in labor, money or time) to a particular problem which was 70% reliable was preferable to a solution which was 90% reliable but cost several times as much. It was pointed out to me by an engineer that my `under-engineering' approach could lead to a much wider range of possible (though unreliable) solutions. The field of possibility is thereby expanded. Eventually such solutions could be refined. He was of the opinion that this approach could lead to better engineering solutions than an approach which was hindered by a requirement of reliability in the research phase.

In robotics circles one hears the expression `fix it in software' applied to situations when the hardware is malfunctioning or limited. This expression is emblematic of a basic precept of computer science and robotics, the separation of hardware and software and the privileging of the abstract over the concrete. I attempted, in Petit Mal, an alternative to this dualistic structure. I believe that a significant amount of the `information' of which the `intelligence' of the robot is constructed resides in the physical body of the robot and its interaction with the world.

A `Petit Mal' is an epileptic condition, a short lapse of consciousness. The name was chosen to reflect the robot's extremely reactive nature: Petit Mal has essentially no memory and lives `in the moment'. My approach has been that the limitations and quirks of the mechanical structure and the sensors are not problems to be overcome but generators of variety: the very fallibility of the system generates unpredictability. My experience has shown that `optimisation' of the robot's behavior results in a decrease in the behaviors which (to an audience) confer upon the device `personality'. In a sense, then, my device is `anti-optimised' in order to induce the maximum of personality. Nor is it a simple task to build a machine which malfunctions reliably, which teeters on the threshold between functioning and non-functioning. This is as exacting an engineering task as building a machine whose efficiency is maximised.

2.1 Behavior, interaction, agency

The example of Australian colonial painting (cited above) is pertinent to explaining people's behavior toward Petit Mal and the way that behavior changes. Almost invariably, people ascribe vastly complex motivations and understandings to Petit Mal which it does not possess. Viewers (necessarily) interpret the behavior of the robot in terms of their own life experience. In order to understand it, they bring to it their experience of dogs, cats, babies and other mobile interacting entities. In one case, an older woman was seen dancing tango steps with it. This observation emphasises the culturally situated nature of the interaction. The vast amount of what is construed to be the `knowledge of the robot' is in fact located in the cultural environment, is projected upon the robot by the viewer and is in no way contained in the robot. The clear inference here is that, in practical application, an agent is first and foremost a cultural artifact, and its meaning is developed, in large part, by the user and is dependent on their previous training. This means that, in the final analysis, an agent is a cultural actor, and building an agent is a cultural act. Here the rarefied and closed proof system of science is inexorably forced into engagement with the world.

Such observations, I believe, have deep ramifications for the building of agents. Firstly, any effective agent interface design project must be concerned with capitalising on the user's store of metaphors and associations. Agents work only because they trigger associations in the user. So agent design must include the development of highly efficient triggers for certain desired human responses. In his painting Ceci n'est pas une pipe, René Magritte encapsulated the doubleness of symbols and the complexity of representation. This doubleness can be used to good effect in agent design: a very simple line drawing (of a pipe, for instance) triggers a rich set of associations in the user. However, for the same reasons, these associations, like any interface, are neither universal nor intuitive; they are culturally and contextually specific.

Another curious quality of Petit Mal is that it trains the user: driven by the desire to interact, to play, the user needs no tutorial and no user manual. People readily adopt a certain gait, a certain pace, in order to elicit responses from the robot. Also, unlike most computer-based machines, Petit Mal induces sociality amongst people. When groups interact with Petit Mal, the dynamics of the group are enlivened. Readers from the agent research area might wonder at this point if the systems I describe might be appropriate for various sorts of application domains. I would respond: `probably not', nor is this my goal. I am interested in the modalities of interactive systems as new cultural environments. And I would reiterate my argument that because I am able to experiment without the constraint of total reliability or a pragmatic work-oriented goal, I can open up a wide field of possibilities, some of which may ultimately have application or relevance in pragmatic contexts.

3. Sympathetic Sentience

Sympathetic Sentience is an interactive sound installation which generates complex patterns of rhythmic sound through the phenomenon of 'emergent complexity'. Sympathetic Sentience is an attempt to build a physically real model of emergent complex behavior amongst independent units, which produces constantly changing sound patterns. As with Petit Mal, there was an interest in designing the most technologically minimal solution, in this case, for a system which would demonstrate persuasively `emergent' behavior.

Each of the 12 comparatively simple, identical electronic units is alone capable of only one chirp each minute. Rhythmic and melodic complexity develops through a chain of communication among the units. In the installation, each unit passes its rhythm to the next via infrared signal. Each unit then combines its own rhythm with the data stream it receives, and passes the resulting new rhythm along. Thus the rhythms and timbral variations slowly cycle around the group, increasing in complexity. The system is self-governing: after an initial build-up period, it is never silent nor ever fully saturated.

The 12 units are mounted on the ceiling and walls of a darkened room. The experience of the visitor is of an active sound environment of 12 'channels' in which there is recognisable, but not predictable, patterning. The visitor can interrupt this chain of communication by moving through the space. This results in a suppression of communication activity and hence a reduction of complexity. A long interruption results in complete silencing of the whole group. When the interruption ceases, a new rhythm slowly builds up. The build-up of a new rhythm cycle can take several minutes. The rhythm cycles are never constant but continually in development. To gain a sense of the full complexity of the piece, it is necessary to spend several minutes with the piece in an uninterrupted state.

3.1 Technical Realisation

Several iterations of the work have been built. Sympathetic Sentience One was built entirely in hardware logic (TTL ICs). The basic premise is extremely simple: each unit is receiving, processing and forwarding a continuous stream of data. Each unit 'edits' that stream 'on the fly', adding or omitting an occasional bit. This editing is done in such a way that the 'density' of the sound is 'self-governing'. The critical part of each unit is an exclusive OR gate. On each unit, the signal is received by an IR receiver, demodulated and sent to a shift-register (delay). Emerging from the delay it meets a feed from the on-board oscillator at the exclusive OR gate. The signal emerging from the gate goes to both the IR emitter and the audio amplification circuit. The units communicate in modulated infrared signals using hardware similar to that used in TV remote controls.
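The self-governing density described above can be illustrated in simulation. The sketch below is a loose software analogue of the TTL circuit, not a reconstruction of the actual hardware: the delay length, chirp probability and ring-update scheme are all assumed values for illustration. The key point it demonstrates is the exclusive-OR merge, which lets pulses accumulate and circulate but prevents the stream from ever saturating with 1s, since coinciding pulses cancel.

```python
import random
from collections import deque

class Unit:
    """Software analogue of one Sympathetic Sentience unit: a shift-register
    delay whose output meets a sparse local oscillator at an XOR gate."""
    def __init__(self, delay_len=8, chirp_prob=0.02):
        self.delay = deque([0] * delay_len, maxlen=delay_len)
        self.chirp_prob = chirp_prob  # sparse local 'chirp' source (assumed rate)

    def step(self, incoming_bit):
        delayed = self.delay[0]          # oldest bit leaves the delay line
        self.delay.append(incoming_bit)  # newest bit enters; oldest drops out
        local = 1 if random.random() < self.chirp_prob else 0
        # XOR merge: coinciding pulses cancel, so the circulating stream
        # grows in complexity but can never saturate with 1s.
        return delayed ^ local

# A ring of 12 units, each forwarding its output bit to the next.
random.seed(1)
ring = [Unit() for _ in range(12)]
bits = [0] * 12  # the bit in flight on each link of the ring
for _ in range(2000):
    bits = [ring[i].step(bits[i - 1]) for i in range(12)]

# Fraction of 1s held in the delay lines: active, but never fully saturated.
density = sum(u.delay.count(1) for u in ring) / (12 * 8)
```

Interrupting a link (forcing its bit to 0, as a visitor blocking the infrared beam does) drains the circulating pattern, and only the sparse local chirps can seed a new one, which matches the slow build-up described above.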

While in Sympathetic Sentience One, only the rhythmic patterns were subject to change through the emergent complex behavior, in Sympathetic Sentience Two, other sound characteristics such as pitch and envelope are also subject to gradual change through the emergent complex process. To achieve this, Sympathetic Sentience Two uses small microprocessors (PICs) to replace the hardware logic.

3.2 Emergence

Whether this behavior is deemed to be `emergent' is a matter of previous experience. Most visitors find it reminiscent of the sound of communities of frogs, crickets or cicadas. But to at least one rather dry observer, it was simply a chaotic system of a certain numerical order. To another it was a demonstration of one model of neural propagation. Here emergence would seem to be `in the eye of the beholder'.

The term `emergence' seems to be defined rather loosely, even in scientific texts. In some cases it is applied to the interaction of two (or more) explicit processes which results in a third, `emergent' process that was, however, entirely intended. Similarly, the fitness landscapes of Stuart Kauffman establish a desired end condition (Kauffman, S. 1993). This would seem to be a rather different and narrower sense of emergence than that of the termite community, though attempts to reproduce such behavior in programmable models, such as the stigmergic multi-robot systems of Beckers, Holland and Deneubourg, reduce the complex interactions to deterministic events (Beckers, R., Holland, O. and Deneubourg, J. 1994). The paradigmatic `emergent' systems are the development of the mind/brain and the process of genetic evolution. The difference here is that these systems are open-ended: goal states are not specified.

4. Fugitive

Fugitive is a single-user spatial interactive environment. The arena for interaction is a circular space about 10 m in diameter. A video image travels around the walls in response to the user's position (see Plate 3). This is the simplest level of interactive feedback: the movement of the image, tightly coupled to the movement of the user, is an instantaneous confirmation to the user that the system is indeed interactive. The behavior of the system is evasive; the image, in general, runs away from the user. The user pursues the image. Over time the response of Fugitive becomes increasingly subtle and complex (constrained by the need to be `self-teaching', to continually more or less make sense to the user). A user must spend almost 15 minutes to get through the full seven chapters and elicit the most complex system responses.

The user is totally unencumbered by any tracker hardware; sensing is done via machine vision using infra-red video.[5] The space is lit with 13 infra-red floodlights. User tracking is achieved via a monochrome video camera aimed vertically upwards, looking into a semi-circular mirror suspended in the center of the room. Preliminary vision processing occurs on a PC. Two streams of serial data are output. Simple angular position data is sent to the custom PID motor control board to drive the projector rotation motor. Values for MAE calculations are sent to the MAE2 (Mood Analysis Engine 2) running on an SGI O2 computer. On the basis of this calculation, the VSE (Video Selector Engine) selects, loads and replaces digital video on a frame-by-frame basis. Video data is fed to the video projector.

The user is engaged in a complex interaction with the system. The basic logic of interactive representation in Fugitive amounts to this: user movement is represented by camera movement within the image, and image movement across the wall. The segueing of image content and its physical location is the `expression' of the system. The output of the Mood Analysis Engine controls the flow of digitised video imagery in such a way that no two people walking the same path in the installation will produce the same video sequence, because their bodily dynamics are different. The system responds to the dynamics of user behavior and its transitions over time. Ideally, the system responds not simply to changes in raw acceleration or velocity or position, but to kinesthetically meaningful but computationally complex parameters like directedness, wandering or hesitancy. This is achieved in a multi-stage process of computationally building up the complexity of parameters. The input-level data from the vision system is limited to raw position in each frame. From this, simple values for velocity and acceleration are calculated. A third level of more complex parameters is then constructed: average acceleration over various time frames, variance and so on. Finally, values for various combinations of these parameters are used to determine the entry and exit points for `behaviors' which are matched to video selections.
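This staged build-up, from raw per-frame position to composite kinesthetic parameters, can be sketched as follows. The feature names and formulas here (the frame interval, the window size, and the `directedness' ratio in particular) are illustrative assumptions, not the actual Mood Analysis Engine.

```python
import math

def motion_features(positions, dt=0.04, window=50):
    """Stage-wise construction of kinesthetic parameters from raw
    per-frame (x, y) positions. Names and formulas are hypothetical."""
    # Stage 1: raw position -> velocity (finite differences, dt = frame time)
    vel = [((x2 - x1) / dt, (y2 - y1) / dt)
           for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
    # Stage 2: velocity -> acceleration
    acc = [((u2 - u1) / dt, (v2 - v1) / dt)
           for (u1, v1), (u2, v2) in zip(vel, vel[1:])]
    # Stage 3: windowed statistics over the recent past
    speeds = [math.hypot(u, v) for u, v in vel][-window:]
    mean_speed = sum(speeds) / len(speeds)
    accels = [math.hypot(u, v) for u, v in acc][-window:]
    mean_accel = sum(accels) / len(accels)
    # Stage 4: a composite, kinesthetically meaningful parameter.
    # 'Directedness' = net displacement / path length over the window:
    # near 1.0 for a beeline, near 0.0 for wandering or hesitancy.
    recent = positions[-(window + 1):]
    path = sum(math.dist(a, b) for a, b in zip(recent, recent[1:]))
    net = math.dist(recent[0], recent[-1])
    directedness = net / path if path > 0 else 0.0
    return {"mean_speed": mean_speed,
            "mean_accel": mean_accel,
            "directedness": directedness}
```

A visitor crossing the room in a straight line yields a directedness near 1.0; pacing back and forth over the same ground drives it toward 0.0, even at identical speeds. Thresholds on such composite values could then mark the entry and exit points for behaviors.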

The images do connect with some small degree of semantic significance; there is a minimal hypernarrative, but characterisation and plot structure were explicitly avoided. The chosen imagery is landscape, each `chapter' being a specific location at a specific time of day. A hypertextual structure and a logic of transition link one `chapter' or location with the next. As time progresses, the user propels themselves through seven location chapters. A formal garden sequence is a kind of `vestibule': you go there at the beginning and return there between each chapter. When you reach the center, the projector slowly rotates and shows you a series of archways. You choose to set out from the center (metaphorically through one of the archways) and you make the transition into a new chapter. This is the only case in which particular imagery is connected with a specific location in the room. When you have explored the chapter adequately (as determined by the system), you transition back into the `garden'. All other video material is located `temporally' and triggered dynamically rather than positionally. This reinforces the continuity of body and time, against the continuity of an illusory virtual space. The output of the system is completely free of textual, iconic, or mouse/buttons/menus type interaction.

In building Fugitive, my concern was with the aesthetic of spatial interactivity, a field which I regard as minimally researched. Watching spatial interactives over several years, I was frustrated by the simplistic nature of interaction schemes based on raw instantaneous position and simple state-machine logic. I wanted to produce a mode of interactivity which did not require the user to submit to a static Cartesian division of space (or simply the groundplane). I wanted to make an interactive space in which the user could interact with a system which `spoke the language of the body', and which critiqued VR and HCI paradigms by insisting on the centrality of embodiment. I wanted to develop a set of parameters which could be computationally implemented and which truly reflected the kinesthetic feeling of the user, their sense of their embodiment over time. Fugitive is an attempt to build an entirely bodily interactive system which interprets the ongoing dynamics of the user's body through time as an expression of mood. I called this part of the code (somewhat tongue-in-cheek) the Mood Analysis Engine.

4.1 Immersion and Embodiment

One of my `covert' goals was to critique the rhetoric of immersion in VR by building a system which continuously offers and collapses such an illusion. The last decade of rhetoric of virtualisation probably leads users to expect or hope for some kind of immersion in a coherent virtual world. Fugitive explicitly contradicts this expectation by setting up temporary periods in which the illusion of immersion is believable, and then breaking the illusion. If the user moves in a circumferential way, the illusion of a virtual window on a larger world is created. As you move, say, to the left around the perimeter, you will see a pan as a moving `virtual window'. As you continue it will segue into another pan. If you reverse your direction, the same pan will occur in reverse, but when you get to the beginning of pan2, you segue to pan3, not pan1. In this way the illusion of a virtual world seen through a virtual window is collapsed. [6]

In conventional systems, the illusion of immersion is positional, the absolute position of the tracker (etc) corresponds to a specific location in the virtual world. Such a virtual world, a machinic system, maintains a rather repressive continuity: the continuity of the illusory architectural space. In Fugitive, the continuity of the system is a phenomenological one focused on the continuity of embodiment, not the instrumental one of a consistent virtual space in which the body is reduced to little but a pointer. Fugitive is not positional, the primary and structuring continuity is the deeply subjective continuity of embodied being through time.

4.2 Embodied looking: imagery as the voice of the agent

Fugitive is about the act of looking, embodied looking, and about the metaphorisation of looking via video. The title `Fugitive' emphasises the evanescence of the experience of embodied looking. The attempt is, rather perversely, to avoid eliciting the kind of externalised interest in imagery and subject matter which one has when looking at a painting. This is because the goal is always to fold the attention of the user back onto their own sense of embodiment and the functioning of the system in relation to their behavior. Fugitive is not primarily a device for looking at pictures (or video); it is not a pictorial hyper-narrative. It is a behaving system in which the video stream is the `voice' of the system.

I want the user to see `through' the images, not to look only at the `surface' of the images. Strictly speaking, this meant I should choose imagery that was inherently uninteresting. The exercise is of course fraught with paradox, especially for the scopically-fixated viewer. The user is presented with a darkened circular space the only changing feature of which is a changing image, and yet the user is encouraged to understand the image primarily as an indicator of the response of an otherwise invisible system.

4.3 The Auto-pedagogic Interface

An interactive work is a machine, and one must learn to operate a machine. But visitors to artworks are seldom previously trained. Although prior training has become a part of theme park amusements, nobody wants to do a tutorial or read a manual before they experience an artwork. Nor do I find it acceptable for the user to have to don `scuba gear' (to borrow Krueger's term) before entering the work. A user should be able to enter unencumbered by special clothing or hardware. So a central issue in interactive art is managing the learning curve of the user. One solution is to make a work so simple in the dynamics of interaction that it is easy to understand but immediately boring. Alternatively, works can be so complex that the average user cannot discern the way in which they are controlling or affecting the events; it appears random. In avoiding these two undesirables, the artist must either choose a well-known paradigm (such as monitor-mouse-buttons or automobile controls) or, if one desires an interface whose modalities are novel, the user must be trained or the system must teach the user.

I cannot endorse the concept of the `intuitive' interface because it implies a naive universalism and an ignorance of cultural specificity, aspects of which I noted in my discussions of the `techniques of the observer' and colonial painting. In Petit Mal I discovered that if the user is propelled by a desire to interact, learning will occur in an unimpeded and transparent way. In Fugitive, I attempted to formally produce this effect in a much more complex system. Such an `auto-pedagogic' interface must present itself as facile to a new user, but progressively and imperceptibly increase in complexity as the familiarity of the user increases. Transitions to higher complexity should be driven by indicators of the behavior of the user.

In the current implementation of Fugitive, in order to ensure that the 'interface' be 'auto-pedagogic', the system exhibits only two behaviors at the beginning. Others are introduced along the way, and control of transitions becomes more complex. In future implementations, system behavior will be more `intelligent', as an agent which learns and expresses certain `desires'.
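One minimal way to realise such gating might look like the sketch below. The behavior names, the thresholds, and the use of a directedness-style indicator are all invented for illustration; they stand in for whatever indicators of familiarity the system actually tracks.

```python
def available_behaviors(elapsed_s, directedness):
    """Auto-pedagogic gating (hypothetical): open with two behaviors and
    unlock others only as indicators suggest the user has learned the
    current repertoire. Names and thresholds are invented."""
    behaviors = ["evade", "drift"]                # the two opening behaviors
    if elapsed_s > 120:                           # sustained engagement
        behaviors.append("approach")
    if elapsed_s > 300 and directedness > 0.7:    # purposeful movement
        behaviors.append("complex_transitions")
    return behaviors
```

A new visitor thus meets only the simplest repertoire, while a visitor several minutes in, moving purposefully, is imperceptibly handed the full set; the transitions are driven by the user's own behavior rather than by a fixed timetable alone.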

4.4 Poetics of interaction

The degree to which the changes in output are interpreted by the user as related to their behavior is a key measure of the success of any interactive system. Ideally, changes in the behavior of the system will elicit changes in the user's behavior, and so an ongoing `conversation' rather than a chain of `Pavlovian' responses will emerge. An artwork is by definition not literal or didactic; it is concerned with poetic and metaphoric associations. So an interactive artwork should not simply tell you something like `you have mail'. Nor would it be interesting if Fugitive told you: `you just moved two paces left'. The goal is to establish a metaphorical interactive order where the user's movement `corresponds' to some permutation of the output. It is all too easy to produce a system whose behavior the user cannot distinguish from random behavior. The designer must successfully communicate that the user is having a controlling effect on the system and at the same time engage the ongoing interest of the user with enough mystery. One hopes for some poetic richness which is clear enough to orient the user but unclear enough to allow the generation of mystery and inquisitiveness. The system must engage the user; the user must desire to continue to explore the work. This is a basic requirement of any artwork.

4.5 The paradox of interaction

Representation of the response of the system back to the user is key to any interaction. Not only must one reduce human behavior to algorithmic functions, but one must be able to present to the user a response which can be meaningfully understood as relating to their current behavior. One can collect enormous sets of subtle data, and interpret it in complex ways, but if it cannot be represented back to the user in an understandable way, it is ultimately useless.

Having collected complex data with multiple variables, how do you build a rule-based system which establishes such fluid correspondences when the database is a finite body of fixed video clips? The impossibility of this task was resoundingly brought home to me while making Fugitive. In the case of Fugitive, the sophistication of the response of the system had to be scaled back to a point where it could be represented within the finite video material, the limitations of the rule-based system which organises those clips into classes, and the range of likely or possible behaviors in that circular geometry.

But images are complex things. Many types of information can be extracted from a single still image, let alone a moving image sequence. A major difficulty in the interactive scheme of Fugitive is that the user must determine which aspects of the presented images signify the expression of the system. Is the presence of red significant, or of water, or of a tree? Is it a question of the direction of movement of various objects in the image, or of the quality of the light? In Fugitive, subject matter, color and the like do not carry meaning about the state of the system. The aspect of the image which is the `voice' of the system is camera movement.

5. Conclusion

An artwork, in my analysis, does not didactically supply information; it invites the public to consider a range of possibilities and encourages independent thinking. Building an interactive artwork therefore requires more subtle interaction design than does a system whose output is entirely pragmatic, such as an automated teller machine. My work over the past decade has focused upon: the aesthetic design of the user experience, given the diversity of cultural backgrounds and thus of possible interpretations; the development of embodied interaction with systems in which the visitor is unencumbered by tracking hardware; and the development of paradigms of interaction which go beyond state-machine models to embrace and exploit Alife, emergent and social agent models. There is some divergence in current definitions of `autonomous agents', and more in the term `socially intelligent agents'. While the works I have discussed are only marginally agents in the sense of "self-constructing, conscious, autonomous agents capable of open-ended learning", they do demonstrate a rich and complex interaction with the user.

I have emphasised the relevance of artistic methodologies to the design of social agent systems. Typically, artistic practice embraces an open-ended experimental process which allows for expansive, inventive thinking. Artistic practice emphasises the cultural specificity of any representational act, acknowledging that meaning is established in the cultural environment of the interaction, not in the lab. It emphasises the embodied experience of the user. And it emphasises the critical importance of the `interface', because the interface of the agent, like an artwork, is where communication finally succeeds or fails.

About the MFA Program

The MFA Program in Intermedia at the University of Maine has been developed over the last five years and has accepted its first full cadre of students for the Fall of 2008. For more information see our program web site at http://www.intermediamfa.org or email Owen F. Smith at ofsmith@maine.edu