Friday, August 20, 2010

dispositif / mise en scène III

As my background is in theatre and dance, my thinking is largely influenced by stage concepts, and by how the "stage" is created and comes alive in the mise en scène of a performance or an installation.

The digital dispositif - and here we think of the comprehensive environment for an interactive/real-time performance, a participatory installation, or a mixed reality installation - offers an expanded notion of the stage. Due to the nature of the media and data flows involved (sound, video, graphics, etc.), the performance range of the "actors" - of the cast - extends to all elements and combinations of elements that are capturable and networkable. Given the networked environment created, along with the recording/capturing technologies inserted into it, the performance of live media is re-presentable or transferable to multiple frameworks: screenings, live performance, installation, television, online publication, telematics/telepresence, and various forms of digital dissemination (DVD, CD, tapes, mp3, etc.).

During our discussions, Suzon brought the notion of "platform" into the round, and I would invite her to elaborate on her ideas here.

What came to the foreground on Day 4 (affecting the start-up of Day 5) was a certain methodological restriction or limitation arising from the tracking stage (platform) as the primary device. From the methodological perspective, the work done on the programming of a patch (using a tracking camera and projecting the image down onto the white dance floor) set the scene, so to speak, and the patch designer then asked some of the performers to create an improvisation inside the nervous environment (I am referring to the historical precedent of David Rokeby naming his first interactional sensory sound environment "Very Nervous System"). On Thursday evening this environment was created by Wendy Chu.

Here is an image from the performance on this platform, which ran on an Isadora patch using several actors (Eyes++, Blob Decoder, Envelope Generator, Mosaic, etc.) that take the tracking information to disturb/manipulate the dotted grid pattern - an abstract pattern - that is the base image of the projection. When human actors enter the platform, the graphic projection on the floor surface becomes animated.
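Since Isadora patches are built visually rather than written as code, the disturbance logic is not shown in the original. Purely as an illustrative sketch - all names, parameters and falloff choices here are hypothetical, not Isadora's actual actors - the way a tracked blob might animate a dotted grid can be modeled like this:

```python
# Hypothetical sketch of the grid-disturbance idea: a dotted grid whose
# points are pushed away from a tracked "blob" (performer) centroid,
# with the push fading to zero at a given radius. This stands in for
# what the Isadora actors (Eyes++, Blob Decoder, etc.) do visually.

def perturb_grid(grid, blob, radius=2.0, strength=0.5):
    """Displace each grid dot away from the blob centroid.

    grid -- list of (x, y) dot positions
    blob -- (x, y) centroid reported by the tracker
    """
    bx, by = blob
    out = []
    for (x, y) in grid:
        dx, dy = x - bx, y - by
        dist = (dx * dx + dy * dy) ** 0.5
        if 0 < dist < radius:
            # closer dots are pushed harder, fading out at the radius
            f = strength * (1 - dist / radius)
            out.append((x + dx * f, y + dy * f))
        else:
            out.append((x, y))  # dots outside the radius stay still
    return out

# A small 4x4 grid, with a performer tracked near its center:
grid = [(float(x), float(y)) for x in range(4) for y in range(4)]
moved = perturb_grid(grid, blob=(1.5, 1.5))
```

Re-running `perturb_grid` on every new tracking frame would yield the animated floor projection described above: the grid settles when the platform is empty and warps when performers enter.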

(dotted landscape with music performer Victor Zappi in the center and Julia Alsarraff, on viola, on the left; James Cunningham is "upstage" and as yet invisible, as the digital projection was the only light source, with the exception of a floor lamp stage right, which is visible here. Victor is operating the illuminated sound box in front of him. To be more precise, the illuminated box is a monome, a real-time step sequencer made up of a grid of backlit buttons that can be utilized for a number of applications, the most common of which is music performance. It is used to trigger and retrigger samples or sample sets, but can also be used as a generative instrument that runs self-effecting or self-sufficient patterns, or to control effects and envelopes.)
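The step-sequencer behaviour just described can be sketched abstractly. This is a minimal illustrative model only - the class and method names are invented, and the real monome communicates with sound software rather than triggering samples itself:

```python
# Minimal model of a grid step sequencer: a grid of on/off buttons,
# stepped through column by column in a loop. Each row stands for a
# sample; on every tick, the rows lit in the current column "fire".

class StepSequencer:
    def __init__(self, rows=8, steps=8):
        self.grid = [[False] * steps for _ in range(rows)]
        self.steps = steps
        self.pos = 0  # current step (column)

    def toggle(self, row, step):
        """Press a button: turn a cell on or off."""
        self.grid[row][step] = not self.grid[row][step]

    def tick(self):
        """Advance one step; return the rows (samples) to trigger."""
        fired = [r for r, row in enumerate(self.grid) if row[self.pos]]
        self.pos = (self.pos + 1) % self.steps  # wrap: the loop repeats
        return fired

seq = StepSequencer()
seq.toggle(0, 0)    # e.g. a kick sample on the first step
seq.toggle(3, 0)    # a second sample on the same step
first = seq.tick()  # both rows fire on step 0
```

The wrap-around in `tick` is what makes the instrument inherently loop-based, which connects to the later discussion of loops and repetition in the lab.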

Now, in computer science the term platform is used rather specifically in regard to computation: to platform theory, operating systems, tools, resources, principles and concepts related to coding. In new media arts and social network contexts, the term refers to online or networked (and collaboratively generated) discursive platforms which can draw from a large range of fields of knowledge. For example, the courses in curating offered at the University of Fine Arts Zurich describe their curatorial philosophy as follows:

The program focuses less on the ‘genius concept’ of the exhibition planner as individual author – a highly controversial topic since the 1990s – than on cooperative, interdisciplinary working methods, as employed, for example, in film productions or non-government organizations. Exhibition-making / curating means the creation of innovative structures for the presentation of cultural artefacts through interdisciplinary collaboration. In this field, art, digital media, design, and architecture intermesh in new ways. The manner of working employed by curators, artists, architects, designers, museum educationalists and writers has become increasingly unified, bringing about new forms of mediation, lounges, archives, reading rooms and new virtual forums – and with them new means of access and forms of interpretation. At the same time, we are witnessing a shift in the organization of work processes throughout society. Individual areas of action are merging on new meta-levels, namely those of networks and know-how transfer.

This conceptual overview – focussed on curating here but relating equally strongly to artistic creation and production and experimental research in the arts/sciences – bears directly on our lab process and discussions and the collaborative and processual aspects of the work and its manifestations. It has become clear that in our reflections we must ponder and address the changes in the processes of production, if we seek to position our work for specific audiences, or for audiences at all.

But since our work experiments take place in the studio, there is a primarily physical (and site-specific) architecture involved; the manifestations that happen here are at the same time recorded, discursively reflected, photographed/filmed and blogged/diffused. Digital components and patches can be crossed and exchanged, and this inter-connected method has been called cross-patching by Anne Nigten, the director of the "Patchingzone," a praxis laboratory where Master's students, PhD students and professionals work together on meaningful creative content (prior to her current position Nigten was the manager of V2_Lab, the aRt&D department of V2, and she has lectured widely on research and development in the interdisciplinary field from an art perspective).

Our lab, today on Day 5, truly resembles a kind of patching zone where 20-odd computer screens and laptops are illuminated in the dark while some members are repositioning screens on the tracking floor or moving cameras about. What is perhaps needed now is a reflection on how cross-patched platforms enable live media performance to find sustained or re-sustainable vehicles for content, for aesthetic experiences, for theatrical and dramatic action and storytelling, for dance and music, and for multimedia writing - the poetic as well as the subtly understated, rougher shades of the sacred.

IV. Real-time

The conceptual seminar on Day 5 was focussed on time/temporality in live media performance. The group began to look carefully at the meaning of the term real-time, and while initially more subjective and philosophical concepts were brought up (relating to the human experience of time, the past-present-future continuum, memory), the debate then moved to the more technologically inflected usage of the term, often applied to real-time synthesis (in music/sound processing) or real-time interactive interfaces (in computational performance or interactive design). From this initial discussion of real-time and delay or latencies (how computers process data input), which is a technical issue often related to bandwidth, we also moved to differences in sensory perception (visual, auditory, tactile, olfactory, etc.) and how they might relate to our knowledge or experience of time. After I asked a question about the "time" (durational experience) of the "loop" and how we perceive musical loops vis-à-vis various kinds of image loops, Marlon Barrios Solano, who is visiting the lab, suggested at this point that we look at calendars as another metaphor for the construction (the arbitrary categorization) of temporality in our civilization. When Marlon argued that the calendar, with its days and months, generates a concept of the loop and of repetition, not everyone in the group agreed, and Tommy deFrantz pointed to specific corporeal differences (interestingly, Tommy also kept insisting in our discussions on not forgetting or excluding the social and the sexual as important dimensions of the movement/technology embodiment or entanglement experience). He responded by suggesting that human cognition and emotion work on complex levels that are not reducible to digital or mathematical logic, and that machine vision, as Mark suggested, can never be as intelligent as human perceptual systems in action at all times.

The time of intelligence, the temporal nature of analog performance and digital media (Victoria mentioned how in her early work she made music with linear video editing of tapes, whereas now she can edit in non-linear modes through digital software that gives her a much greater range of improvisational possibilities), and the experience of small loop samples (repeating QuickTime movies running in a patch) became the subject of a very engaged debate. I was hoping, however, that we could actually conduct a choreographic rehearsal experiment, working with actors and images, to figure out in a visceral way how time relations and time properties are connected on stage, and how we can carefully examine the particular nature/modes of interactive images (what are "interactive images"?) - and here I was driving at the differentiation between abstract graphics and narrative, representational images.

The method for the exercise used three spatial fields (lit by spotlights) and an irregular diagonal across the performance space, with actors entering into the light. The musician plays one sustained note, to which the dancer on the left (in the picture) responds by imagining movement connecting the furthest point of his right hand to his left foot, while in the middle space a couple enters to re-enact from memory the actions they carried out in Ian's memory table installation. The viola player is captured by a camera, and a close-up of her arm movement is projected in real time in a curved motion (from right screen to middle screen to left screen). The exercise lasted three minutes and allowed actors (and audience) to compare the time properties and spatial relations of each action. The live feed (camera) input/output action was fed through a filter that created a small time lapse.
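The specific filter used for the small time lapse is not named above; one common mechanism for such an effect is a frame buffer that outputs each incoming frame a fixed number of frames late. A minimal sketch, under that assumption (the class name and delay value are illustrative):

```python
from collections import deque

# Sketch of a fixed time-lapse on a live feed: frames go into a queue
# and come out delay_frames later, so the projection trails the live
# action by a constant lag.

class FrameDelay:
    def __init__(self, delay_frames):
        self.delay = delay_frames
        self.buffer = deque()

    def process(self, frame):
        """Feed one live frame in; get the delayed frame out
        (or None while the buffer is still filling)."""
        self.buffer.append(frame)
        if len(self.buffer) > self.delay:
            return self.buffer.popleft()
        return None  # still filling: nothing to project yet

delay = FrameDelay(delay_frames=2)
outputs = [delay.process(f) for f in ["f0", "f1", "f2", "f3"]]
# outputs: [None, None, "f0", "f1"]
```

At a typical video rate the perceived lag is simply `delay_frames` divided by the frame rate, so the size of the buffer directly sets the temporal gap the audience sees between the performer and her projected image.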

The exercise was primarily intended to set up a theatrical scenario that allows for dramaturgical redevelopment, and the rehearsal was immediately opened up to the group; Tommy deFrantz took on directing the second version. There was narrative and dramatic potential in the scene, even if the musical and digital (video) relationships were as yet completely undeveloped. But this was what I wanted to propose: to start out constructing scenic action material prior to patching/cross-patching, so that we could ponder the question of what kind of live images (media) might enter into a meaningful relationship with the human actors and their expressive, non-expressive or gestural and emotional affect on the situation (and the space) as a whole.

It might be illuminating here to reflect briefly on William Forsythe's comments on his way of directing dancers in his company (at the time he produced "Improvisation Technologies" in 1999).

So I began to imagine lines in space that could be bent, or tossed, or otherwise distorted. By moving from a point to a line to a plane to a volume, I was able to visualize a geometric space composed of points that were vastly interconnected. As these points were all contained within the dancer's body, there was really no transition necessary, only a series of "foldings" and "unfoldings" that produced an infinite number of movements and positions. From these, we started making catalogues of what the body could do. And for every new piece that we choreographed, we would develop a new series of procedures. Some choreographers create dance from emotional impulses, while others, like Balanchine, work from a strictly musical standpoint. My own dances reflect the body's experiences in space, which I try to connect through algorithms. So there's this fascinating overlap with computer programming.

In the next section, I will try to depart from this Forsythe commentary and look at specific differences in contemporary dance between what Forsythe calls "experience in space" and what we, on the morning of Day 6, begin to see as a proper proprioceptive challenge of performing with an augmented reality environment or platform which is nervous, dynamic/responsive and generatively alive.

1 comment:

Talking about vision: in our brain, areas V3, V4 and V5 have the specific task of analyzing the visual input to find simple cues of direction, angle, movement and shape. More complicated algorithms are subsequently implemented, but the very first step is a real pattern recognition task. And as these patterns repeat over and over in a small amount of time [e.g. hundreds of simple shapes combined into the visual scene], I dare to call them "loops".

A similar mechanism also works for hearing and all the other senses, which makes me think that the very base of our physical experience tends to be grounded in innate repetitive info atoms - loops.

But that's not all. I agree with Tommy: cognition and emotion are much more complex.

Accordingly, I believe that loops have the specific power to rapidly introduce the audience into a performance, into a message, as a universal language. But to really express the dramatic stream, we have to break loops, to make conscious variations of the path along which we are leading the audience_

I also believe that the nature of the dispositif provides us with a huge number of modes and transformations of the path_