A teardown of the device, explaining what all the parts do (also includes some very interesting reader comments)

From Digital to Bio-Chemical Computation
by melanieplageman, Mon, 07 Dec 2009
https://hayles06.wordpress.com/2009/12/07/from-digital-to-bio-chemical-computation/

In the context of this class, we have been focusing on the relations between literature and digital technologies. As my own work deals with bio- or living architecture, I thought I would post on digital architecture in order to articulate some relations it might share with living architecture. In fact, it is possible to make a direct connection between the two. My readings on digital architecture (though not exhaustive) have led me to understand that a significant amount of the vocabulary used to describe the new possibilities offered in the digital realm is drawn from the biological: variation, evolution, adaptation, mutation, etc. Various recent publications also foreground the relations between the digital and the biological: Greg Lynn's Folds, Bodies and Blobs, Lars Spuybroek's The Architecture of Variation, and the work of Brian Massumi, who has published extensively on architecture. Here I will focus on Spuybroek and Massumi.

How to trigger change?

Digital Architecture and Dynamic Forms

“Deleuze and Guattari, following Bergson, suggest that the virtual is the mode of reality implicated in the emergence of new potentials. In other words, its reality is the reality of change: the event. (…) Technology, while not constituting change in itself, can be a powerful conditioner of change, depending on its composition or how it integrates into the built environment.”1

In the context of my work, as I explained in my previous post, I am interested in the potential for change: the potential for technology to facilitate the reconfiguration of our social ecology of practices. Here I would like to see what changes digital architecture can trigger in the built environment. Lars Spuybroek's most recent publication addresses that question by looking at the ways in which architects today are "resetting the tools for design and creating a language that integrate[s] variation and complexity."

The book he edited on the topic, The Architecture of Variation, contains an interview with the architect Ali Rahim. At the beginning of the interview, Rahim argues that one should understand the use of digital technologies not in relation to the possibility of increasing efficiency (which for him is the way they have mainly been foregrounded in architecture) but rather as a means to (1) "further design innovation and producing proliferating cultural effects" and (2) increase the potential for collaboration and cross-fertilization between different fields (for example, as I discuss here, the cross-fertilization between the digital and the biological).

According to him, what digital technologies have brought to architectural practice is the possibility of integrating real-time feedback from the environment into the design process. In this perspective, he says, digital architecture reverses the traditional design process: instead of integrating a pre-conceived design into the environment, it integrates the environment into the design process. As Brian Massumi explains, "this is because the software put into use [are] evolutionary rather than representational"2. "Rather than using traditional CAD software, where basic geometrical forms are reproduced and then modified or rearranged, architects employed special effects software where you start by programming a set of modifications before you have an object to modify — a potential modification"3. This way of doing architecture negates the linear cause/effect model and insists instead on feedback loops. Hence, what digital technologies bring to architecture is the potential for generating a reflexive and symmetrical dialogue between the built form and the environment: for considering them as co-operating and co-evolving. Accordingly, it would be correct to say that digital technologies insist on the processual dimensions of form generation. In this perspective, the digital holds the potential to negate hylomorphism (the imposition of a form over matter) and to insist instead on "formation," i.e. on the form's processual dimensions, on the dynamism of its generation, and on its potential to vary over time.
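To make the reversal concrete: the passage above describes design software that encodes a set of modifications before there is any object to modify, so that environmental feedback shapes the form over time. The sketch below is a purely hypothetical toy illustration of that idea, not any actual architectural software; every name (deform, environment_stream) and the 0.5 feedback weight are invented for the example.

```python
import random

def deform(params, environment):
    """One programmed modification: nudge each parameter toward the
    environmental reading paired with it. The rule exists before any
    final form does -- a 'potential modification'."""
    return [p + 0.5 * (e - p) for p, e in zip(params, environment)]

def generate_form(seed_params, environment_stream, steps=10):
    """The 'form' is the trace of a process: a seed repeatedly modified
    by environmental feedback, rather than a pre-conceived object
    imposed on its site."""
    params = list(seed_params)
    for step in range(steps):
        environment = environment_stream(step)  # real-time feedback
        params = deform(params, environment)
    return params

# Hypothetical environment: readings in [0, 1] that vary per step.
def environment_stream(step):
    random.seed(step)  # deterministic here, purely for illustration
    return [random.uniform(0.0, 1.0) for _ in range(3)]

final_form = generate_form([0.0, 0.0, 0.0], environment_stream)
print(final_form)
```

Since each update is a convex combination of the previous parameters and the environmental readings, the resulting "form" always stays within the range of its environment, which is one (very simplified) way to read "integrating the environment into the design process."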

In Spuybroek's edited book The Architecture of Variation, Manuel De Landa argues for something similar when he draws a distinction between properties and capacities. For him, a capacity is relational: it is in fact a capacity to affect and to be affected. Following Rahim and De Landa, digital technologies foreground the capacity for an affective design in which the form affects the environment but can now also be affected by its environment (and this at the level of design, not only once the form is physically built).

The key point is that digital technologies offer processes of evolution, variability, and connectivity to design. The abstraction made possible by digital technologies is what makes these processes possible. As Massumi argues, "architecture has always involved, as an integral part of its creative process, the production of abstract spaces from which concretizable forms are drawn." Now, with digital technologies, "the abstract space of design is populated by virtual forces of deformation" and, I insist, of transformation. He adds, though, that while "[t]he virtual is a mode of abstraction, the converse is not true. Abstraction is not necessarily virtual." For architects, the question concerns the ways in which the abstraction process made possible by digital technologies (the virtual forces of deformation and transformation it entails) can operate on the level of the virtual, that is, how the abstraction process can trigger deformation and transformation.

Smart/Responsive Buildings

I think that what digital technologies have realized so far in relation to "living architecture" is the creation of responsive buildings. Of course, they have also helped with the production of buildings that exhibit living qualities (mainly on the level of visual form). However, my interest lies not in buildings that look like living entities (visual form) but in buildings that behave like living entities. Responsive buildings are an example. Even though they don't necessarily behave like living entities, for me they embody the first step towards the production of a real living architecture. I will give an example showing that these buildings can adapt to their environment but that, unlike living entities, they don't have the capacity to evolve, to change over time.

The new Arts and Engineering building at Concordia University in Montreal is considered a smart or responsive building. One of the problems I have, especially with the discourses surrounding smart or intelligent architecture, is that they are mainly framed in terms of efficiency, something Rahim pointed out in his interview. In this efficiency perspective, intelligent/smart buildings seem to be associated with two main ideas: (1) environmental friendliness and (2) sustainability. Concordia created the building in this perspective. For instance, they equipped the building with motion sensors tied to its lighting: when there is no movement, the lights shut down. This might sound smart, but it seems the engineers did not integrate the social environment's feedback into the design. They did not take into account the fact that scholars sometimes only read in their offices and that, as a consequence, their movements are fairly limited. You sometimes see scholars waving their hands above their heads to get the light back in their office! It seems that responsive environments deal with the question of the "information required to get a complex response"4 and that the architects/engineers of this building did not integrate sufficient information (virtual forces) into their design process.

Rahim argues that "designing with the virtual abolishes fixed types and programs. Rather than housing a static, predetermined arrangement of functions within an established representational envelope, formations develop uses in response to their occupants and context. These uses are connected to the form directly rather than through representation." Yet this still seems to remain at the discursive level: buildings today are still pre-programmed for a variety of uses, and they don't necessarily hold the potential to evolve once built, that is, to catalyze new usages. The Concordia building is one example.

Even though I agree with Rahim that we should not think design only on the level of efficiency, design must have an objective. It seems that most discourses dealing with the integration of living materials and processes frame their goals in relation to environmental development. In this perspective, Rachel Armstrong argues that projects like the Concordia Arts and Engineering building deal with the "conservation of energy: alternative energy sources, efficiency and recycling which buy us time by reducing the production of greenhouse gases but do not combat the fundamental causes of climate change." According to her, "these designs can be impressive in their complexity and metaphorical sentiment," but they only help us gain some time; fundamentally, she says, "they change nothing." I think it would be correct to say that they present initial steps towards the emergence of a real reflexive and symmetrical relation between buildings and the environment without fully actualizing this relation.

Beyond Gravity

Here I would like to discuss the work of the Polish architect Zbigniew Oksiuta. Oksiuta creates what he calls biological habitats: spaces with dynamic membranes. He argues that the construction of a spatial boundary between an inside and its environment is the most elementary task of architecture. He adds that "naturally, separating oneself from the environment, creating barrier and walls, is also a central human activity"5. His creations speculate on systems/environments whose dividing border between inside and outside is not a foreign body but an immanent component. Oksiuta creates spaces of dynamic liminality, transformative instances, uncertain spaces, spaces that act as associated milieus, as milieus of association. The link between his practice and digital architecture is that he grows his dynamic membranes under water in order to approximate micro-gravity conditions. In fact, many forms generated by computer-based design cannot be built in the physical environment because they don't respect gravity's constraints. Consequently, Oksiuta's creations provide a term of passage: they can be seen as a current model (an extension or a prolongation) of what is being done in the digital realm. In addition, his practice aims more towards a living architecture, as it is a form of liquid architecture, and science has shown that life requires liquid to emerge. Following my readings on vital individuation, it also seems that in order to be alive, a system must have a membrane, but also a space of interiority. I think the problem with responsive buildings is that they only succeed at generating a membrane, one unfortunately freed from the space of interiority where the potential for evolution actually resides.

From Binary to Chemical Computation

“The architect’s job is in a sense catalytic, no longer orchestrating. He or she is more a chemist (or perhaps alchemist) staging catalytic reactions in an abstract matter of variation, than a maestro pulling fully formed rabbits of genius from thin air with a masterful wave of the drafting pencil.”6

As I explained in my previous post, I recently developed an interest in protocell architecture. Protocells have not been fully designed in laboratories so far, as nobody has been able to ensure their division/reproduction successfully. However, the use of digital technologies is important in that field, as scientists use simulation processes in their experiments. Computation is related to evolvability and programmability and can be extremely useful for the study of biological entities. It seems, though, that the Turing machine and its related binary or digital code might not be of best use in the field of synthetic biology (the field in which scientists are concerned with the design of protocells). Rachel Armstrong notes that Ikegami argued that the only semantics we have so far is that of the binary code, and that it would be necessary, in the long run, to develop a "chemical computation" based on shape-shape relations rather than binary ones (which would mean the development of a shape-grammar). She says, following Ikegami, that "the semantics of chemical computing pose a significant obstacle to interpreting the results of chemical interactions since our current understanding of computer code is based on binary systems that are not expressed in more complex, analog systems like chemical reactions."7 In this perspective, she adds that

“Material computation is performed by molecules that are able to make decisions about their environment and which can respond to local cues in complex ways that result in a change of their fundamental form, function or appearance. Material computers are responsive to their environment and make decisions that result in physical outcomes like changes in form, growth and differentiation. These have already been demonstrated to take place in non-biological systems as early as the latter half of the 19th Century when life-like behaviours were reported from nonliving systems that were not based on cells or even cell extracts. There are many differences between material and digital computers but most arise as a consequence of the information in material computers being embodied in a molecular scale, physical system that possesses both mass and volume. The main advantage of material computers over digital computers is that these systems exhibit almost unlimited parallel processing power, which enables huge amounts of information to be processed and allows for multiple solutions to be found for any given problem. However, material computers are also limited by their physical embodiment, which slows down their huge powers of processing and contrasts dramatically with the instantaneous, massless computation that is characteristic of the digital domain.”8

This might be a very interesting analysis to produce on the semantic level, i.e. to look at the convergences and divergences between digital and chemical computing and to question how they could mutually influence each other. I think, though, that we would first need a model to refer to that would help us understand how chemical computing differs from the Turing machine.
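One minimal digital model that makes the contrast with the Turing machine tangible is a cellular automaton: every cell updates simultaneously from purely local cues, loosely analogous to Armstrong's molecules "responding to local cues in complex ways." It remains binary and synchronous, so it is at best a caricature of chemical computing, not a model of it; the rule number and grid size below are arbitrary illustration choices.

```python
def step(cells, rule=110):
    """Update every cell in parallel from its local neighborhood,
    rather than from a central program counter stepping one symbol
    at a time as a Turing machine does."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[(i - 1) % n]   # wrap around at the edges
        me = cells[i]
        right = cells[(i + 1) % n]
        idx = (left << 2) | (me << 1) | right  # neighborhood as a 3-bit index
        out.append((rule >> idx) & 1)          # look up the local decision
    return out

# A single "on" site; each generation is one synchronous parallel update.
cells = [0] * 31
cells[15] = 1
history = [cells]
for _ in range(10):
    cells = step(cells)
    history.append(cells)

for row in history:
    print("".join("#" if c else "." for c in row))
```

All 31 cells "decide" at once in each generation, which is the (very limited) sense in which such systems exhibit the parallelism Armstrong attributes to material computers.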

Lastly, I think that digital technologies offer very interesting tools for reflecting upon bioarchitecture, but that the potential for generating a real bioarchitecture might in fact reside in pushing the limits of the digital realm to their extreme. Digital technologies can help us think through how a bioarchitecture could emerge, but it might not be the digital realm that ensures its actualization.

Some Thoughts on Interactivity
by marieperi, Mon, 07 Dec 2009
https://hayles06.wordpress.com/2009/12/07/some-thoughts-on-interactivity/

In The Language of New Media, Lev Manovich explores the topic of interactivity with new media objects. I have attempted to summarize his claims concerning this topic:

Contrary to her impression upon use of the new media object, the user is not a co-author; she is instead forced to follow a predetermined path, stripping her of agency and, eventually, her ability to think for herself. The user is allowed to select chunks of content, offering her the illusion of interactivity. The only interactivity that is occurring here, however, is that of the utilization of the user’s cognitive output as she uses the structure of the program as a program input. The user, through this structured selection, sees the program as fully customized and reflective of her personal preferences and ideas, assuring her of her uniqueness, and thereby supplanting her need for personal associations with hyperlink associations generated by the program that are then accepted by the user as an externalization of her own thought process. The user then learns to prioritize selection within the context of any program over that of personal evaluation, and the line between information access and psychological engagement is blurred, making both navigation and immersion difficult and leaving the user dependent on the program for navigation as well as for providing the path to a finished creative product that once would have been the result of her own psychological engagement.

My purpose is to explore and also to contest these ideas in light of personal experience with and knowledge of a few current new media objects.

A new type of interaction with new media objects has begun, with applications that perform a task so specific that the act of simply activating them is a declaration of intent. Instead of searching and directing our own navigation on an all-purpose search engine, we search a library of applications in order to find one that will navigate these types of queries for us. We navigate the world of many mini-navigations. Instead of trying different combinations of keywords in search of the phrase that will yield "good lebanese restaurants within 20 miles that are within 5 miles of a Target," we search for "iPhone applications restaurant locator augmented reality," download the app, and then activate it by touching the Yelp! icon when we are in need. In the most absolute sense, the user is following a "pre-programmed" course of "objectively existing associations" (61); in fact, the only user input here (once the application has been downloaded) is the choice to activate the app, together with the automatically determined GPS location of the user.

In this way, the user's direct engagement with the application is structured and objectively orchestrated, but on a different level she is an agent. Yes, she has selected a certain group of smartphone applications from a library or database that she has somehow decided will serve as her tool set for performing daily tasks, thereby literally manifesting Manovich's 'selection logic,' but she has consciously chosen the structure of her tool. In the case of Yelp!, she has chosen an application that uses collective filtering as the primary structural element directing navigation; in choosing this application, she has also chosen collective filtering as her own method of addressing the task at hand. In this case, the method of collective filtering is transparent: it is the marketed feature of the application. Niche-use applications such as Yelp! compete for users not based on what they do but on how they do it. Featuring the way a program sources information has become essential: the usefulness of an application depends not on what it does (the user is assumed to have already bought into that concept) but on how well its method of doing it works. There is a clear goal the program is designed to reach, and its ability to do so relies on its method. The user chooses to outsource navigation of a specific kind to a program that uses a certain method for anticipating her desires and access/engagement needs. The user could have chosen other applications serving the same information-access purpose that run on completely different but equally apparent predetermined methods of data processing. Other niche-use applications, such as Pandora, rely on algorithmic analysis of program data combined with user-history filters that generate "suggestions" based on matches of specific data characteristics, i.e. "if you liked this, you probably will like this similar media object." Thus, in consciously choosing her method, the user provides for at least the possibility of conscious cognitive synthesis, considering the output of an application one factor in a much more complicated network of associations, interpretations, and information leading to the formation of a decision.
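As a hedged sketch of the "if you liked this, you probably will like this" logic described above (not Pandora's or Yelp!'s actual, proprietary algorithms), one common way such suggestions are implemented is to score items by cosine similarity over feature ratings. All names and numbers below are invented for illustration.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two sparse feature-rating dicts."""
    common = set(u) & set(v)
    num = sum(u[k] * v[k] for k in common)
    den = (sqrt(sum(x * x for x in u.values()))
           * sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

# Hypothetical per-track feature ratings (names invented).
catalog = {
    "track_a": {"tempo": 0.9, "acoustic": 0.1, "vocals": 0.8},
    "track_b": {"tempo": 0.85, "acoustic": 0.2, "vocals": 0.75},
    "track_c": {"tempo": 0.2, "acoustic": 0.9, "vocals": 0.1},
}

def suggest(liked, catalog):
    """Rank every other item by its similarity to the liked one:
    the programmatic version of 'you will probably like this'."""
    return sorted(
        ((name, cosine(catalog[liked], feats))
         for name, feats in catalog.items() if name != liked),
        key=lambda pair: pair[1], reverse=True)

print(suggest("track_a", catalog))
```

The point of the sketch is the user-facing transparency discussed above: the "method" is nothing more than a distance measure over data characteristics, and choosing an application built on it is choosing that measure as one's own heuristic.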

Given my lauding of the agency inherent in the opportunity to select an application based on its method of processing data, it would be easy to assume that I address only applications with a single purpose and scope. This claim of agency-in-transparency-of-method is problematized by applications that have developed multiple functionalities, with different methods of processing and presenting data as well as different methods of structuring interaction among those functionalities. UrbanSpoon (also a restaurant locator) is an application with the singular function of providing access to and navigation of information. Other applications, now including Yelp!, have developed, often through a series of 'upgrades,' additional functionalities, including social and psychological engagement. In Yelp!, the user can review restaurants herself as well as access the reviews of others, and, using the GPS coordinate location input, can view other users' current locations. The user now has the ability to initiate a real-time online chat with other users. This is often used to start a conversation with a user at a restaurant or bar in order to request current information, such as how crowded it is or whether the specials are any good that day. This function can also be used to organize 'spontaneous' get-togethers of friends who happen to be in the same area near a favorite location, or even as an advanced online dating tool, allowing users to view one another's profiles to determine compatibility before initiating chat and utilizing mutual knowledge of geo-location to orchestrate a meet-up.

While these applications may not encourage the same conscious choice of method on the part of the user, the user still forms the personal associations particular to internal cognition and agency. Manovich's notion that "before we would look at an image and mentally follow our own private associations to other images, [whereas] [n]ow interactive computer media asks us instead to click on an image in order to go to another image" (61) excludes the possibility that personal associations might be interwoven with the hyperlink associations generated by the program. Upon activating Yelp! while driving, the user might see highlighted a Moroccan restaurant coming up on her left. This result may be highlighted because of her past frequently high ratings of Lebanese restaurants, the consistently good ratings this restaurant has received from other users, or even the presence of a friend at the restaurant at that time. However, the user is by no means guaranteed to unquestioningly turn left, enter the restaurant, and enjoy the food. The user may have a particular dislike of Moroccan food or a desire to eat alone. She may even choose to disable the app in order to listen to a podcast of which a personal association with Moroccan food reminded her. The restaurant app did not anticipate, nor does it benefit from, this kind of tangential personal association. The limited purview of many new media applications may actually reinforce the user's consciousness of the role of her personal evaluation, as well as an awareness of her intent in using the application, whether for navigation and access, psychological engagement, or both. The user may find psychological engagement with another user via chatting on Yelp! difficult, but it may be the possibility of access to real-life initiation of psychological engagement with fellow restaurant-goers for which the user has selected the application.
The user may fully intend to use the information gained from this application to generate and control an immersive psychological experience with another person, during which she would assess compatibility based solely on the interaction itself and not on the person's user profile. The user's choice in managing and layering the influence of the program data and her parallel real-life informational input in order to create a mixed reality represents a different type of user agency.

This user has selected an application and will process the application’s output in a way that utilizes personal associations and occurs as some form of internal cognition, but through what process does the user do so and given this what is the user’s output? The application, here Yelp! again, is a hypermediated environment in which the user has agency through the act of remediation. She imports the application’s output, separating the spatialized augmented reality model, the user reviews, the real-time interactions, the representation not only of physical locations but also of possible physical experiences (the implied act of eating, enjoying entertainment, etc.), and then processes it in an internal cognitive space in which personal associations augment and complicate the output which has become user input and then eliminate, translate, link, problematize, resolve, analyze, and refashion this complex media input and her associated information. The result is an action or decision that is much larger than the output of the predetermined structure of the program itself. This action or decision may manifest in a future rating, review, or locative event that will feed back as input to the application. However, the user willingly offers up her cognitive labor as program input with the goal of receiving more efficient and organized access to a vast database of information previously unknown to the user and of such a great quantity that the user could not sort on an unguided trajectory (or does not wish to) thereby making accessible information she would, in all likelihood, not have been able to access without a programmatic structure. This then informs her personal cognitive processes, which she can then choose to offer up to the application as input, with the goal of increasing the future usefulness of the application for her own purposes.

The user also engages in a kind of macro creativity of association, combination, and, sometimes, advanced augmentation. This kind of augmentation occurs on a spectrum. A user may choose to use a variety of applications in order to foster real-life collaboration. The user and her collaborator may employ a mixture of new media applications, old media objects, online interactions, and real-life interactions. This collaborative project does not preclude immersion in either engagement with the collaborator or the process of creating the project. The user switches between accessing information, identifying and sharing tools, immersing in work, and immersing in the collaboration, all the while layering and selecting the tools most suited to the task, time, or location. This example of the creation of a mixed reality (as discussed above) has not caused the users "to mistake the structure of somebody else's mind for [their] own" (61). It has allowed them to utilize the work of the minds of others to engage in a different way with work of their own.

On a different end of the spectrum of augmentation, the user may engage in hacking or re-programming of a new media object itself. This may involve exploiting the hardware of a device through re-programming it to run custom software that allows it to perform a different extended function that the user deems more personally useful. Or it may involve creating a personal software application complete with the desired tools or functions through the utilization of both/either the framework/structure of the program itself (algorithmic processes) or the ideas behind the structure of the program (input and output formats, means of access, interface style, method of acquiring user input or external input, etc.). As Manovich points out, “instead of identical copies, a new media object typically gives rise to many different versions” (36). However, these versions are not necessarily generated from a selection of templates or by “simply clicking on ‘cut’ and ‘paste,'” (130) as Manovich has claimed are the operations possible in new media applications. Here new media inspires a kind of variability not generated from a predetermined tree of selection options but from genuine creativity on the part of the user.

While "[p]ulling elements from databases and libraries" may be the default method of operation within a particular application, it is misleading to say that "creating [elements] from scratch becomes the exception" (130). The user may select applications from libraries and use discrete 'elements' from databases, which are processed according to the specific structuring method of the application; however, agency and creativity have not necessarily been supplanted. The user's agency and creativity lie in conscious choice of program structure, in personal associations and interpretations of program output data, in re-programming of the applications themselves, and in immersion in experiences that are inspired by and interwoven with these applications.

What to do with "I"
by melanieplageman, Mon, 07 Dec 2009
https://hayles06.wordpress.com/2009/12/07/what-to-do-with-i/

It just occurred to me, as I start to file away the various materials I collected over the span of this semester, that a good portion of the work we discussed in this class won't fit, one way or another, into the extra-large manila file I labelled "Art & Lit in Dig Dom." I don't have a copy of The Breathing Wall, or the requisite system/accessories to run it; I don't know how long many of the "assigned" links will remain active; my class notes criss-cross from a composition pad to floating Word documents; the final project that Whitney and I produced isn't in either of our possession. Without dragging out the all-too-obvious allusion to Diana Taylor's lecture "The Digital as Anti-Archive," this situation prods me to consider what is to become of our involuntary memory systems, especially with respect to the arts. Back in August, a friend and fellow poet recommended this class to me, urging me on the grounds that it would introduce me to new ways of thinking about poetic form. Gesturing to the material conditions of digital poetry, he made a comparison to the popularity of typewriters with modernist writers: how the ability to set your own type relaxed the conception that poetry is smooth on one side and bumpy on the other. Looking back over my errant swaths of notes, however, it seems that the biggest change intrinsic to the switch from paper-based to computer-based composition is one at the heart of generic theories of the lyric: the distance between the speaker of the poem, the figure of the poet, and the flesh-and-blood human who in most cases composes the verse. Marshall Brown sums this contention up in his essay "Negative Poetics":

"The poem says," "the speaker says," "Wordsworth says." In our everyday critical usage these three assertions become indistinguishable. But they shouldn't be. Instead, there is a speaker and there is a poet who gives the speaker voice. And there is a poem, which is a combination of the two voices, speaker and poet. The speech and the voice that re-cites the speech operate in tandem to give poems their depth.

To see the “I” personal pronoun in a paper-based poem is to ask to whom it refers. If the same pronoun shows up in a digital poem, the question of authorship, especially in the case of a randomly generated recombinable poem, is severely complicated. Does such a complication compel us to refigure the already muddled discussion of the lyric genre in order to account for this new signification? Or does it pose the “I” as ineffable and de-emphasize staid notions of subjectivity and authorship? I leave you/us with a quote from Christopher Funkhouser (see his essay “Digital Poetry” in A Companion to Digital Literary Studies) on the banal possibilities and marvelous impossibilities that await the poet who longs to make poems that won’t fit, one way or another, into a manila folder:

Author(s) or programmer(s) of such works presumably have a different sense of authorial control, from which a different sort of result and artistic expectation would arise; consequently, the purpose and production would veer from the historical norm. Because of this shift in psychology and practice, digital poetry’s formal qualities (made through programming, software, and database operations) are not as uniquely pointed and do not compare to highly crafted, singular exhortations composed by historic poets.

In this project, we examine a variety of critiques articulated in discourses on the history of cartography. At the dawn of wide scale technical and cultural transformations made possible by the current development of digital technologies, we propose a number of visual strategies to creatively and critically engage with these critiques.

The design gestures toward our specific interests in mapping: experiments with surface and depth; background and foreground; scale; layering; connectivity and disjunction; shifting, stasis, and time; the visible and the invisible; unhinged structure; and, importantly, navigation that continuously reformulates through interaction. The design is a type of aesthetic mapping that moves us toward the kinds of maps we envision.

Four different node clusters cut across the design experiments. Their relationalities emerge as you explore and experience the site, generating a map in dynamic flux.

This site is best viewed at 1920 x 1200 resolution on a large screen with the Firefox web browser.

]]>https://hayles06.wordpress.com/2009/12/04/mapping-movement-with-moving-maps-unfixed-grids-and-dynamic-meaning/feed/0zachblasCharles Bernstein on Serialityhttps://hayles06.wordpress.com/2009/12/04/charles-bernstein-on-seriality/
https://hayles06.wordpress.com/2009/12/04/charles-bernstein-on-seriality/#respondSat, 05 Dec 2009 00:17:04 +0000http://hayles06.wordpress.com/?p=357]]>In reading Charles Bernstein’s essay on Charles Reznikoff, “Reznikoff’s Nearness,” I found a quote that, at least for me, helps to locate digital poetics in the context of print-based models. His closing remarks on hypertext’s distinct penchant for nonlinear readings point to the medium’s potential for poetic seriality. A question that occurred to me is how much a work of recombinate literature privileges readings that weigh heavier on form than on content (by this I mean addressing the text (content) only as a means to discuss the more conspicuous element, its recombinable form). I shudder at dividing a poem, no matter its medium, into categories of form and content, but it seems to me that recombinate poems share with sound poetry a resistance to close reading. I’m wondering if this is a characteristic generalizable to the bulk of contemporary conceptual writing? Another trait unifying paper and digital poetries?

(By the way, if you’re not up on Reznikoff, I highly recommend his book Testimony…a collection of poems that take their language and occasion from early twentieth century court records.)

Here’s the Bernstein:

There are a number of serial works that are not intended to be read only or principally in the order in which they are printed. (Serial reading opens all works to recombination. My favorite image of readerly seriality is David Bowie in Nicholas Roeg’s “The Man Who Fell to Earth,” watching a bank of TVs all of which were rotating their channels.) Robert Grenier’s Sentences—five hundred discrete articulations each on a separate index card and housed in a blue Chinese box—is the best example I know of extrinsic seriality, though two other boxes of cards also come to mind: Jerome Rothenberg and Harris Lenowitz’s Gematria 27 (twenty-seven recombinable numeric word equivalences) and Thomas McEvilley’s cubo-serial 4 (forty-four four-line poems). In principle, hypertext is an ideal format for this mode of composition since it allows a completely nonlinear movement from link to link: no path need be specified, and each reading of the database creates an alternative series. (The Objectivist Nexus 222)

]]>https://hayles06.wordpress.com/2009/12/04/charles-bernstein-on-seriality/feed/0petemoore328More on Inter(intra)faces and Inter(intra)active Arthttps://hayles06.wordpress.com/2009/12/01/more-on-interintrafaces-and-interintraactive-art/
https://hayles06.wordpress.com/2009/12/01/more-on-interintrafaces-and-interintraactive-art/#respondTue, 01 Dec 2009 20:46:46 +0000http://hayles06.wordpress.com/?p=355]]>I would like to bring some examples to supplement my previous post on interfaces. I will engage with two art examples that will help me to make “visible” the maybe “too philosophical” comments I made. I will start by giving a brief explanation of the two pieces I wish to focus on, and then relate them to the points I raised in my previous post: intrafaces, intra-activity, event value, staging situations. I will also relate them to other notions such as micropolitics, microperception, the collective. Lastly, I will articulate their relations to digital and analog processes.

Voz Alta – Rafael Lozano-Hemmer

The first project is Voz Alta (2008), a piece by Montreal-based artist Rafael Lozano-Hemmer. I will quote Lozano-Hemmer’s description of the project, as I don’t think I can explain it better than he does himself!

“Voz Alta (Loud Voice) is a memorial commissioned for the 40th anniversary of the student massacre in Tlatelolco, which took place on October 2nd 1968. In the piece, participants speak freely into a megaphone placed on the “Plaza de las Tres Culturas”, right where the massacre took place. As the megaphone amplifies the voice, a 10kW searchlight automatically “beams” the voice as a sequence of flashes: if the voice is silent the light is off and as it gets louder so does the light’s brightness. As the searchlight beam hits the top of the building of the Ministry of Foreign Affairs, now Centro Cultural Tlatelolco, it is relayed by three additional searchlights, one pointed to the north, one to the southeast towards Zócalo Square and one to the southwest towards the Monument to the Revolution. Depending on the weather, the searchlights could be seen from a 15Km radius, quietly transmitting the voice of the participants over Mexico City. Anyone around the city could tune into 96.1FM Radio UNAM to listen in live to what the lights were saying. When no one was participating the light on the Plaza was off but the three lights on the building played back archival recordings of survivors, interviews with intellectuals and politicians, music from 1968 and radio art pieces commissioned by Radio UNAM. In this way the memory of the event was mixed with live participation. Thousands of people participated in this project, without censorship or moderation. Participation included statements from survivors, street poetry, shout-outs, ad hoc art performances, marriage proposals, calls for protest and more”.1
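The mapping Lozano-Hemmer describes (silence means the light is off; a louder voice means a brighter beam) can be sketched in a few lines. This is only an illustration of the logic in the quoted description, not the artist’s actual software; the amplitude scale and the linear mapping are my assumptions.

```python
# Illustrative sketch of Voz Alta's voice-to-light logic (not the artist's
# actual code). Assumption: amplitude is normalized so that max_amplitude
# corresponds to full brightness; the mapping is taken to be linear.

def voice_to_brightness(amplitude, max_amplitude=1.0):
    """Map a microphone amplitude to a brightness fraction in [0, 1].

    A silent voice turns the searchlight off; a louder voice brightens it,
    clipped at full brightness.
    """
    if amplitude <= 0:
        return 0.0                          # silence: light off
    return min(amplitude / max_amplitude, 1.0)

print(voice_to_brightness(0.0))   # 0.0 (off)
print(voice_to_brightness(0.5))   # 0.5 (half brightness)
print(voice_to_brightness(2.0))   # 1.0 (clipped at full brightness)
```

The point of the sketch is simply that the searchlight is not a symbol of the voice but a continuous transduction of it, which is what makes the "distributed interface" reading below plausible.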

D-Tower – Lars Spuybroek

The second is D-Tower, a project by Dutch architect Lars Spuybroek. The D-Tower is a sculpture in the town centre of Doetinchem in the Netherlands. The sculpture is hooked up to a website where the town residents are asked to complete a questionnaire about their mood. An example question is: ‘Are you happy with your partner?’ Possible answers: ‘very much’ – ‘yes’ – ‘a little’ – ‘no’ – ‘absolutely not’ – ‘not applicable’. Each answer has a score2. The D-Tower uses the questionnaire to record emotions: the statistical results of the survey are sent to the D-Tower, which changes colour according to the results. The questionnaire contains 360 questions. Four new questions are made available every other day.
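The pipeline described above (scored answers, aggregated statistically, driving the tower's colour) can be sketched as follows. The actual D-Tower colour chart and scoring are not given in this post (and, as noted later, I could not find the chart), so the scores, thresholds, and colours below are hypothetical; only the overall logic follows the description.

```python
# Illustrative sketch of the D-Tower survey-to-colour pipeline.
# The scoring and the colour thresholds are hypothetical assumptions;
# the real project's colour chart is not reproduced here.

ANSWER_SCORES = {  # hypothetical scores for one question's answers
    "very much": 2, "yes": 1, "a little": 0,
    "no": -1, "absolutely not": -2, "not applicable": None,
}

def tower_colour(responses):
    """Aggregate survey responses into a single mood colour."""
    scores = [ANSWER_SCORES[r] for r in responses
              if ANSWER_SCORES[r] is not None]
    if not scores:
        return "white"      # no usable data (assumed default)
    mean = sum(scores) / len(scores)
    if mean > 0.5:
        return "green"      # collectively happy (assumed)
    if mean < -0.5:
        return "red"        # collectively unhappy (assumed)
    return "blue"           # neutral (assumed)

print(tower_colour(["yes", "very much", "not applicable"]))  # green
```

What matters for the argument that follows is only the feedback structure: private answers become a public aggregate, and the aggregate becomes a perceptible colour.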

Interfaces and Interactivity

My interest in these projects is that their interfaces are not only based on a series of actions/reactions. Even though one could understand the installations according to the action/reaction logic, they both go way beyond it. If we were to understand these installations in relation to the action/reaction (or encoding/decoding) logic, one could say that for the D-Tower, the survey is the action and the tower colour is the reaction. For Voz Alta, one could say that the act of speaking into the megaphone is the action and that the resulting distribution of the lights (and also the distribution of the messages on the radio) is the reaction. To me, however, the D-Tower is more based on the social interactions (mood) that are encoded in the survey and on the interactions that result from the tower’s changing colour. I also think that the Voz Alta project operates the same way: it is related to (1) the massacre, (2) the people talking on the microphone, (3) the light distributed at strategic points of town, and also (4) the people listening to the radio. Accordingly, both these projects seem to be based more on staging social relations, i.e. on relationality rather than on actions/reactions. Both these projects hence are social before being technological, which means that the technology only gets its consistency when it is incorporated and given form/meaning by and through social assemblages.

In addition, it appears to me that these projects exhibit what I would call a “distributed interface”: their respective interfaces are in fact not localizable in a specific place. Hence, they become intra-faces: they ensure the linkage (intra-relations) of the different levels of the situation staged. For example, one could argue that the D-Tower project’s interface is the online survey but also the tower itself. To me, the fact that the interfaces are “distributed” makes these projects operate on the level of intra-action. It is the relations between the survey’s answers, the tower’s changing colours, and the effects generated in the way people interact with each other that constitute the functioning of the whole system (the whole system is intra-structured by and through the various relations at play). In fact, these various relations operate as intra-actions, and these intra-actions participate in the intra-structuration of the whole system. In the context of Voz Alta, one could ask what or where the interface is: the megaphone, the computer that transduces the messages and activates the lights, the lights projected on the buildings, the distribution of the messages on the radio? To me, the most interesting way of looking at these projects is to refuse to reduce them to a material or localizable interface and rather to think about them as intra-active projects: to incorporate the whole set of relations at work. These projects are concrete examples of what I called an intraface in my previous post: they operate more on the level of relations, facilitating the emergence of relationality rather than being based on the use value of the interface (in fact they integrate the heterogeneous modalities that Suchman talks about: human, non-human, technique, discourse (my emphasis), artifacts, living, non-living, etc.). These projects all take a specific situation as their object.
Indeed, their realization is not based on the efficiency of the interface but rather on its power to facilitate the emergence of new forms of relations. Of course Lozano-Hemmer and Spuybroek are very professional artists who work with skillful people, which reduces the possibility for the technical components to be dysfunctional. However, the point I want to make here is that the intraface is not necessarily technological, and that the realization of the projects does not foreground the transparency of the interface but rather various levels of relation (a series of intra-actions) that generate and are generated by a distributed interface, by an intra-face.

Both these projects take seriously what Brian Massumi thinks is the strength of interactive art (according to his arguments, it might be more appropriate to talk about relational art), that is to say taking the situation as its object. The D-Tower takes the citizens’ mood as its situation rather than the technological components that ensure the realization of the project. Voz Alta also takes the massacre (and also the pirate radios) as its situation. In doing so, they do not subordinate the event to the technology but rather put them in a relation of co-extensivity. Accordingly, they both trigger the emergence of new forms of relations. Here the situation, along with the technological components used to stage it, become co-relates in the context of the art installations. It is in this sense that both these projects operate for me on the level of intra-faces.

Co-evolution and Co-operation

D-Tower and Voz Alta both operate according to the co-evolution logic I talked about in my previous post. For instance, the D-Tower changes according to the citizens’ mood. In this perspective both the citizens and the tower “become together” according to a logic of co-operation. The affective mood of the citizens affects the colour of the tower, which in turn affects the interactions of the citizens. The co-evolution is actualized according to a very interesting feedback loop between the citizens and the technology. Voz Alta operates on the same level. The light co-evolves with the messages recorded by the megaphone and transduced by the computer. In addition, the people listening to the radio co-evolve with the messages transmitted, as these messages are certainly affecting them. This set of relations generates a feeling of togetherness between the messages, the light, and the radio listeners. This togetherness performs the relations of co-evolution and co-operation I talked about in my previous post.

Politics and Perception – Micropolitics and Microperception

One of the things these projects foreground that I think is extremely relevant is that they are political in effect and not necessarily in content. Even if they are both related to a more or less “political” situation (the social relations for the D-Tower and an extremely charged political event for Voz Alta), they also operate on the political level through the effects they generate. The D-Tower plays with the politics of social relations, making the citizens aware of the collective mood. Voz Alta also plays with the politics of social relations, offering new forms of engagement with the massacre that most likely generated new forms of social relations, new forms of social engagement amongst the citizens.

Recently I became very interested in micropolitics. With the people from the Sense Lab at Concordia University we have been engaging with the notion of micropolitics for a whole year. Following this engagement, Nasrin Hamada and Erin Manning edited a whole issue of Inflexions on the topic. In the issue there is a very interesting interview with Brian Massumi conducted by Joel McKim4. In the interview Massumi argues that micropolitics is a politics of microperception and that microperception is another way of talking about affect. I think that in the context of interactive art, it is important to question the potential to trigger new, i.e. non-traditional, modes of perception. I also think that affect can help us understand the notion of the threshold that I questioned in my previous post (following the fact that Zach raised the issue). If we take both Bobette’s and Massumi’s comments about micropolitics, what would it mean in the context of the projects I talked about to say that their politics is only enacted through perception? And that their politics of perception ought to be a politics of microperception? It would mean in fact that they both have to be analyzed on the level of affect. Affect is the capacity to affect and to be affected; it is a change in capacity that is carried from step to step. I think that both projects operate on that level. The D-Tower is affected by the mood/interactions of the citizens, which it in turn affects. Voz Alta’s lighting is affected by the citizens’ messages, which are in turn affected by the brightness produced. Affect, following Massumi, is however more complex than the capacity of affecting and being affected, as it is only visible or perceptible in its effects, operating on the non-conscious level. According to him, microperception (or affect) “is not smaller perception; it’s a perception of a qualitatively different kind. It’s something that is felt without registering consciously.
It registers only in its effects.5” He adds that “affect and microperception are always related to a shock.” Affect, he says, “is inseparable from the concept of shock. It doesn’t have to be a drama. It’s really more about micro-shocks.6” I think that affect is in fact this passing of a threshold that generates a change in capacity. The D-Tower triggered a change in capacity by generating micro-shocks. These micro-shocks were actualized through the encounter with the tower. People might not have registered the colour of the tower consciously, but it was most likely felt in its effects, that is to say in the ways in which it affected their interactions with other citizens. This was made possible by the feedback loop between the private mood and its public image/manifestation. In the context of Voz Alta, the fact of illuminating buildings that are considered power and political centres is not traditional and might in fact generate micro-shocks in the population. In addition, the massacre was considered a taboo for many years. The simple fact of giving it a place in the collectivity certainly participates in generating these micro-shocks. As those cannot be registered consciously, it is in a way impossible to qualify them (as Massumi puts it: affect is unqualified7). My interest in bringing this up is to emphasize once again the importance of staging the situation not in the form of actions/reactions that are encoded/pre-encoded and therefore pre-determined, but in the form of an open situation that leaves space for micro-perceptions and micro-shocks to take place.

Collective

My interest in the collective comes from micropolitics. According to Deleuze and Guattari, micropolitics is realized by and through the minor collective assemblage. To me, both Voz Alta and the D-Tower foreground the collective. The D-Tower makes visible the collective mood and generates effects on the ways it gets assembled. In fact, if the tower turns out to be a colour that means people are angry or mad (I am sorry, I tried to find the colour chart but I could not), it might make people more attentive to those around them. Giving a visible colour to the people’s invisible mood holds the potential to generate effects on the social ecology of practices, on the ways in which the city inhabitants interact with each other. In this context, intra-actions are addressed on at least two different levels: (1) the interactions of people that are compiled through the survey and (2) the potential for new forms of interaction to emerge from the colour of the tower. As Brian Massumi puts it, “this can undoubtedly reflect back on the interactions taking place in the town by making something that was private and imperceptible public and perceptible.”8 He adds that in this context “the feedback loop here has been created between private mood and public image that has never existed in quite this way before9”. Again, I think that this project is also related to the notion of the threshold that both Zach and I have been interrogating. The colourful collective mood can be seen as the potential for a change in capacity: the capacity of the tower to affect the citizens and to make their mood pass a threshold that will push them to interact differently with the other citizens.

In the context of Voz Alta, giving space to the massacre most likely had a similar impact on the social ecology of practices. On the one hand, it foregrounds the collective in the first place: its departure point is the collective itself. The massacre affected the collective, and it is put back into the collective. In addition, its effects are addressed to the collective through the distributed interface that generated the possibility of relating heterogeneous components of the situation. It probably also regenerated social relations in the community by giving the citizens the possibility to express themselves to the population (and indirectly to some politically charged places: the buildings where the light was reflected). It also gave them the feeling of sharing an event.

Digital versus Analog

Some of you could say that the projects I talked about, and the analysis I produced, do not address the digital in a direct way, and that I failed at foregrounding the potential of the digital realm to trigger new forms of experience. According to me, new forms of experience and the emergence of new subjectivation processes are based on change. We have to focus on generating new subjectivation processes that don’t reproduce the dominant ones. This, for me, is made possible by working with the potential and the virtual: we need to ask what potentialities, virtualities and impulses hold the potential to generate change. In chapter five of Parables for the Virtual (the chapter is entitled “On the Superiority of the Analog”), Massumi says that “digital technologies have a connection to the potential and the virtual only through the analog”. All the projects I talked about in the context of this post highlight this passage from the digital to the analog. The D-Tower codes the survey’s answers, which are then transduced into the analog through the tower’s changing colour. Voz Alta encodes the sounds, voices, and messages, which are also transduced into the analog with the light that reflects on the buildings. In this perspective, the key point with digital art seems to reside in the project’s capacity to transfer the code into analog processes. I would argue that the potential for the micro-shocks to be actualized resides there, that is to say in the transformative process from the digital to the analog. So I think I could go as far as saying, following the projects I analyzed in the context of this post, that with digital art the threshold, the potential for a change in capacity, resides in the passage from the digital to the analog.

]]>https://hayles06.wordpress.com/2009/12/01/more-on-interintrafaces-and-interintraactive-art/feed/0marieperiThinking with Mediahttps://hayles06.wordpress.com/2009/11/23/thinking-with-media/
https://hayles06.wordpress.com/2009/11/23/thinking-with-media/#respondMon, 23 Nov 2009 17:16:07 +0000http://hayles06.wordpress.com/?p=346]]>This is an excerpt from the critical statement I’m writing for Clarissa/my graphic novel project, which can be viewed here:

This project is definitely still in progress, but the process of forming the concept of the story, writing what we’d roughly call “part one” of the story, and only after that being able to do all the drawings and their accompanying perspectival labor—these drew my attention to a variety of mediation challenges that I otherwise would not have thought of.

First of all, writing a graphic novel is nothing like writing a novel or a movie. As Clarissa and I brainstormed the story, we wrote down character dialogue roughly in the format of a drama, with dialogue attributed to each character, as well as background descriptions of what was or had been going on prior to that scene, and stage directions (“he furrowed his brow”). We spent the most time coming up with this story with Clarissa’s original vision (of creating a graphic novel that investigated the production of knowledge and its relation to Being) clearly in mind. When we finally had a working script, I sat down to sketch out the drawings and realized, wait a sec, we have entire paragraphs of dialogue at a time for some characters—how is this going to fit on one page, much less one panel? That’s not the way I’m accustomed to seeing words laid out in typical graphic novels, except slightly in more academic ones like Logicomix, a graphic novel about the history of logic and mathematics written by Apostolos Doxiadis and Christos Papadimitriou. As Clarissa worked on figuring out how we would present our comic on a webpage, I tried to figure out how much dialogue to include on each page, and also how to parse the play-like script of what we’d written into somewhat coherent page-sized frames, where each comic strip/page as well as the story as a whole made sense. This realization that a graphic novel’s format lies somewhere between a photo-journal essay and a dramatic play may seem obvious in retrospect, but honestly, it was one of the most surprising things to realize mid-way through the project.

To reiterate the situation, which again caught this writer by surprise: it IS NOT NATURAL for most people to “imagine” in the format of a graphic novel, because they aren’t as popular as movies or books. When we were visualizing how the story would proceed, we were IMAGINING it as a MOVIE or a NOVEL or sometimes a PLAY. It took a radical shift in IMAGINATION to start THINKING in GRAPHIC NOVEL terms, to learn to simultaneously NARRATE AND VISUALIZE in snapshots, with shorter dialogue, and with the illustrations as much of the story as the text. I don’t think we really did this at first, and it put me in the position to “remediate” (despite our classroom critique of the term, it feels appropriate to use here) our original script into comic format. In fact, this experience makes me want to question the appropriateness of calling graphic novels “novels” in the first place. Yes, they tell a story with an immense amount of graphic detail about the world, and both participate in storytelling, but that’s about all the similarities I can think of at the moment. There are huge differences in how each particular media style both constrains and opens up creativity: novels use descriptive passages to set up their visual argument or milieu, while graphic novels never SAY anything about it, they just show it; novels can be as long as the writer wants, whereas graphic novels have a limited amount of text that would be considered “pleasurable” to read; one could go on about the differences, but let me highlight just one more. I would put graphic novels in the same category as television/movies (rather than traditional novels) in this particular respect: attention. Using Kate Hayles’ terms, narrative visual media lean towards the “hyper-attention” end of the spectrum, whereas the novel allows for a kind of deep-attentive experience.

I think what I most gained from working on this project was a greater awareness of the way in which available media—primarily video, novels, and radio—structure the way that we choose to imagine stories, and how we imagine the way the future will unravel. It’s no accident that many people fantasize about how their lives might unravel cinematically, with some kind of epic finale at the end involving a 360 degree panoramic shot of them kissing a lover, or of them facing off against an opponent in the heat of battle, or other dramatic moment. Our media both visually and narratively present particular RHYTHMS and FORMS of storytelling, which we can imaginatively inhabit in order to entertain ourselves or speculate about the future. It’s not easy to change the media that you think with, but let’s face it: we all think with media. But I believe there is something positive about the ability to think with different kinds of media, and to choose… a circumstance like the experience of being bi-, tri-, or poly-lingual.

Here’s an experiment. Try for just one day to use a media format other than a movie when you plan your day (how you’re going to get groceries, go to the library, meet up with your friends at a restaurant or bar, etc.), or drift into daydream, or speculate about the future, etc. Try IMAGINING these things some other way—in snapshot action-packed graphic novel format, for example, or as a radio-play without images and just a narrative voice, or something else. Is it easy or difficult? Constraining or liberating? beyond categories?

As to the structure of the website: originally I’d made a suggestion that we could play off the physics paradigm of “breaking symmetry” by thinking of aesthetic ways to do that. However, in the end that didn’t seem feasible, so the model we went for was “thinking about the Large Hadron Collider” (maybe we should have written, thinking “with” the LHC?). To present this we have the comic, a twitter feed with live updates from CERN, and a series of fragments spoken by the LHC itself that we’d experimented with. These three parts provide different temporal experiences of thinking about/with the LHC: live updates from CERN’s twitter feed bring a sense of immediacy; the LHC monologue fragments are drawn from a database of pre-written script but refreshed at random, making the connection between the comic panel you’re reading and the randomly generated fragment into something that spontaneously comes to you and changes the reading experience.

One of the main difficulties we were working with was how to incorporate some kind of perspective from the now-conscious LHC itself into Clarissa’s vision of a comic that dealt with knowledge production in a way that was difficult to distinguish from reality. With this concern in mind, we decided to separate the thoughts of the LHC out from the people’s actions in the comic itself. In the process of writing these fragments, it was important to think through how a self-aware machine might have different goals than humans, and to this end I wrote several fragments that diverge from the traditional science-fictional issues about machines wanting control over (and freedom from) their human counterparts. One unique feature of machine self-awareness that we were playing with is a desire for connectivity over control of humans; instead of being “Creators” with some biological connection to reproduction, what if machines instead were “connectors,” finding non-viral pleasure instead in joining up into larger networks? If this is our experimental case, then what we could graphically illustrate is something where the LHC attempts to connect with its “siblings,” which might be the energy bursts from the sun that strike the earth daily and are of a similar power magnitude to the LHC, thus possessing some kind of affinity with it. This idea was taken from a NY Times article that discussed the possibility that the LHC would produce earth-consuming black holes, and defused that fear by saying that the Earth is bombarded every day by energy bursts from the sun that are of a magnitude equal to or greater than the LHC’s.

As this comic develops, we’ll also be thinking about the reciprocal connection between the humans using the LHC to produce knowledge and perhaps the LHC using people to do something of the same. Here I imagine something like Andy Clark’s “extended mind” thesis, which roughly argues that cognition does not “happen” solely in the brain, but is highly dependent on both the body and the world (the parts we mark out and use) to perform cognitive acts. For example, it is common to say that “I can’t solve that mathematical problem in my head; I need to think with my pencil.” This is a simple example of the ways that we depend upon other resources in the world, tools or otherwise, to produce what counts as “knowledge” for ourselves. If the “extended mind” thesis holds water, however, it will have to address the difference between us using things in the world for a kind of distributed cognition, and us performing these same tasks with each other (or, possibly, using each other towards processes of cognition). For example, if we go back to multiple intelligence theory, thinking “mathematically” is not the only kind of intelligence and thus not the only activity that demands “cognition.” Surely cognition is also demanded by linguistic and social and artistic processes as well. If that’s the case, then what is to be said for bouncing ideas off a friend in order to help you think of ideas for your new novel? Or what’s to be said for talking to a friend to analyze and trouble-shoot the confusing behavior of one of your students? Is this not also a kind of augmented cognizing—in partnership with (an)other mind(s)?

Taking these issues into consideration, the territory that our project may get into is where we find ourselves to ALSO be part of the world that something else (the LHC) thinks with, a resource for discovering its origins and finding out more about itself and its place in the world and its relations. In this respect, we would be thinking of LHC-machine consciousness less as an antagonist to humans and more as a partner in mutual knowledge-production—of course this process might not be smooth, but those bumps in the story could be the most interesting.

]]>https://hayles06.wordpress.com/2009/11/23/thinking-with-media/feed/0melodiousoneAn anti-manifesto to Digitality by Jonathan Bellerhttps://hayles06.wordpress.com/2009/11/19/an-anti-manifesto-to-digitality-by-jonathan-beller/
https://hayles06.wordpress.com/2009/11/19/an-anti-manifesto-to-digitality-by-jonathan-beller/#respondThu, 19 Nov 2009 23:46:58 +0000http://hayles06.wordpress.com/?p=342]]>Who incidentally is an alumnus of the Literature Program

The Digital Ideology
<http://digitallabor.org/speakers1/jonathan_beller>
“‘The Digital’ has become the mantra for all things contemporary and as such signals that the capitalist market is present in the very articulation of digitality. We can be sure that unless we ourselves develop an antagonistic relation to ‘the digital’ and ‘digital culture’ our creativity, if that’s what it is, will continue to serve that system which structurally guarantees the accumulation of wealth by a tiny minority and the intensifying immiseration of the global majority. Thus, from the standpoint of social justice, any theory of labor/value that does not reckon with structural inequality and the larger contradictions of capitalism is pernicious.”