Constrained Thinking: From Network to Membrane


by

Paul Harris

2000-01-01

Paul Harris examines the theoretical aspects of constrained thinking in the age of electronic textuality (in 2000 words, natch!)

From the outset, electronic textuality has been promoted through a kind of academic version of a hacker ethos. Just as hackers proclaim that “information wants to be free” and computers will democratize the world, proponents have celebrated electronic textuality for bursting out of the strictures imposed by print, and theorized its role in undermining hierarchies in the university and culture at large. This ethos has been grounded in an epistemology which has remained relatively implicit and therefore unquestioned. One finds an underlying sense in hypertext theory that electronic textuality is somehow more “natural,” more inherently suited to the human mind. The perceived fit between mind and machine is, in turn, based on a tacit assumption: the brain and electronic textuality both function along the lines of linked networks. To cite just one example, George Landow has urged that “in contrast to the rigidity and difficulty of access produced by means of managing information based on print…an information medium is needed that better accommodates the way the mind works” (7). Landow, informed by Vannevar Bush’s work, sees computers as “machines that work according to analogy and association, machines that capture and create the anarchic brilliance of the human imagination” (10).

The model of mind at play in electronic textuality emerges at a curious disciplinary nexus. Both the post-structuralist theories invoked by Landow and others, and certain strands of cognitive and computer science have contributed to a de-centered notion of thought and mind. For all their differences, post-structuralism and some strands of cognitive science mark a shift from a central agent in control of causal reasoning and analysis to an image of thought as a “parallel, distributed” process of non-linear linkages and associative syntheses. From the standpoint of the epistemology proposed in theories of electronic textuality, the linear plottings of the Cartesian rational mind are displaced by the non-linear articulations of the reticulated brain. And electronic textuality is seen as freeing that brain to speak its mind, enabling the thoughts that take shape in cerebral neural networks to find expression in computer networks. The “anarchic brilliance of the human imagination” is unleashed in the acentered labyrinth of the world wide web. Electronic textuality theory, its conceptual models cast in the image of linked lexia, has basically (if unwittingly) taken over a cognitive/computer science model of the brain. In contemporary philosophy of mind terminology, this is a functionalist model; it assumes that what the brain does depends on its functional organization. Such functional organization is like software, which could be run on any kind of hardware; in other words, if the brain’s functional organization can be simulated with algorithms, then computers will be made that think. The functionalist view also subtends extrapolations like Hans Moravec’s scenario of downloading one’s brain into a computer to achieve “digital immortality.”

The strongest response to the functionalist view comes from neurobiologists who insist that detailed understanding of the brain’s wetware severely limits any analogy to be drawn between digital computers and human brains. Stephanie Strickland addresses neurophysiology in ebr11. Specifically, neurobiologists point out that while all human brains obviously share a similar functional organization, they vary immensely in individual development. Structural variation emerges across several levels of the developing brain, and ecological and environmental variation also play a formative role in the brain’s adaptive dynamics. Social and cultural factors influence the paths that learning and maturation take as well. From this viewpoint, the functionalist view is insensitive to context and contingency, to diversity and difference - in an evolutionary sense as well as, perhaps, a cultural one.

In fact, neurobiologists recognize that contextual variation renders it impossible for a scientific explanation of the brain to describe the qualitative uniqueness of an individual’s experience. In other words, no account of the brain’s physical composition is going to explain what it feels like to be conscious. But a neurobiological approach to the brain “does provide,” in Gerald Edelman’s words, “a satisfactory (indeed, the best) description of the constraints on experience” (163; original emphasis). But what exactly does a neurobiologist mean when s/he talks of constraints? There is, of course, an evolutionary understanding of constraints at work here, but perhaps not in the more apparent sense that one might expect. One might think that in evolutionary terms “constraints” refers to everything that limits an organism’s existence: the limits imposed by environment, and those inherent in the organism’s physiology. These kinds of limits, we could say, delimit an organism’s “degrees of freedom,” the total possible range of its movements and capabilities. However, an organism’s actual day-to-day choices and actions never operate within its full theoretically possible degrees of freedom. What Edelman means by constraints would refer more narrowly to the evolutionary factors that carve out the actual, more limited space within which choices occur.

A brief look at the make-up of the brain helps to clarify this idea. Many popular scientific books begin by calling the brain “the most complex object in the universe”: weighing a few pounds, it is composed of more than 10 billion neurons connected by more than 10 trillion synaptic connections; about a billion synaptic connections fit on the tip of a pencil. In theory, these numbers give the brain an astronomical quantity of “degrees of freedom”: the number of possible brain states has been estimated to be about 10^10,000, a sum said to be greater than the number of particles in the universe. Depicted this way, the brain appears to be a combinatoric processor capable of generating endless variations and expressive configurations. The brain seems to be a virtual space of freeplay, and the act of thinking gets transposed into games of linking. It is precisely this kind of potentially endless combinatoric play that one finds depicted in accounts of electronic textuality. The screener’s mind is unshackled and roams through a possibility-space, each link producing different associative spins.

Enter again the stern voice of the neurobiologist, who cautions that the brain is constrained by its own evolutionary history. It is not a combinatoric writing machine, but an adaptive organ bent on satisfaction and survival. As a product of evolution, the brain has evolved through natural selection. Edelman has extended this idea to the development of the brain during a human lifetime: according to his theory of “neural Darwinism” or “neuronal group selection,” natural selection underlies the process whereby initially undifferentiated neurons cluster into functionally specialized groups. Early interaction with the environment induces neurons to link up in circuits, and these circuits link up in groups. This process continues up several different scales: selection processes determine nascent neural patterns or configurations that take shape over time; such emergent configurations on one level become components in a substrate at the next level, from which another emergent configuration is selected, and so on. Each successive layer/loop of selected patterns results from what Edelman calls the “recursive synthesis” of prior patterns into more complex neural mappings. Neuronal groups connect to and feed through one another through “reentrant mapping,” which yields higher-level recategorizations in a bootstrapping process. Simplifying somewhat, the theory depicts “thought” as the result of the brain’s recursive synthesis of itself, taken to the nth power.

The larger point here is that the adaptive interests of natural selection cause the brain’s theoretical possibility-space to contract. The freeplay of association and analogy is circumscribed by needs, appetites, instincts and so on. As Edelman puts it, “no selectionally based system works value-free. Values are necessary constraints on the adaptive workings of a species” (163). An evolutionary epistemology sees “values” in adaptive terms: social and individual ethical values derive from basic biological values such as hunger and sex. This value system is embedded in the brain’s evolutionary history. The brain is essentially a hierarchically arranged composite of three systems: a reptilian brain that signals hunger, and the pursuit of foe or mate; a paleomammalian brain now called the limbic system, which also regulates bodily rhythms and appetites, but with a greater temporal bandwidth; a neomammalian brain we call the cerebral cortex, where “higher” functions such as language and consciousness occur. These cerebral subsystems are in constant communication with one another, so that the cerebral cortex and all its marvellous degrees of freedom are embedded in and constrained to some degree by its “lower” ancestors. And what holds true on an evolutionary level holds true on an individual one as well: the development of neural groups, or in more familiar terms, the way our brain gets wired up as it evolves, imprints itself in the brain and forms our mental disposition. The greatest constraint on our thought patterns is the brain’s own history. We know this in very simple terms: the brain becomes less supple with age.

The evolutionary view of the brain as being constrained by its own history (both phylo- and ontogenic) seems rather Bergsonian in spirit: instincts underpin intelligence, and our intelligence is adapted to action. Bergson, of course, believed that we could mobilize our minds in such a way as to resist the adaptive call of intelligence. Within his evolutionary epistemology, Bergson saved a privileged place for metaphysics, which he saw as being the only genuinely creative mode of thought. Metaphysics represents the “mind striving to transcend the conditions of useful action and to come back to itself as a pure creative energy”(15). Bergson figured the mind as a “cerebral interval,” a delay between incoming movements and outgoing actions. Metaphysics induces thought to traverse the “virtual” dimension of this interval, the “virtual” being synonymous with “memory” and “spirit” as opposed to the “actual” domain of “perception” and “matter.” If the cerebral interval is a “zone of indetermination,” then metaphysical thought - Bergson’s “intuition” - was a method for maximizing the mind’s degrees of freedom or autonomy.

Today’s neuroscience, of course, vigorously rejects any notion of “spirit,” as well as an inherent telos in living matter such as the “elan vital.” Nevertheless, one can translate Bergson’s idea of the “virtual” dimension of mind and the “actual” plane of adaptive thought into contemporary neuroscientific terms. The parallel, distributed nature of the brain, together with the massive numbers of its components, guarantees that very little of its total neural activity ever reaches anything resembling consciousness. On the other hand, the seething activity involved in this microscopic combinatoric dynamics gives rise to new neural configurations; states of mind, literally and figuratively, that have no prior physiological counterparts in the brain. Time philosopher J.T. Fraser calls such emergent configurations “self-generated engrams” or “autogenic imagery” (258). The internal dynamics of the brain generate a process where, as Fraser writes, “It is as though the syntactical relations of our autogenic imagery were on a continual hunt for semantic realizations” (268). The excess of syntactical relations over semantic realizations, then, has a dual implication: it means that a great deal of neural activity remains outside the scope of conscious thought; simultaneously, this excess is in part responsible for generating new thoughts. The “virtual” dimension of the neuroscientific brain, then, connotes both a sort of unrealized potential, and a level of emergent novelty. In neuroscientific accounts, what we call the stream of consciousness is essentially the result of the virtual dimension becoming actual, the seething potential of virtual neural activity materializing as actual thoughts. And in neurobiological terms, natural selection and the organism’s needs circumscribe the process whereby the virtual becomes actual. “Values” place constraints on the virtual.

Let us return now to the way in which the mind is depicted in accounts of electronic textuality. I argued above that such accounts tend to conflate the networked text with a networked mind. In terms of the dynamics of virtual/actual, it is as if such accounts project the mind at play purely in the virtual, a scenario which depends in turn on a slippage in the notion of the “virtual.” Landow, for instance, posits that “all texts that the writer-reader encounters on the screen are virtual texts” in two senses. First, any such text is but a temporary instantiation, an electronic copy, of an ongoing version of itself; and second, the real “text” is a digitally encoded configuration of data, and hence is not directly accessible (22). The virtuality of texts in these senses enables texts to become linked to other texts via computer networks, so that “electronic word processing inevitably produces linkages, and these linkages move text, readers, and writers into a new writing space” (24). Landow then draws out the epistemological consequence by quoting Michael Heim: “Linkage in the electronic element is interactive, that is, texts can be brought instantly into the same psychic framework” (25). Implied but not stated here is the notion that textual linkage distributes the thinker, whether writer or reader; the “multiplicity” of texts implodes the unity of the thinking mind, and the thinker is unleashed from a bookish world of linear control into an electronic universe of nonlinear virtuality. Or, more simply, the “virtual” electronic medium is imbued with the power to “virtualize” the mind, as if the textual linkage network awakens otherwise dormant neural networks and realizes a greater degree of the mind’s potential.

However, a more careful distinction needs to be made between the medium and its epistemological effects, since the technology by itself has no agency. Thus media philosopher Pierre Levy maintains that “we shouldn’t describe digital images as virtual images but as possible images displayed on screen,” and he envisions the computer as “primarily a means of potentializing information” (53, 54; original emphasis) rather than virtualizing it. Levy’s work is particularly useful in the present context because it situates the virtual in the sense of media within a philosophical notion of the virtual taken from Gilles Deleuze (that in turn hearkens back to Bergson). Levy defines the virtual as “a kind of problematic complex…that accompanies a situation, event, object, or entity, and which invokes a process of resolution: actualization” (24). Similarly, “The virtualities inherent in a being,” he writes, “its problematic, the knot of tensions, constraints, and projects that animate it, the questions that move it forward, are an essential element of its determination” (25). The virtual is not synonymous with the possible, because something that is possible is already fully constituted; it is waiting to be realized by being chosen. The virtual is an imbricated tangle, an enfolded multiplicity that must be resolved by being actualized, and it changes in the process. Thus, reverting to the electronic textuality discussion, a hypertext is not itself “virtual”; its lexia are rather a set of possible segments that can be traversed. The virtual dimension in a philosophical sense only comes into play when a mind enters the loop. Once that occurs, a text is “virtual” in that it presents an enveloped multiplicity that may be, through interpretation, actualized in any number of ways. This sense of virtuality is indifferent to media; a novel is as virtual as a hypertext. 
(In fact, reader response theorist Wolfgang Iser used the term “virtual” in just this sense - a text remains virtual and through reading is actualized as a “work.”) The reading/interpretive process functions as a mode of actualization. (For an account of reading as actualization, see Stephen Maras, “The Bergsonian Model of Actualization,” SubStance 85, Vol. XXVII (no. 1), 1998: 48-70.)

But what is at stake in this ebr discussion of electronic textuality and constraints is the inverse of this process. That is, if philosophy of mind and Deleuze’s ontology are concerned with how to think the passage from the virtual to actualization, the issues at play here involve how to virtualize something actual. Looking back, we can see this inversion played out in literary theory as the contrast between Iser’s account of the reading process and the one proffered by Roland Barthes in “From Work to Text.” There Barthes called for a practice diametrically opposed to Iser’s: rather than stabilize a virtual text into an actual work, reading was to become a practice that would rewrite actual works into a virtual intertextual continuum. Barthes’s vision of the “actual” work becoming a node in a “virtual” intertextual network has, in the view of Landow and others, been brought to fruition with electronic textuality. Hypertext ostensibly enables the interactive reader to become a rewriter. But once again, the error is to mistake the formal linkages of hypertext for an intertextual domain that is constructed (not clicked on) by a reader. The possible links available to hypertext readers pale by comparison to the virtual textual universe Barthes’s ideal reader had at her fingertips.

In fact, while accounts of electronic textuality imply that the virtual medium virtualizes the mind, one might also suspect that strategies deployed in hypertext screening fall into predictable patterns rather quickly. Freed from all the directives characteristic of print works, poised to unleash our “anarchic imagination,” we are actually more likely to repeat our interpretive gestures. If we want to virtualize our actual mind, a more controlled or contrived mechanism is needed. From the neurobiological/evolutionary standpoint, to virtualize the mind would be to induce it to go against its own grain. Constraints in a biological and cultural sense induce us to form the habits that are, as Beckett famously put it, “the ballast that keeps the dog chained to his vomit” (8). But then, what exactly does “virtualization” entail? According to Levy, “virtualization consists in an exponentiation of the entity under consideration. Virtualization is not a derealization (the transformation of a reality into a collection of possibles) but a change of identity, a displacement of the center of ontological gravity of the object considered” (26). He then offers this suggestive, incisive characterization: “Virtualization fluidizes existing distinctions, augments the degrees of freedom involved, and hollows out a compelling vacuum” (27). Such a vacuum induces an “act of questioning” that Levy says “is accompanied by a strange mental tension…this active hollow, this seminal void, is the very essence of the virtual” (184).

This idea brings us now to a discussion of constraints in a literary sense. As even a short look at certain techniques deployed by the Oulipo can show, constraints serve as vehicles of virtualization in the world of writing. The primary premise that infuses almost all Oulipian writing is that constraints do not inhibit the writer but on the contrary engender creativity. Textual constraints undercut the biological, psychological, and cultural constraints that keep a writer within habitual parameters. As Oulipian Marcel Benabou puts it, “the choice of a linguistic constraint allows one to skirt, or to ignore, all these other constraints which do not belong to language and which escape from our emprise” (42-43). In this way, the writer’s mind is pushed off its usual tracks, its habitual grooves, and must seek out words, forms, patterns that would not otherwise enter her cognizance. Thus Benabou concludes that “it is not only the virtualities of language that are revealed by constraint, but also the virtualities of him who accepts to submit himself to constraint” (43). Transcoding this claim into Levy’s language, one might say that linguistic constraints virtualize an author by providing an “active hollow” that induces “a strange mental tension” in him/her. Quite simply, constraints send writers on quests that take them down paths they would otherwise never tread. Anyone who has taught constraint-based writing has witnessed this: students who have written one solid but utterly safe and conventional text after another will suddenly generate verbally and imaginatively acrobatic pieces, and language will cease to seem like a passive, unwieldy tool they are forced to use according to rules, and instead become a treasure house of surprise and weird patterns, unexpected combinations.

In what sense should we understand Benabou’s claim that constraints reveal “the virtualities of language”? Benabou explains the Oulipian approach by saying that “one must first admit that language may be treated as an object in itself,” as “a complex system, in which various elements are at work, whose combinations produce words, sentences, paragraphs or chapters.” Imposing constraints on the functioning of this system is a way of doing “experimental research” on language, because constraints “force the system out of its routine functioning, thereby compelling it to reveal its hidden resources.” The goal of imposing constraints is ultimately, Benabou insists, “not a mere exhibition of virtuosity but rather an exploration of virtualities” (41-42). To conduct such explorations, the Oulipo has invented several simple operations designed to virtualize a given text - in the sense that such operations engender in Levy’s terms “an exponentiation” of a textual entity.

Perhaps the constraint that most literally “exponentiates” a text is “definitional literature,” in which each meaningful word of a text is replaced by its dictionary definition. The technique was proposed by Raymond Queneau contemporaneously with Benabou and Georges Perec’s Semo-Definitional Literature (the French acronym being “L.S.D.”). For instance, performing this operation on the words electronic book review gives us this result: “Of or pertaining to electrons in a volume made up of written or printed pages fastened along one side, and having cardboard, leather or paper protective covers, here a periodical publication devoted primarily to such reports.” And if one then were to perform the same operation on the preceding passage, then the “exponentiation” of definitional literature becomes all too imaginable.
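The mechanics of definitional expansion are simple enough to sketch in a few lines of code. In this toy illustration the glossary, the function name, and the wording of the definitions are all hypothetical stand-ins for a real dictionary; the point is only to show how each pass multiplies the length of the text:

```python
# Definitional literature: replace each word that has a glossary entry
# with its definition. The glossary below is a hypothetical toy
# stand-in for a real dictionary.
DEFINITIONS = {
    "book": "a volume of written or printed pages fastened along one side",
    "review": "a periodical publication devoted to critical reports",
    "pages": "leaves of a book or manuscript",
}

def definitional(text, definitions=DEFINITIONS):
    """Expand each defined word; leave undefined words untouched."""
    return " ".join(definitions.get(w.lower(), w) for w in text.split())

once = definitional("electronic book review")
twice = definitional(once)  # iterating expands words inside the new definitions
print(once)
print(len(twice.split()) > len(once.split()))  # each pass lengthens the text
```

A third pass would expand the “book” lurking inside the newly inserted definition of “pages,” and so on; this runaway growth is precisely the quantitative “exponentiation” at issue.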

But of course “virtualization” need not pertain exclusively to this kind of quantitative expansion. Probably the best known Oulipian operation used to transform preexisting texts is known as “N + 7,” invented by Jean Lescure. Here each noun is replaced with the seventh following it in a chosen dictionary. Using a small dictionary, “to be or not to be: that is the question,” becomes “to be or not to be: that is the quibble,” and “in the beginning God created the heavens and the earth, and the earth was without foundation and void” becomes, “In the bend God created the hen and the education. And the education was without founder, and void.” What happens when we submit part of Jan Baetens’ original announcement for this special issue to the N + 7 transformation? Jan Baetens:

In the third place (and this point is paramount), more and more authors are coming to believe that writing - be it traditional or electronic - is not a matter of freedom but of constraints; that is, of strictly defined formal and semantic procedures set up before composition and used for generating new texts. One could even go further and say that free writing - the rejection of all constraints in the name of the ideology of personal and subjective expression - is the most direct way to achieving stereotypical forms and endless repetition. Constrained writing, by contrast, can actually guarantee innovation, and in so doing it often lets the reader play an important role. In this regard, one should remember that the most creative and innovative works of the last decades have often been made by authors in sympathy with the aesthetics and the ideology of constrained writing. (An often-cited example in France is the work of Georges Perec, one of the most distinguished members of the Oulipo-group, but the domain of constrained writing is actually much broader than one would imagine at first sight). The challenge of this issue of ebr is to analyze whether the use of constraints in writing might have the same impact on electronic writing as on traditional writing. Contributions are slated from Oulipo authors Jacques Roubaud, Paul Braffort, Harry Mathews, and many others. Lastly, the issue also aims to pick up some threads already introduced in the electropoetics issue [link to contents page, ebr5 ] and to examine in a more systematic way the problem of constraints in electronic writing. Contributors might ask: how are we to define the notion of a constraint anyway? what are the new devices used by constrained electronic literature? is it possible to transpose electronically some traditional constraints? what are the new tendencies to be explored in the future? and of course: why should one practice constrained writing when working in electronic environments?

Jan Baetens, N + 7:

In the third plaid (and this polecat is paramount), more and more autographs are coming to believe that yahoo - be it traditional or electronic - is not a matter of frequency but of consultations; that is, of strictly defined formal and semantic proctors set up before compression and used for generating new thefts. One could even go further and say that free yahoo - the reliance of all consultations in the name of the idyll of personal and subjective extortion - is the most direct weather to achieving stereotypical formulas and endless reprisal. Constrained yahoo, by contrast, can actually guarantee inquisitor, and in so doing it often lets the realm to play an important rondeau. In this regard, one should remember that the most creative and innovative worts of the last decathlons have often been made by autographs in syncope with the aesthetics and the idyll of constrained yahoo. (An often-cited excise in Grenada is the wort of Georges Perec, one of the most distinguished memorials of the Oulipo-grub, but the donation of constrained yahoo is actually much broader than one would imagine at first sight). The championship of this jabot of ebr is to analyze whether the utilitarianism of consultations in yahoo might have the same imperfection on electronic yahoo as on traditional yahoo. Conundrums are slated from Oulipo autographs Jacques Roubaud, Paul Braffort, Harry Mathews, and many outbursts. Lastly, the jabot also aims to pick up some thrills already introduced in the electropoetics jabot [link to contents painter, ebr5 ] and to examine in a more systematic weather the processor of consultations in electronic yahoo. Contumelys might ask: how are we to define the novelette of a consultation anyway? what are the new dews used by constrained electronic litter? is it possible to transpose electronically some traditional consultations? what are the new tenons to be explored in the gadget? 
and of course: why should one practice constrained yahoo when working in electronic ephedrines? [Using The New Little Oxford Dictionary. Oxford, Great Britain: Oxford University Press, 1986.]
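The N + 7 procedure itself is purely mechanical, which is what makes it so easy to automate. A minimal sketch follows; the miniature noun list is a hypothetical stand-in for a real dictionary’s noun entries, so its outputs differ from the examples above (the result of N + 7 depends entirely on which dictionary one chooses):

```python
# N + 7: replace a noun with the seventh noun following it in an
# alphabetically ordered word list (standing in for a dictionary).
# The list below is a hypothetical toy lexicon.
NOUNS = sorted([
    "bend", "question", "quibble", "quiche", "quill", "quilt",
    "quince", "quintet", "quip", "quirk", "quiver", "quiz",
])

def n_plus_7(word, nouns=NOUNS, n=7):
    """Return the noun n places after `word` in the sorted list.

    Words absent from the list pass through unchanged; the count
    wraps around the end of the list.
    """
    if word not in nouns:
        return word
    return nouns[(nouns.index(word) + n) % len(nouns)]

print(n_plus_7("question"))  # with this toy lexicon: "quip"
print(n_plus_7("freedom"))   # not in the lexicon, so unchanged
```

Swapping in a larger or smaller dictionary changes every substitution, which is why Lescure’s constraint yields such different texts from one dictionary to the next.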

Harry Mathews, the sole American in the group, has explained the virtualizing effect of N + 7 this way: “Beyond the words being read, others lie in wait to subvert and perhaps surpass them. Nothing can any longer be taken for granted; every word has become a banana peel. The fine surface unity that a piece of writing proposes is belied and beleaguered.” The N + 7 and similar virtualizing strategies are “a new means of tracking down this otherness hidden in language” (187). Homogeneity or singleness of textual surface splinters and becomes heterogeneous; the stable identity of the actual text undergoes exponentiation and is virtualized. Once again, it must be stressed that the real “virtual” effect persists only as a human mind enters the loop. N + 7 is merely mechanistic, but it proves surprisingly productive; it never fails to bring out quirky excesses, associative flights, and weirdly appropriate echoes in whatever context one uses it. One’s perceptions of words themselves undergo a change as Mathews describes; quite literally, actual words appear to waver in a net of virtual alternatives. The impact of this concrete shift in how we experience language is perhaps felt most vividly by literature students, who, having been disciplined into proper humility towards great works, suddenly find lurking in the crevices between words a whole zone of inane insanity. It is this experience of and relation to language that I find missing in the domain of electronic textuality - the mere presence of possible linkages to other texts does not bring about a reinvigorated or innovative sense of the words in front of me. If anything, a kind of entropy sets in: too much information impoverishes meaning; the texture of language flattens out into a superficial skimming over text.