Monday, May 30, 2011

It’s been two years since James Bridle produced the first volume of his life in tweets, probably the first Twitter memoir in print. There had been books of tweets before, and there have been many more since – some of them reviled, some of them inspiring a certain amount of justifiable awe, all of them conspicuously and sometimes precariously balanced on the (l)edge, the fluid boundary where the old medium ends and the new one begins.

The two towers under construction on the cover of Masha Tupitsyn’s latest book of and on film criticism are an explicit homage to Ludwig Mies van der Rohe, whose famous declaration that ‘architecture starts when you carefully put two bricks together’ serves as the fittingly aphoristic model for Tupitsyn’s approach to constructing the book. Each of the 1,200 tweets that make up the edifice of Laconia is a carefully placed brick, and the book itself, the author explains, is ‘in essence, an architecture of thinking’ (1).

At first reading I was a little put off by this introduction. I wanted the book to do the things that Tupitsyn said it would do, rather than announce it would do them. I suppose it doesn’t help that I am a compulsive reader of introductions. But I’ve since come to view those theoretical statements as the necessary foundations of Laconia, the key to its becoming an object to think with.

A book is a thing, even as it most often invites us to think of its content as abstracted knowledge, bound to its physical carrier only as a matter of historical and economic accident. A Twitter feed, on the other hand, is nothing if not an abstraction. We would never point to the server where it is stored in the form of electromagnetic patterns – if we were even able to locate it – and say ‘there it is’. Instead we call it up on our computer screen by means of a highly coded, almost ritualistic invocation, our new kind of magic. There it is. And so Laconia is also the desire ‘to make something tangible and resonant in an increasingly immaterial world’ (4).

But on one count Tupitsyn is incorrect: Laconia, as of the time of this writing, doesn’t quite exist ‘in two places at once and in two different forms’ (4). Its original instantiation, the sequential Twitter feed used to compile the book, has either become functionally inaccessible or vanished altogether. Not only is it no longer possible to find an access point to the feed itself, but you would search the Web in vain for those 1,200 tweets, except – ironically enough – as fragments of Laconia’s searchable entry in Google Books.

And so we come to that paradox that the Bridle experiment had already highlighted from the moment it came into being: namely, how the medium of print, even as it is more and more feverishly being declared obsolete, becomes a repository and keeper of native digital information, a place where bits come to be saved.

This has some further implications due to the peculiar nature of the Twitter feed, for which I’m going to gesture – with the intention of expanding on it very soon – towards the idea of a bottomless text. As a narrative that in the reading slowly unravels backwards in time, the Twitter feed is interestingly implicated in our always evolving concept of memory (think Memento rather than, say, Benjamin Button). Except one soon comes up against the medium’s own amnesia, in the form of Twitter’s footnote to most searches that ‘older tweets are temporarily unavailable’, where ‘older’ often means as little as two weeks ago. The brilliantly cryptic and elliptical wording suggests that it’s not that the information has slipped out of the archive, but rather that it is there, but cannot be retrieved for the moment. Thus the foundational myth of the age of computing – that digital information lasts forever – isn’t completely undercut (the text is still bottomless, if only in theory), but just deferred to an unspecified future, possibly the same future in which we are to reap some of the long-promised transformative social and economic benefits of the new medium.

However to evaluate Tupitsyn’s work solely as a printed book, with its comforting physicality and its rough edges, would be just as limiting, for Laconia as a rhetorical experiment does indeed exist between media, in an interstice from where it interrogates writing, and writing about film more specifically, as a heuristic process.

LACONIA doesn’t just dramatize the thinking process. It dramatizes the act of thinking through film. For me, film isn’t simply what I think about or write about, it’s the medium through which I discover and articulate what I think. (2)

In this respect the practice of daily writing and the platform that enabled this sustained process (criticism as a form of living, as Tupitsyn calls it in the dedication to film critic Robin Wood) is the book’s true subject, and Laconia’s economy of expression – itself a counterpoint to the ritual complaints that the web in general and Twitter in particular are primary producers of meaningless chatter – becomes the lens through which to observe other acts of making culture. ‘What is it that we need to say and what is it that we don’t,’ asks Tupitsyn, and ‘how much “art” do we really need’ (2)? Regular readers will know just how fond I am of both of these questions.

While the introduction is very quotable, it is more difficult for the reviewer to usefully sample the content of the book proper, for Tupitsyn’s aphorisms curiously resist being taken out of their discursive context. I thought it might be apt to offer some of them as images, in the way that Twitter conversations are often presented inside of blog posts, adding a layer of unsearchability to their notorious aversion to being archived, but these are not meant to be any more representative than any other more or less random selection. The style is less cutting than Morando Morandini’s – to mention another critic with a highly cultivated and near-legendary gift for brevity – and reminded me rather of Philip Matthews, whose Twitter feed and blog are excellent examples of the genre in both the epigrammatic and, occasionally, the long-form style. Another point of comparison might be the near-daily, ultra-minimalistic vignettes of If We Don’t, Remember Me. Thematically, the principal effects are to highlight form, politics and gender, along the axis of the key transformations in the cinema of the last four decades. And it’s a text that hangs together surprisingly well, off the web, pruned of the retweets and the mentions and the interjections, that is to say of the social in the social network. In other words it’s a very good read, which is something I wouldn’t want to get lost amidst all the theorisin’.

We’ve heard this week an excellent discussion by Zeynep Tufekci on the re-emergence of oral psychodynamics in social media, framed as a response to Bill Keller’s invective contra Twitter in The New York Times. It was an effective take-down, but Tupitsyn’s book suggests that social media are every bit as much about reasserting the primacy of literacy as they are about heralding secondary forms of orality, and reminds us that Twitter is also a vast real-time writing experiment. Explore on any given day a top trending topic without actual news value (say, #junewish) and you’ll find hundreds of variations on a theme, exercises in style and rhetoric through which the respective authors discover and articulate what they think. It’s nothing less than criticism of life as a form of living, and it doesn’t always need to become a book for us to say that it has value.

In keeping with the mandate to advertise my adventures in publishing outside of this place, the Bill Direen-edited Brief #42 is out, and I can be read therein alongside the likes of Scott Hamilton, Jack Ross and Pip Adam (Pip Adam!).

Monday, May 23, 2011

Isolator is a small menu bar application that helps you concentrate. When you're working on a document, and don't want to be distracted, turn on Isolator. It will cover up your desktop and all the icons on it, as well as the windows of all your other applications, so you can concentrate on the task in hand.

I like very much the idea of a piece of software that puts everything out of focus except for the document that you’re working on. Equipped also with a pair of noise-cancelling headphones, one could really get some work done. Except I think that very soon I would start obsessing about those blurry symbols, those muffled sounds. I’d want to know what goes on in the space at the edge of my attention, precisely because it has been artificially suppressed. I’d want to know what it is that I’m concealing from myself.

But then Nicholas Carr thinks that I have a problem. The internet is impairing my ability to concentrate and therefore to think in linear, analytical fashion. Always craving the stimulation of a thousand information sources, I’m no longer able to read a single text for long, uninterrupted periods of time, and engage deeply with its subject matter. He has a name for the place I inhabit: the shallows.

It is not an altogether new idea. Besides Carr’s own intensely debated 2008 article for The Atlantic entitled Is Google Making Us Stupid?, one could cite Maryanne Wolf’s research on the reading brain in Proust and the Squid, as well as Mark Fisher’s observations in Capitalist Realism on the difficulty some of his students have in coping with substantive written texts, later expanded to include some symptoms of his own:

I know that I would be more productive (and less twitchily dissatisfied) if I could partially withdraw from cyberspace, where much of my activity - or rather interpassivity - involves opening up multiple windows and pathetically cycling through twitter and email for updates, like a lab rat waiting for another hit.

This chimes very precisely with some of Stephen Judd’s recent self-observations, which also employ the image of the lab rat (as indeed does Carr on page 117 of The Shallows). Perhaps most troubling of all is Fisher’s reporting in the blog post cited above of the case of the father of one of his students, who told him in despair ‘that he had to turn off the electricity at night in order to get her to go to sleep.’ Here the search for a non-metaphorical isolation results in a highly symbolic gesture designed to make the information age and modernity itself go away, if only for the space of one night.

There are many more examples one could bring up, but I think that these are enough to suggest that the stuff is real. We may not all quite feel the same way as Carr, Fisher or Judd, but self-reported experience isn’t without merit or value, and besides I would suggest that all of us may at least have an inkling of what they are talking about. I know for instance that I have been quite deliberate in never fully embracing Twitter, precisely because I am wary of the consequences of opening yet another channel. For much the same reason, you won’t see me carry a smartphone any time soon. And my writing in this space too is regulated by a discipline that seeks to counter some of the pressures of the medium, especially the one to speak often. I feel the need not only to switch off, but also to be less intensely connected generally; and even if this reticence applied to a tiny minority of internet users, it wouldn’t make it any less real or meaningful or worthy of comment and analysis.

For these reasons, as in the case of Jaron Lanier’s You Are Not a Gadget, I was initially well disposed towards Carr’s book; and yet in this instance too I was ultimately frustrated by it.

I happen to think that the manner in which we articulate our critiques of digital ideology is going to be crucial not only for how we grow cyberspace and resist its corporatisation, but also for our politics. That’s why we need analyses that enrich our understanding of its fluid and tangled phenomena, as opposed to reducing historical, technological and social change to a set of comforting and mutually exclusive binaries.

In Carr’s case, it’s the deep vs. shallow dichotomy. The literacy promoted by print culture, following Maryanne Wolf, is where one finds depth, that is to say, the means for long-time learning and reasoned, linear argumentation. Therefore the internet must be the place that leads to forgetfulness, shallow thinking and muddled logic due to the fragmented, constantly updating, forcefully distracting nature of its un-literacy.

The two prongs of the argument are Marshall McLuhan and neuroscience, and they operate in seamless unison. So whereas McLuhan understood the transforming power of media to operate primarily at the level of epistemology, not neural circuitry, Carr claims that the taking hold of the ‘new intellectual ethic’ of the Internet is synonymous with the rerouting of the pathways in our brains (pages 3 and 77). Indeed in a couple of key passages I half expected Mr McLuhan to walk into the shot and exclaim ‘You know nothing of my work!’ Not because those neurobiological implications are wholly absent in Understanding Media – whose original subtitle after all was ‘The Extensions of Man’ – but because they are secondary to the notion of media as metaphors that organise thought.

Having framed as ‘the crucial question’ what science can ‘tell us about the actual effects that Internet use is having on the way our minds work’ (115), Carr finds that yes, of course, science confirms the hunch: experiments have shown heavy internet use to shift the areas of activity in the brain, and to reinforce certain processing functions at the expense of others. While the breadth of experimental evidence is impressive, Carr never interrogates the nature of the data, nor does he question the researchers’ assumptions as to what constitutes comprehension or learning. “Studies show…” is his default, un-nuanced position. This is also true of the experiments that may give us a little pause, such as the one suggesting that spending time immersed in nature can sharpen our powers of concentration, but that the same benefit can also be gained by staring at pictures of nature from the comfort of one’s own home. (With the possible, unstated implication that so long as you download some sort of bucolic screensaver, you’re good to go.)

When Carr turns his attention to the epistemological question, that is to say how media – new and old – are implicated in how a society constructs and expresses its ideas about truth, the conclusions are less clear-cut and the exposition a little, well, shallow, leaving one to wonder whether the author came to Cartesian dualism by way of Wikipedia, or about the extent of the deep thinking that underlies some of his central claims. Thus for instance the proposition that

[w]hen people start debating (as they always do) whether the medium's effects are good or bad, it's the content they wrestle over (2)

would appear to be contradicted by Carr's requisite, diligent and extensive treatment of Plato’s argument against writing in the Phaedrus – the granddaddy of all debates on the effect of media – which is in fact preoccupied exclusively with form. Similarly, the claim that ‘[t]he intellectual ethic of a technology is rarely recognized by its inventors’ (45) is pointedly belied as far as the internet is concerned by the towering figure of Norbert Wiener, the father of cybernetics, as well as that of Tim Berners-Lee.

Other pronouncements are little more than irritating, unargued clichés. Thus we are informed that ‘[a]s social concerns override literary ones, writers seem fated to eschew virtuosity and experimentation in favor of a bland but immediately accessible style […]’, and thus ‘[w]riting will become a means for recording chatter’ (107), or that ‘[o]ur indulgence in the pleasures of informality and immediacy has led to a narrowing of expressiveness and a loss of eloquence’ (108), something that no doubt will come as a surprise to your friends who are honing their aphoristic skills on Twitter, or their broader array of rhetorical skills on blogs and discussion forums.

This last point in fact is a key to understanding the limits of the book’s perspective, for Carr almost always takes the media consumer to be a reader – as is the case with the near totality of people in the medium of print – as opposed to a writer; whereas in fact one of the most notable features of cyberspace is that it makes writers of many if not most of its users. And so in order to make any sort of informed, useful statement on the relative depth or shallowness of the new medium, one would have to evaluate their literacy not just in terms of reading, but of writing as well, and seriously examine the kinds of knowledge produced across all media over the last two decades, a task in which science is unlikely to be able to supply the necessary value judgments.

What’s required is cultural work of the most serious and pressing kind, and whose outcome is difficult to predict. We might find that our twitchy interpassivity has insinuated itself into the deep language structures of the web, curtailing our capacity for expression; or that new forms of rhetoric have begun to emerge to match the repackaging of the world’s pre-digital knowledge into a single, infinitely searchable platform. In the meantime, it pays to heed the symptoms reported by Carr and the others, while we still can, before they become an invisible part of the experience of being awake in a world that is digital, and not dismiss their subjective experience, that feeling of vertigo: it may yet hold a key to understanding and designing out some of the most insidious aspects of the new medium, and therefore of our new selves.

Nicholas Carr. The Shallows: How the internet is changing the way we think, read and remember. London: Atlantic, 2010.

Monday, May 16, 2011

The local section of the newspaper that I picked up before boarding my flight back to New Zealand includes a full four pages of mortgagee sales published by order of the tribunals of the court of appeal of Milan. Each of those adverts – about the two-bedroom apartment in via San Giovanni Bosco complete with cellar and garage, or the basement workshop in via Francesco Brioschi – tells a story that clashes with the image of Milan as the industrious and prosperous capital of a region that is firmly coupled to the locomotive of Europe; a region that combines the creativity and capacity to innovate associated with Italy-the-brand with the efficiency and raw GDP power of Germany.

Perhaps Milan has been all of those things, but it is also, and altogether more transparently, a city that banks on its past and continues to build as if it were booming in the hope that the people will come and vindicate (that is to say, rescue) that narrative.

The streets near my mother's house have been disembowelled to extend the storm-water drains and the sewers needed by CityLife, the vast new development next door, on the grounds of the old trade exhibition complex to which the suburb still owes its name, Milano Fiera. With a typically Lombard entrepreneurial haste, at the end of Mum's road they're building the fifth line of the underground while the fourth still languishes in its planning stages. Whereas the word factory survives mostly in its metaphorical sense, the city is a construction site literally as well as figuratively: and not just the place where we plan for the Expo of 2015 or erect ever steeper walls against outsiders.

Lega Nord is one of the key forces in government, both nationally and locally. The poster above, whose slogan I'm sure I don't need to translate for you, is the only piece of electoral advertising that they have put out for the mayoral election scheduled (at the time of my writing this) for today and tomorrow. Nothing about the business of governing; nothing about the party's views on economic development or social policy. Two words are the summation of an entire worldview: we want to keep being us, without them.

It is not just the wilful ignorance of history that staggers – the Milan of the economic miracle, of Rocco and His Brothers, was a city whose wealth largely rested on integrating and exploiting immigrants and their labour – but also that this message is not only tolerated by the community, but can be reliably expected to translate into votes. Just how many, and for just how long, are both pertinent questions, but there can be little doubt that the strategy will work this time too, and provide the Right-wing coalition with the votes it needs to retain power (probably) or, failing that, to lose while still splitting the vote right down the middle. Meaning that even in the most optimistic of scenarios, Milan will remain the capital of intolerance that I discussed here and reviled here.

Nothing that I have seen on this last trip persuades me that the city has earned the right to reject that title quite yet. But the experience of witnessing the last two weeks of a crucial campaign fought concurrently on the national stage was captivating nonetheless. And since I happened to do a lot of walking – which is something I still enjoy a great deal in the old hometown – I ended up with a rather large collection of images, some of which I want to share with you.

The theme of immigration and multiculturalism ploughed into by the Lega Nord poster was but one of the dominant threads, which also found expression in the occasionally puzzling offerings of prominent journalist and reformed Arab person Magdi Allam, for whom the aim of making the city once again a capital of integration

means that we should never see again images like this one:

Manolo Lusetti of Berlusconi's party won the trophy for the crudest signifier of a candidate's religious affiliation.

While the Democratic Party's campaign for the candidate of the Left-wing coalition hung the slogan Long live the Milan that is not afraid above the head of the great late-18th century reformer Cesare Beccaria, author of On Crimes and Punishments.

However I was struck most of all by what I'm tempted to call the material semiotics of advertising, meaning not just the political messages themselves but also their physical layering and how they were interfered with.

It was not infrequent for a single one of the special hoardings scattered in their hundreds around the city to carry as many as a dozen posters, one covering the other and producing the millefeuille effect pictured above. Elsewhere the posters were not as neatly superimposed but overlapped one another, mixing faces and slogans and parties to the various coalitions.

Other times, angry vandalism broke down the messages beyond recognition,

or created a Russian doll-like arrangement of Berlusconi heads, featured on posters for and against,

or regaled the passer-by with a wryly nihilistic commentary on the state of the political conversation.

Yet other interventions played more subtly with the candidates' image, as in the case of this poster for Giulio Gallera, council group leader for Berlusconi's party (and who is one day going to be remembered for having gone to school with me), on which some witty soul placed a sticker declining the first person singular of the verb 'to mafia'. I'm not going to lie to you: I rather enjoyed this.

But the single most emblematic aspect of this material semiotics was to me the occasional surfacing, stuck to the metal of the hoardings themselves, of fragments of posters from other elections, glued so well as to be impervious to ripping either by vandals or the election officials who retire the displays at the end of each campaign. It's chiefly in that form that the old symbol of the communist party, whose gradual disappearance I touched upon here and especially here, makes an understated, (sub)liminal reappearance, as if to remind us of the language of another politics, not all that far removed in chronological time yet officially consigned to the status of archaeological detritus and simultaneously of ghost to be evoked whenever it behooves one to demonise the least pusillanimous factions of the Left.

It is because I am convinced that progressivism in Italy needs to connect with that erstwhile mass movement and its aspirations that I was pleased when Giuliano Pisapia won the nomination as mayoral candidate for a broad coalition of the Left against the incumbent Mayor, and looked forward to hearing Nichi Vendola, leader of Sinistra, Ecologia e Libertà – the party that more than any other lays claim to that lineage – speak at the Arco della Pace a week ago in support of Pisapia.

That the supposedly conservative electorate of the city chose a candidate from the supposedly unelectable leftmost fringes of our party system was pleasing enough. That the coalition centred around that candidacy has forced Letizia Moratti, the incumbent, to spend 12 million euros on a re-election campaign that she ought to have barely needed to show up for – thus disproving those ideas about who is and isn't electable on the left of centre – doubly so. But I was especially impressed with how Vendola spoke in that historic venue, where I had heard Sardinian communist leader Berlinguer address a much larger crowd just shy of thirty years ago, when I was a child. Another man from the south who had come to Milan, the city that already fancied itself back then as 'Mittel-European', as well as the guarantor and custodian of the country's modernity itself, to speak of an alternative and in some respects antithetical notion of progress.

Vendola spoke of a Mediterranean that is not just a watery graveyard but also a political laboratory, and articulated a reformist project based on integration and environmentalism, on equality and the fundamental interrogation of the wealth of our nation and the value of our work. It was the most lucid speech I have personally heard an acting politician deliver in years. This man from the south, this openly gay man from the supposedly retrograde, close-minded south, currently serving his second term as governor of Puglia – one of the regions where the war that we wage against those poorer than us is at its most visible – came to the capital of Berlusconismo, with its mix of miserly self-interest and rampant xenophobia, and gave us a lesson on how solidarity is central to the notion of what it means to have an economy, and on how culture and even ideas about beauty intersect with the political in fundamental ways. All rhetorical flourish aside (not that I had a problem with that either), it was enormously pleasing to hear a national politician unafraid to tackle sexism or rape culture, or to come out in favour of general strikes. We have been told for a very long time, and not just in Italy, that those are third rails and therein lies political death.

I don't think that Milan will fall, not quite yet, but if it does, it will be a turning of the tide, a moment to be seized. With all my well-documented anguish for the present and pessimism for the immediate political future of my country of birth, I have never completely lost faith in our capacity to be some day that political laboratory, and fashion workable alternatives to what Vendola calls the 'feral globalisation' that is the unassailable paradigm of this stage of capitalism. We have a history of asking the right questions – a history whose signs can sometimes be recovered by scratching away at those hoardings.

PS I wrote this on a plane back to New Zealand via Singapore, at the same time as Francesca at Buchi nella sabbia was writing this lovely and absurdly flattering piece about this blog - to which, as I am currently in transit, I cannot muster a response other than linking gratefully to it. I was rather moved, not least by seeing one of my posts translated into my own language. Wonderful stuff.

An update is also in order: the results are in and they are looking very good - the Left is within sight of a victory that could be quite decisive.

Tuesday, May 10, 2011

I was off-planet when they killed bin Laden. It was a long flight, granted, yet it seemed extraordinary to me that by the time I landed and caught up with the news, the slain al Qaeda leader had already been buried at sea. I had missed the developing story, just like on 9/11 when, due to the time difference, my partner and I woke up to a terse, horrifying bulletin once everything had already happened.

On this occasion, the news seemed to have been packaged as conspiracy theory in the making: a swift killing followed by an even swifter disposal of the body. Details already shrouded in the reticence and obfuscation of the secret-keepers seemed to overlap and contradict one another. Was bin Laden armed? Did he struggle or return fire? First, yes, then, no. Obama had referred repeatedly in his statement to the nation to his directive that bin Laden should be 'captured or killed'. It seemed hard to credit the thought that the former option had seriously been entertained. More so when it emerged that defence secretary Gates' proposal for the operation was to rain down a few dozen 2,000-pound bombs on the compound, thus 'bringing to justice' not only anybody who might be living with bin Laden, but his neighbours as well.

That the administration saw fit to release this detail of the preparations speaks volumes about the level of acceptance not only of summary state executions, but of collateral civilian deaths as well. However I am intrigued particularly by the manner in which bin Laden's body was disposed of, and what it says about the culture in which this event took place.

According to the most widely credited timeline, as little as nine hours elapsed between the moment the US Navy Seals entered bin Laden's compound and the time when his body was washed aboard the USS Carl Vinson in preparation for its burial at sea. In the meantime, depending on which wording of the official accounts you follow, either the American authorities unsuccessfully shopped the body around to a series of Muslim countries, or they came to the conclusion that such efforts would have been in vain. The latter explanation would indicate that the burial arrangements were planned in advance, which seems rather more plausible to me than the alternative. But that there is a narrative out there about officials scrambling to find a taker for the corpse and resolving to jettison it from an aircraft carrier is not altogether insignificant, for it speaks of a panic which undercuts the clinical precision of the operation and the moral certainty of its architects.

The need for such a speedy burial at sea was supposedly two-fold: to adhere to the Islamic custom according to which the deceased must be buried within 24 hours of the time of death; and to avoid the burial place becoming a shrine for the supporters of bin Laden and his ideas. The administration's insistence that everything was done according to Islamic tradition (whatever that might mean – it seems as vague a concept to me as 'Christian tradition') has been disputed, and some commentators have pointed out that an unmarked grave, as well as being consistent with custom, would have done the trick, shrine-wise. However there is at least one notable historical precedent here. Benito Mussolini was buried in secret in plot 384 of the Cimitero Maggiore in Milan following his summary execution and public display in April of 1945, but news soon spread and the plot did indeed become a meeting place for nostalgics of the regime, until in 1946 the body was stolen by three members of the newly formed Fascist Democratic Party. The thieves eluded capture for two weeks, during which time the dictator's remains acquired in the popular press the name of il salmone, a play on 'big corpse' and 'salmon'. Finally il salmone was returned to the family, who buried it in Mussolini's birthplace of Predappio, nowadays a tourist destination in which the sale of fascist-themed souvenirs wasn't banned until 2009.

The vicissitudes of Hitler's body are also instructive. Hastily and not altogether successfully cremated by his own men so that they wouldn't become a trophy for the Soviets, the Führer's remains were seized and repeatedly exhumed and interred by SMERSH before the eventual burial in an unmarked grave in Magdeburg. In 1970, they were exhumed again, along with those of Goebbels and his family, then thoroughly crushed and scattered in the Biederitz river, while a fragment of his skull, whose authenticity has been repeatedly questioned, was displayed in 2000 at the Federal Archives in Moscow for the benefit of the tourist/voyeur. Also of interest for today's proceedings is the fate of Göring et al., meaning the nine other high officers of the Nazi regime hanged at Nuremberg whose ashes, as Bill Palmer reminded us last week, were scattered in the Conwentzbach river, a tributary of the Isar, under the authority of the Allied prosecutors.

These river rituals were salmon-proof. They were designed to remove all traces of form and materiality, to erase the concrete presence of those men from history in a manner that bore echoes of the Final Solution itself.

Post-war Europe was built also on the scattering of those ashes. It was the manner of our unbecoming: we exorcised the symbols as well as the remains of our monsters. However, I think we would do well to interrogate this magical thinking – did we really think that the waters of the Biederitz and the Conwentzbach could dilute the essence of the Nazis, of Nazism? – and the extent to which it lives on in this latest action, casting upon it a shadow not of immorality, but of unreason.

Water and the sea have a long and powerfully symbolic association with memory and death in Western culture. It was the river Lethe that made the spirits of the dead forgetful in their journey into Hades, and washed away the sins of the residents of Purgatory in Dante's Christian reinvention. It was the Atlantic that drowned Ulysses and his crew as they set forth in search of virtue and knowledge, the sea that lapped at Shakespeare's pebbled shore of memory. As Jean Delumeau reports in his seminal study of fear in Western culture, for some fifteen centuries a belief was held in Europe that the souls of those who died at sea would be condemned to wander until the Church intervened with the appropriate rituals. These included, as late as the mid-twentieth century, the placing in the house of the deceased of a small cross made of wax and covered with a white cloth, as well as other simulacra designed to replace the body, the shroud, the coffin and the tomb. And of course the cradle of the West, the Mediterranean, is today, in the words of Nichi Vendola, the non-metaphoric 'liquid grave' of the over 16,000 largely unnamed migrants who failed to safely reach the shores of Empire.

And so when we say that we buried Osama bin Laden at sea in a manner that was respectful of his culture, we should bear in mind these aspects of our own: the complex layers of understanding of what happens after we die, and of how our remains and the manner of their disposal are seen to embody our continued social existence, the memory of us. In their carefully planned operation, which was not entirely devoid of ritual, the American forces determined that bin Laden should be buried at sea and that the location itself should be kept secret, as if he were a monster (our monster) that needed to be killed twice in order to properly erase his memory and prevent his return.

We gave him an unmarked watery grave: just as another boatload of refugees from another war against a former friend of the West narrowly escaped theirs.

Monday, May 2, 2011

When Antonio Gramsci chose the title for his short-lived periodical La città futura, the future city, he wasn’t actually referring to urban development, but using the word city as a synonym for society. This might strike us as an odd choice, given that Italy in 1917 was still largely rural, but it is consistent with the etymology of the word, which – as it does in English – derives from civitas, meaning ‘a community of citizens’, as opposed to the older Latin noun urbs, from which we get urban and which means the city proper. Nowadays in most Western countries – and many non-Western ones as well – the synonymity is all but an empirical fact, and so to imagine a future city means also to imagine what life will look like thirty, fifty, a hundred years from now.

Auckland 2000, by Bernard Roundhill (1956)

I am not nearly well-read enough to state this with confidence, but I suspect that in Western culture these imaginings were a lot more common in the first half of the last century than they have been ever since. A read-through of the history-of-the-future section at Ptak Science Books makes this hunch seem more plausible, with its wonderful gallery of inspired, inspiring and occasionally weird visions: floating suburbs, giant blocks of skyscrapers tightly packed together, railroads coiling into the sky; but so does reminding oneself of the great social housing projects of that era, which held the promise of a fundamental reinvention of the city and of the attendant notions of community and citizenship without recourse to science-fiction tropes.

Whether the shrivelling of these visions means that we are resigned to the classic 20th century grid, and to cities that sprawl either horizontally or vertically, but always subject primarily to the flow not of people but of motor cars; or whether it is a subset of a more general incapacity to imagine social, political and economic alternatives to the status quo, I do not know (although projecting the status quo into the future produces ruptures of its own – more on which at the end of this post). But especially in New Zealand as we contemplate, for now as spectators, the reconstruction of Christchurch, as well as public investments in the economy that are almost exclusively geared towards roading, this lack of imagination is brought into much sharper relief.

In Christchurch of course it was nature that was the philistine, nature that destroyed or condemned places of memory, history and character, causing the loss of ‘Christchurchness’ wonderfully described by Cheryl Bernstein. But there remains the significant challenge of what to restore and what to build anew, and according to which vision of the future city. Even at this early stage it’s hard to feel optimistic about the eventual outcome of this process, what with a government for whom the word ‘progress’ is synonymous with ‘more roads’ (a historical obsession already lamented over half a century ago by WB Sutch), and a transport agency with a penchant for bullying local administrations and seemingly unable to leverage its multi-billion-dollar budget into the ability to locate Patricia Grace’s number in the phone book. A bumbling, piecemeal development modelled on conservative views of economics and social relations is what recent history teaches us we should expect, and it will likely take a significant effort of imagination and persuasion to change that particular script.

However – and again, this is not to say that the phenomenon is peculiar to New Zealand – reading Owen Hatherley’s Guide to the New Ruins of Great Britain (now available in trawl form) is also an education in the failure of the age to produce liveable cities – a failure that contrasts with the neurotic sophistication of our personal, designed lifestyles: our gadgets, our clothing, our homes, for which the market is flooded with solutions. Whereas the city, and, by extension, society, remains a problem. Thus your home becomes an escape (if suburban/exurban) or a retreat (if you live downtown).

The digital networks are another such escape. Or rather, the place where work, society, lifestyle and the new architecture of the world’s knowledge come together. They are a place inside the home, or increasingly they permeate the city and its hubs (the airport, the library) thanks to the omnipresence of wi-fi. But what I want to put forward is that in fact the internet is the opposite of the city: a place with no geographic coordinates that connects us instantly with the close-by and the remote alike, and where it makes no sense to say that you are neighbours with somebody; a place without roads – the fortunes of the highway metaphor notwithstanding – or a road code or utilities; a place where community can mean any arbitrary grouping of people, and where the idea of citizenship isn’t bound by anything other than the means of accessing the space. In fact the internet can become a city only when it is explicitly imagined as a city (as in Second Life, or Neal Stephenson’s Metaverse).

Yet the problem of how to do future-oriented architecture applies to both the internet and the city equally. Both of these places can and in fact should be where our social imagination primarily takes shape, but instead, it seems to me, both are preoccupied more often with subsuming the past into the present. Don’t just think postmodernism. Think of the perpetual now of Venice, or of how the humble town of Foxton recycles its heritage simultaneously as lived social history and as historical fiction to be consumed by the tourist; and on the internet side, think of the remarkable popularity of sites like How to Be a Retronaut, where the past is stripped of its context and endlessly mined for its reserves of timeless fashion iconicity or as an object of ironic reinvention. And while serious urban speculation is certainly alive on the web – as a layman I find BLDGBLOG of particular use – it hasn’t spread into the popular imaginary to nearly the same extent as these past-oriented mash-ups and games.

Poster for the inauguration of the second line of the Milan underground, 1969

When I was a teenager in Milan the city was plastered for several years with posters informing us that the third line of the underground transport network was being built. La linea 3 avanza, the third line marches on, was the slogan. It was the only instance I can recall of a public, visible statement on the future of the city, and it referred to something that was going on below street level. It’s not that change wasn’t happening – it was, and it was generally awful – but it strikes me now that we lacked both the vocabulary to describe it and the lucidity to oppose it, even as we periodically steeled ourselves to defend the centri sociali from harassment and closure by the police. And so it took three decades for the municipality to replace the old, disused Alfa Romeo factory next to the house where I grew up with the ghastly exhibition centre that still carries its name, Il Portello, and yet our opposition to that civically derelict project was as confused and disorganised as its planning.

Perhaps an answer to these failures of the imagination is to write the future onto the city itself. There are a couple of examples that have recently intrigued me. Firstly, the tsunami safety demarcation lines that the Wellington City Council put in place in Island Bay earlier this year, just one week before the earthquake in Christchurch and eight weeks before the devastating tsunami in Japan.

The lines were met with some local opposition, although the fear that they might affect property prices soon gave way to a very stark sense of their usefulness to the community. Besides its practical value, however, the project is remarkable for how it allows you to visualise a possible future of the city: on one side of the lines, safety and the preservation of the historical character of one of our most picturesque suburbs; on the other, images of destruction that are sadly very familiar to us at this time. But even more evocative and powerful were the lines projected last year onto the cityscape of Bristol by Chris Bodle in his Watermarks Project.

Bodle’s idea was to concretely visualise the high tide and flood watermarks that are predicted to occur should the entire Greenland ice sheet melt, based on the artist’s key insight that

[t]he future of our cities and landscapes and our responses to rising sea levels should not just [be] left to scientists, politicians, engineers and the built environment professions, but emerge from as wide a base as possible with participation and involvement from all sections of the wider community. Ultimately the mitigation and adaptation measures will be social and cultural as much as scientific and technical.

I don’t think it’s an undue leap to suggest that such concrete imaginings would be of great value also to visualise alternative futures that don’t hinge on the likely effects of natural catastrophes. Imagine if we could model in such a direct and accessible way the outline of downtown Christchurch ten years from now, and the kind of public meetings and discussions that could be had on the very sites that are due to be restored, rebuilt or transformed. The internet, for its part, needs more virtual spaces where cities can be manipulated and played with in this way, tools like HyperCities, except designed to allow for the future to be crowdsourced, and not just the past.