Monday, April 25, 2011

There have been a number of reviews of Jaron Lanier’s You Are Not a Gadget, but none that I have come across has included a bibliographic note specifying or describing the physical aspect of the reviewed copy. So I’m going to buck the trend and declare outright that I’m about to review the 2010 Allen Lane paperback edition of the book, which differs from others released in other countries and looks like this.

You can probably see already why I’ve made this distinction. I’m going to further allow you to visualise this object by noting the power, headphone and USB ports on its spine,

as well as the fact that the ISBN and barcode are printed on a product badge such as those commonly found on consumer electronics products.

Finally, turning back to the front cover, I will note that the book appears to be switched on.

You can imagine it sitting on a bookshelf after you’re done with it, quietly humming away, its battery slowly depleting. Of course we have encountered buzzing books made of paper before. This is nothing new. But the idea implicit in the design is that the contents of the book might some day become exhausted, and cease to have human meaning, reverting to recyclable inert matter or mute space-filler. Which stands to reason, if you think about the speed at which (some) books about the internet become obsolete. But surely Lanier hopes for his invective to last longer than that – at least as long as one of Neil Postman’s would be my guess. So why did he approve this preposterous design? I’m going to have to come back to this later in the review.

I should also state outright that I wanted this book to be better, but I’ll take what I can get because I am strongly convinced that we need more books like this: popular, accessible critiques of digital ideology, written engagingly by somebody who knows his way around the technology and the culture. And while You Are Not a Gadget is weak at times – notably in its apparent inability to conceive of the political – it also contains a number of valuable and challenging statements, roughly defined as ‘the ones that I agree with’.

You Are Not a Gadget’s main contention is that web 2.0 is undoing the progress of the first stage of the World Wide Web, reducing personhood and expression by imposing an ever narrowing range of social and knowledge templates and the corporatist logic of companies like Facebook and Google. These developments, Lanier tells us, are consistent with the ‘cybernetic totalism’ espoused by many key players in Silicon Valley, according to which the internet (or, if you prefer, the hive mind that is coextensive with it in Lanier’s analysis) is about to achieve autonomous sentience, if it hasn’t already, and at any rate it is an entity greater, wiser and no less worthy of rights than the sum of its parts – which is to say, us.

By way of example of how self, meaning and expression are reduced as a result of this totalism and the deployment of its preferred technologies and designs, Lanier cites amongst other things the process of identity construction via a series of drop-down menus involved in creating a Facebook account, the ‘lock-in’ of MIDI as the standard for encoding music digitally – in spite of its inability to adequately represent the world of sound in the gaps between its discrete steps – and, most importantly of all, the process whereby experience is transformed (alienated) into information, and yet that information is regarded as functionally equivalent to experience. You might recall me making a similar argument on this blog from time to time: namely, that by eliding the difference between computer memory and the particular subset of knowledge that is made up of personal, collective and historical memory, digital ideology is claiming guardianship over one of the most fundamental aspects of social communication; and furthermore, insofar as memory plays a significant role in defining personhood, that this move creates a pattern of exclusion whereby only the people whose recorded experiences are compatible with the principal digital formats get to remember stuff and be remembered themselves.
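The MIDI point can be made concrete with a toy sketch of my own (not Lanier's, and not MIDI's full machinery – pitch-bend messages can partially compensate for what follows): MIDI represents pitch as 128 discrete note numbers, so any frequency that falls between two semitone steps is rounded to the nearest one, and the difference simply has no representation.

```python
import math

def freq_to_midi(freq_hz):
    """Quantise a frequency to the nearest MIDI note number (A4 = 440 Hz = note 69)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def midi_to_freq(note):
    """Return the nominal equal-temperament frequency of a MIDI note number."""
    return 440.0 * 2 ** ((note - 69) / 12)

# A pitch of 445 Hz – roughly a fifth of a semitone sharp of A4 – falls
# into the gap between two of MIDI's discrete steps. Encoding rounds it
# down to note 69; decoding gives back exactly 440 Hz. The expressive
# detail in between is lost in the round trip.
note = freq_to_midi(445.0)      # -> 69
recovered = midi_to_freq(note)  # -> 440.0
```

That lossy round trip, multiplied across every parameter of a performance, is the gap between discrete steps that Lanier is pointing at.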

It is in order to highlight those particular fissures that I wrote firstly my dissertation and secondly this blog, so obviously Lanier gets no quarrel from me on those particular points. But some of you may also recall my advocating from time to time a gentle resistance to certain biases of the medium – such as the ease of posting on blogging platforms, which encourages instant commenting, or the pressure to speak quickly and often on Twitter – and here too I was pleased to see the author advise similar strategies. To wit, from a longer list on page 21:

- Post a video once in a while that took you one hundred times more time to create than it takes to view.
- Write a blog post that took weeks of reflection before you heard the inner voice that needed to come out.
- If you are twittering, innovate in order to find a way to describe your internal state instead of trivial external events, to avoid the creeping danger of believing that objectively described events define you, as they would define a machine.

These hints may seem a little facile, if not downright paternalistic – I believe I can in fact almost physically hear some of you cringing – but I think that they have some validity if for no other reason than that I have personally found them of use. Not being much of a coder, and a relatively late adopter to boot, I have benefited significantly from the great immediacy and usability of web 2.0 tools, including, yes, Blogger, Twitter, Facebook and Wikipedia, each of which undeniably has its uses, but I’ve also been wary, as many people are, of how collectively they construct us as subjects. Pushing back, or as Lanier puts it ‘resist[ing] the easy grooves they guide you into’ (22), involves developing valuable forms of discipline and has the added benefit of denaturalising the medium, rather than continuing to operate under the very tempting assumption that its way of doing things is the only way of doing things. That is what Lanier means by lock-in, of which the file as a unit of meaningful information is perhaps the most apt example: there could be – or perhaps could have been, at this late stage – operating systems not based on the manipulation of files, and the difference could well affect substantively the way we conceptualise information itself. But the file is simply all there is, and it’s hard to even imagine an alternative.

The targets of Lanier’s critique that are likely to cause the most surprise amongst people not familiar with his work as a columnist are Wikipedia, the open culture/Creative Commons world, the Linux community and the vast majority of peer-to-peer filesharing, all of which are variously charged with advocating the erasure of context, authorship and point of view, thus pureeing us into the hive mind. The case of Wikipedia, for which Lanier evokes the idea of the Oracle Illusion, ‘in which knowledge of the human authorship of a text is suppressed in order to give the text superhuman validity’ (32), is a fine example of how a useful tool becomes all that there is simply by virtue of its extraordinary convenience. And the section on file-sharing and the business models of journalism and the creative industries, while punctuated with some questionable statements, nonetheless makes at least one challenging point:

If you want to know what's really going on in a society or ideology, follow the money. If money is flowing to advertising instead of musicians, journalists, and artists, then a society is more concerned with manipulation than truth or beauty. […] The combination of hive mind and advertising has resulted in a new kind of social contract. The basic idea of this contract is that authors, journalists, musicians, and artists are encouraged to treat the fruits of their intellects and imaginations as fragments to be given without pay to the hive mind. Reciprocity takes the form of self-promotion. Culture is to become precisely nothing but advertising. [83]

I tried to tease out some aspects of this last week with regard to the experience of seeing a particular post from this blog liked, linked and shared by others, and suggested that these actions can also be a form of appropriation, compensated with the universal currency of reputation-enhancing promotion. Rather more bleakly, Lanier claims in the preface that the reactions to his book ‘will repeatedly degenerate into mindless chains of anonymous insults and inarticulate controversies’ (ix), implying that his argument will be not only appropriated but in fact picked apart and perverted, and nothing good will come of the encounter of the book with the hive mind.

This however is quite problematic. For all of his readiness to dismiss internet forums as a viable vehicle of debate and the exchange of ideas, Lanier doesn’t argue convincingly or in fact at all that conversations in other media and contexts are any more productive, which situates the book in a perplexing rhetorical vacuum. Who exactly is Lanier trying to persuade, and how?

Failure to reflect not only on print as a medium, but on the recourse to print for this particular critical work at this particular time, weakens Lanier’s position. After all, one could contend that books are also carriers of information that is alienated experience, and that most of them are just as impoverished from a design point of view and suffering from at least as many lock-ins – such as (for us) the Latin script, or the length under which a book fails to qualify as a proper book – as your average template-based blog or social web platform. In fact, as I noted above, this particular book is available in many different editions with a number of different covers and graphic presentations, but reviewers don’t bother to mention this because they regard the form as being immaterial to the content. I happen to think that they are wrong. In replicating the appearance of an early Sony eBook reader, the Australian paperback edition of You Are Not a Gadget in fact undercuts Lanier’s critique. There is nothing ironic about the navigation buttons and the communication ports; on the contrary, it is precisely the kind of mashup of forms that the author finds so lamentable and ubiquitous a feature of web 2.0. Even more unfortunate was the decision to blanket the third and fifth pages of the book, before Lanier gets down to business, with a sequence of binary digits. Not just because of how unimaginative it is, or how pointedly (albeit, I must assume, unwittingly) it underscores the author’s preoccupation – expressed in the preface – that his words will be mostly read by robots and algorithms; but primarily because of the echoes with the spine of Nicholas Negroponte’s Being Digital.

I cannot think of a single book as profoundly antithetical to Lanier’s philosophy as this one, which is all about embracing our future as creatures of bits and insisting that we are, in fact, gadgets, or a thinker as unsympathetic to his cause as Nicholas Negroponte, who thinks that possessing a laptop with which to connect to the hive mind is the key to the emancipation of the world’s poor. Yet the fact that the two books are so similar in appearance cannot be written off as a coincidence, but points rather to the aspects of our old media that we have become blind to, and how this blindness prevents us from fully comprehending and mobilising their capacity for persuasion.

And so what disappoints about You Are Not a Gadget is its insufficient appreciation of the texture not just of the media themselves, but of our being social. Dismissing the web’s prodigious capacity to share and disseminate texts – which is to say, value them – as the reciprocity of self-promotion, is ultimately as reductive and deterministic as the narrow conceptual frameworks that Lanier exposes in his book. And the same goes for creativity. As this recent example from Whitney Trettien’s excellent blog illustrates, historians of the book occasionally delight in reminding us of the strangeness that was lost as the medium became more and more standardised in the name of cost-effectiveness and efficiency – we should strive to recapture it, and not just on our computers.

Tuesday, April 19, 2011

What would I do if the doctor gave me six months to live? I'd type faster.

(Isaac Asimov)

Some of you – possibly as many as none – might have noticed that I missed my regular posting deadline last week. This was occasioned by personal circumstances that I’m not going to discuss here quite yet, although my partner and I felt comfortable with sharing them on another forum. At any rate, my failure to update the blog didn’t prevent it from attracting its largest weekly reader numbers ever, and by quite some margin. So I’m going to start from that this week, and what it says about where and to whom these writings belong.

***

With the wisdom imparted by nearly three years of blogging, I have largely given up trying to anticipate what will prove popular and what won’t, as I am more frequently wrong than I am right. But even so the success of the post in question – on Richard Scarry’s book What Do People Do All Day? – has really surprised me: especially since it expanded on a recurring topic, I vaguely anticipated that it might interest some of my regular readers and be over in the course of its allotted week. Not so. On week one, it did the rounds mainly on Twitter: concentric ripples that you could almost trace in graphical form, followed by more intricate lines zig-zagging across the Atlantic and occasionally leaping back to this part of the world. Then came James Bridle's booktwo. Then Bobulate and, a week or so later, Kottke and the Ideas section of The Boston Globe. I confess I had never heard of the middle two, but to put them in perspective, Bobulate has two and a half times as many subscribers as the most widely subscribed New Zealand blog [1]; Kottke, nearly fifty times as many. That would be over 57,000 people through Google Reader alone.

I couldn’t begin to credibly extrapolate how many actual regular readers that translates into, except I have a very precise idea of what happens when the blog owner is kind enough to point to something you’ve written. More ripples, but vastly amplified this time, via dozens of tweets and retweets and hundreds of Facebook ‘shares’. Yet nothing that leaves a trace, unlike in the old media: no print runs that get sold out, no physical displacement of resources or tangible, material effect on the physical world. Since the traffic produced very few comments, were it not for the fact that I keep site statistics I wouldn’t even know about it. But always these events affect the writing space, changing the makeup of its readership and therefore the sense of what belongs in it.

In my case, it used to be that the majority of this blog’s regulars came from New Zealand. Now it’s the US first, followed by Britain. I have no control over this, although I suppose that if I were perverse enough I could attempt to put off those overseas readers by writing more local content unlikely to be understood without the appropriate keys. But then I am not from here either, so I would likely alienate New Zealanders even more with my clumsy overreaching. Conversely, I have more readers from France and Germany than my native Italy, a situation I would dearly love to reverse – if only I knew how. Locally accented content alone doesn’t do it, although I’d probably do well to ask myself why I never blog in Italian and see where that takes me. But so long as one persists in writing in the lingua franca, there will always be a pull towards the old imperial centres, where the networks are denser and the blogs and Twitter feeds have a following that easily dwarfs the most popular of our own.

And so it strikes me that the liking and the sharing are also a form of appropriation; that what is being selected by a handful of Great Aggregators or Super Readers are the posts most likely to appeal to the largest and most seamlessly connected cultural networks; and that when it happens to be a fragment of the arguments I am trying to develop on these pages that gets promoted via this highly sophisticated process of selection, by then it’s too late, not to mention churlish, for me to worry about how it could be perceived in isolation. It was mine, briefly, but now it’s gone. Voiceless, anonymised, anthologised along with the many other thousands of fragments that are liked or shared every minute of every day, it will mean what the great text that is the internet in its Anglo-American inflection allows it to mean.

***

Or so goes the argument that I’m more or less conditioned to make. But the reverse is also true, if not in fact truer: that even to the extent that liking and sharing are a form of appropriation, the author of a blog maintains a high degree of control over his or her writing space; and that in the act of returning, a fraction of the new readers agree to enter into a conversation on topics that don’t lend themselves so readily to global or even cross-cultural consumption, such as the history of social welfare in New Zealand. Furthermore – and this to my mind is a key point – those connections run both ways. And so it’s not just that we can read Iranian bloggers, but also that they can read us, and that we all write each other. This worldwide para-space occupied by people who more and more frequently are at the same time readers and writers is a genuinely new thing, and sometimes the only sane response to its staggering capacity to promote and disseminate texts is simply to marvel at it, as we do when we come across somebody who sits comfortably at the crossroads of culture.

I will return to some of these ideas when I deal with Jaron Lanier’s critique of the social web, but if I may I’d like to interrogate my own place in all of this today, at the cost of indulging in one of those ghastly ‘why I blog’ posts roundly and rightly mocked some time ago by Paul Litterick. Because I don’t mind telling you, in spite of some of the foregoing, that I’m most of all pleased whenever somebody bothers to share something I’ve written. And it’s not just that I have an ego or that I want to be liked, nor that I still find this gig so utterly unlikely, my being allowed to write every week to an actual audience, that ultimately I need to be reassured that it’s okay because look, somebody actually reads the stuff. It’s also that all of these acts map networks of shared interest that in themselves give meaning to the writing, in a process that is rendered more valuable by its being visible and concrete. (And the names on blog rolls operate in a similar way, making visible a more permanent set of connections.)

Without these, without the possibility of making and reading into such connections, I couldn’t really conceive of keeping a blog, and without blogging I’m not sure I could have pursued writing, in this or any other medium. Owen Hatherley alluded to this in one of the most generous write-ups ever received by this blog, when he suggested that a certain kind of columnist has been ‘pushed online by the vagaries of wordcounts and networks both technological and institutional’. This is probably true, but the flip side is that I doubt I could in fact have become that columnist in print – not without the apprenticeship.

And so we come to the part that suspiciously resembles a ‘why I blog’ statement. How the hell did I get myself into this? Well, it’s too late now.

For some reason I think quite often about this one sentence from a book by Guido Almansi called La ragion comica, The Comic Reason: Rido soprattutto perché non voglio morire. 'I laugh mostly because I don’t want to die.' I don’t have the book to hand and have since forgotten whether it was a quotation from somebody else, or what Almansi’s general argument even was at that point. It just sits there. Then I go back to a line that I never developed, about how writing doesn’t take time, it makes time. I have come to believe that this is probably true. And the other part, too: that writing can be both a strategy for deferral and a source of pleasure. Joy, even.

I write mostly because I don’t want to die. I’m not even sure what that means, except that I thought of that line while I was preoccupied with something far more important than writing, and while this other thing that I wrote briefly got a life of its own.

[1] A note on the method is probably in order here. There is one piece of public information that can give a rough idea of the relative regular readership of two or more blogs, and it's the number of Google Reader subscribers. It's a far from perfect piece of data - it tends for instance to penalise blogs that don't syndicate the full feed or push particular alternative forms of subscriptions, such as mailing lists or livejournal or blogger following. Public Address for instance is certainly underrepresented by this statistic, which is why when I say 'the most widely subscribed New Zealand blog' (ie Kiwiblog), you shouldn't quote me on it. But that's likely the order of magnitude anyhow.

Monday, April 4, 2011

He was wet and muddy and hungry and cold, and the day was raw with a high wind that hurt his eyes. But the aliens were trying to infiltrate and every sentry post was vital… And then he saw one of them crawling toward him. He drew a bead and fired. The alien made that strange horrible sound they all make, then lay still.

He shuddered at the sound and sight of the alien lying there. One ought to be able to get used to them after a while, but he’d never been able to. Such repulsive creatures they were, with only two arms and two legs, ghastly white skins and no scales.

(Fredric Brown, ‘Sentry’)

When a force of marauding aliens invades the earth and starts killing everybody, the only sensible response is a military one. You do not negotiate with terror itself. And so every enlisted soldier, every reservist, every citizen with a weapon becomes part of the resistance. The enemy is ruthless, its weaponry deadly, its advance seemingly unstoppable. Frontal confrontation soon proves disastrous, and so the resistance has to adopt guerrilla tactics: ambushes, improvised explosive devices, suicide attacks. Anything to disrupt the invaders.

Jonathan Liebesman’s World Invasion: Battle Los Angeles is set in an irony-free zone, demonstrating not an inkling of how the never-say-die, self-sacrificing ethos of its heroes might resemble the mystique of insurgent warriors elsewhere, and how much the super-armoured aliens dropping out of the sky might in turn reflect how the Western military is perceived in the other world that is Asia, or Africa, or the Middle East. ‘Here come the Americans / Garibaldian martians’ intoned a song by the Italian band Stormy Six on the liberators who fought the Last Just War, and they really must have seemed an alien race: nobler, stronger, futuristically equipped (those shiny chocolate bars!). But six decades later, still we grapple pathetically with that fundamental problem of perspective: how to represent the Western invader as an Other, how to comprehend that its motivations may appear completely hostile and opaque to the invaded – and not just to the extent that they actually are, but supercharged into the truly demonic: a Great Satan, indeed.

Hollywood’s crudest fantasies of aliens coming for our blood (War of the Worlds) or our water (Battle Los Angeles) highlight the extent of this failure, as do the films which purport to assume the point of view of the colonised only to construct a disconcertingly impoverished and self-serving clash-of-cultures narrative (Avatar). In between sits the enlightened liberal view of products like Generation Kill, the HBO series by David Simon, Ed Burns and Evan Wright based on Wright’s experience as a reporter embedded with the first Marines reconnaissance battalion during the Second Gulf War. This is a far more nuanced treatment of contemporary colonial warfare, aware of its absurdities and its atrocities, but also of the implications of embeddedness, that is to say, of siding literally with the battalion, as if war was a first person shooter and we – as players, spectators, reporters and citizens – had no choice of which side to take.

But the show’s critique, if you could even call it that, only goes so far, and in the final scene of its final episode, when the Marines assemble to watch the video of the invasion shot by one of the men and choose to leave the room one by one, responding to the manipulation of the spectacle by withdrawing from it, we remain bound to it, and manipulated in turn by the non-incidental use of Johnny Cash’s When the Man Comes Around. Ultimately – and more so than The Wire, which offered the point of view of the gangsters, the project-dwellers and the occasional citizen alongside, albeit secondary to, that of the police – Generation Kill is content with shooting the Iraqis as roadside extras or more frequently victims, while the army, that is to say the film crew, advances onto Baghdad, and us with them: also embedded, also complicit, also forced into a role.

Battle Los Angeles dispenses with any such semblance of self-reflection, even as it appears dimly aware of the possibility that its fantasy might offer a commentary on world events. Thus a talking head on a CNN show in the early hours of the invasion, when it emerges that what the aliens are attempting is an aggressive water privatisation scheme, puts forward the following analysis:

When you invade a place for its natural resources, rule of colonisation states that you wipe out the indigenous population. Right now, we are being colonised.

One could expand on the validating role of branded fake bulletins in these films, but whichever way you look at it, the pronouncement is highly egregious, for this is not in fact how colonisation has worked on this planet for some time. There is no rulebook that says you can wipe out an indigenous population in order to plunder its resources. To suggest that there is implies that the wars in Iraq, Afghanistan and most recently Libya are not colonial, nor aimed at securing the supply of oil or exerting strategic influence. Civilian casualties in these conflicts are considered in fact by the citizens of the countries that send their military as acceptable collateral damage, which is certainly scandalous enough – but not as the actual means to the ultimate end.

What makes Battle Los Angeles even more egregious, however, is its blithe existence as an entertainment and consumer product while all those other wars are being fought. On this point I must go to the always excellent Aaron Bady and his recent reflections on simultaneity and indifference. The essay opens with a reminder, or possibly a piece of information: did you know that 40 civilians died in Pakistan as a result of a botched drone attack last month? I confess that I did not. The attack, in that Orwellian non-place that is the region known as AfPak, got very little coverage in our media, and elicited a rather muted outcry. Bady expands on this, and on the difficulties in maintaining a perspective and a sense of the unfolding of simultaneous events, each with their own repercussions in actual communities and societies.

To grasp simultaneous events is a challenge: attention is finite and we can only care about so much. To do so in a media environment that is so often blind to correlations and equivalences, or that in fact insists that connections not be made, makes the challenge harder still. But Bady goes a step further, and contends that the elision of perspectives necessary for what Rohit Chopra has called ‘imperial indifference’ is not an act of inadvertent omission, or of reflexive or parochial cultural laziness, but has in fact to be actively produced. Writes Chopra, and Bady quotes:

[I]mperial indifference is the result of an immense intellectual, political, cultural and social labour undertaken in diverse locations of social life and practice – from the content of school and college textbooks to the representation of ethnic minorities on television shows in India or the US, traversing the multiple tracks and channels of soft diplomacy and the realpolitik calculations of hawks, enshrined in the gendered and raced division of global labour and no less in the political economy of global information technology, communication channels and telecommunication networks. Imperial indifference is made possible by the relentless inscription of the lessness of some lives and bodies; when some lives, as Judith Butler suggests, are less grievable than others […]. In various forms of social existence, in the banal stuff of everyday life as in the obviously “imperial” acts of powerful states, imperial difference enables as much as it reflects the normalisation of empire in the present historical moment.

Chalk up Battle Los Angeles to the normalisation of empire, then. File its cliché-laden elegy for the Marine Corps under the rubric of a propaganda that isn’t innocuous, casual or vacuous, but on the contrary is a tool of indifference, a thing that numbs and blinds us. And regard how sophisticated its language is, how adept the filmmakers are at this game. The mise-en-scène, favouring faux-documentary handheld action over more classic mounted-camera set-ups, puts you right there, on the scene, one of the guys. The balance of ethnicities, genders and temperaments makes of the Marines unit a microcosm and at the same time a composite model society based on sacrifice, solidarity, resilience and deference to the chain of command. As the rest of society is victimised and helpless, the implication is that this microcosm could take its place. And so, as in 2012, the destruction of Los Angeles, of the great city with all its contradictions and its messy complexities, is a cleansing act that prepares us for a new beginning.

But observe also what is less obvious: beginning with how fortunate you should count yourself that these warriors are on your side. These people are an unstoppable force. They never quit – every challenge is answered by the chant ‘Retreat? Hell!’ – there is no wound that will slow them down and they never die except by being blown to bits. They are meticulously sadistic, although strictly at the service of good: thus when they capture a moribund alien, the character played by Aaron Eckhart proceeds to peel off its flesh and stab its internal organs one by one in search of the most vital. Seen through the alien sentry’s eyes in Fredric Brown’s famous story, they might well look like monsters, or through the eyes of a thirsty colossus from outer space, like cockroaches. But they are our monsters, our cockroaches. They are us.

Which leads right back to the question of point of view. The film, as I noted, is practically a first person shooter. A videogame version has been announced, and it promises to be indistinguishable from the film, while another first person shooter – Call of Duty: Black Ops – recently broke all records by reaching $1 billion in sales in 42 days. Now don’t worry, I am not about to claim that videogames are corrupting our youth by desensitising them or making them fond of violence. I am merely observing how seamless the entertainment machine has become, how different media – television, cinema, online gaming – bleed into one another, and how together they organise our very selective understanding of real world events, naturalising such concepts as pay-per-view, billable time, permanent war. You play, you learn about stuff, you point and shoot. There is no sinister, conspiratorial intent that binds this semiotic system to transnational capitalism and economic and military imperialism: it’s just that they are cut from the same cloth, or rather, that they operate on the same global informational networks and aspire to the same transparent realism.

And this is possibly the most perverse aspect of Battle Los Angeles: its paradoxical claim to authenticity. The three weeks of military-style training undergone by the actors, the explicit reference to the vocabulary of Black Hawk Down and Saving Private Ryan, the involvement of actual military personnel and facilities, are all put at the service of an infantile fantasy at the same time as actual wars are being fought but cannot be apprehended, sometimes not even in the form of headlines, because they are stories that no longer sell, or do not mesh. And so the grotesquely named but nonetheless actual place that is AfPak becomes less than Liebesman’s Los Angeles: less important to learn about, less salient, less real.

Spoiler alert: the turning point in the film is when the hero works out how to disable the enemy’s drones.