Archive for the ‘Law’ Category

Speaking at a 2012 literary festival, Jonathan Franzen expertly flattered his audience, sweeping them, himself and the US president into gratifying communion:

One of the reasons I love Barack Obama as much as I do is that we finally have a real reader in the White House. It’s absolutely amazing. There’s one of us running the US.

A ‘real writer type’, too: the young Obama, his early promise detected, was offered, and duly inked, a publishing contract to write his memoirs while still at college.

Released just before an electoral campaign for the Illinois Senate, that book presented the candidate in his now accustomed role: embodiment of triumph over racial prejudice, personification of national healing.

The breadth of presidential interests is, of course, not exhausted by the written word. Its scope encompasses all varieties of Blue State cultural output, visual as well as verbal.

The contours of this aesthetic ecumenicism — a broad-minded taste for Hollywood dross as well as Champaign-Urbana middlebrow — adhere closely to the map of industries granted favourable copyright, patent and intellectual-property protection — now of unprecedented extent and duration — during recent decades.

The Motion Picture Association and the Association of American Publishers both have a friend, attuned to their needs and sensibilities, in the White House.

The cultural pretensions of Democratic presidents, along with their financial contributors and electoral base, have accordingly changed since 1946, when Harry Truman could rail against ‘the “Artists” with a capital A, the parlour pinks and the soprano-voiced men.’

How, examined in the longue durée, have the production and reproduction of books and the written word altered the social position of authors? How have the writer’s esteem, prerogatives and benefices altered with his or her workaday techniques, tools of the trade, property rights and proximity to power?

The topic is vast, but some remarks can be made.

To organize any society’s division of labour, a ruling class always depends on technologies of information transmission and storage (e.g. written culture, number systems, monetary tokens, aides-mémoire).

Herodotus explained how geometry arose from the Egyptian state’s need to survey and measure land boundaries for apportionment to tenants:

Egypt was cut up; and they said that this king distributed the land to all the Egyptians, giving an equal square portion to each man, and from this he made his revenue, having appointed them to pay a certain rent every year: and if the river should take away anything from any man’s portion, he would come to the king and declare that which had happened, and the king used to send men to examine and to find out by measurement how much less the piece of land had become, in order that for the future the man might pay less, in proportion to the rent appointed: and I think that thus the art of geometry was found out and afterwards came into Hellas also. For as touching the sun-dial and the gnomon and the twelve divisions of the day, they were learnt by the Hellenes from the Babylonians.

Literate societies, which allow information to be more readily stored externally and transmitted horizontally (e.g. by telegraph) as well as vertically across generations (e.g. training manuals), can deploy a more complex labour process than non-literate ones.

Through the movement of symbols — coins, written messages, deeds of title — separate production units can be coordinated.

Or large-scale collaborative projects, such as architectural or construction works, can be undertaken, with many producers working in parallel under the same roof.

Thanks to writing and other methods of storing information, technological specialties can accrete and be taught to new generations, and society’s labour resources allocated to different concrete tasks.

The ‘disembodied word,’ wrote Ernest Gellner, ‘can be identically present in many, many places.’

The scale of productive labour commanded, and thus the capacity to extract and appropriate a surplus product (e.g. tax-raising or rent), is thereby increased by a system of extendible records such as writing.

The sovereign rulers or elite of such a territory are able to mobilize greater resources (military service, armaments, requisitioned food, etc.) to squander on war or the threat of war, or to administer in peacetime.

Thus the rulers of a literate society will be more likely to succeed in military conflict with external rivals and internal challengers.

Suppose this rudimentary level of literacy has been reached, as in agrarian societies.

How then has the manner in which manuscripts were copied and books printed influenced matters?

Stung by the humiliations inflicted upon the Merovingians by the tax-raising Umayyad state, the Carolingian court in Aachen — its own fiscal resources modest — opted to undertake an ambitious administrative and educational policy.

Late in the eighth century Charlemagne addressed a famous letter to the abbot Baugulf of Fulda, instructing him to forward copies to every monastery in Francia:

[The] bishoprics and monasteries entrusted by the favour of Christ to our control, in addition to inculcating the culture of letters, also ought to be zealous in teaching those who by the gift of God are able to learn, according to the capacity of each individual, so that just as the observation of the rule imparts order and grace to honesty of morals, so also zeal in teaching and learning may do the same for sentences, so that those who desire to please God by living rightly should not neglect to please him also by speaking correctly…

For although correct conduct may be better than knowledge, nevertheless knowledge precedes conduct.

Therefore, each one ought to study what he desires to accomplish, so that so much the more fully the mind may know what ought to be done, as the tongue hastens in the praises of omnipotent God without the hindrances of errors. For since errors should be shunned by all men, so much the more ought they to be avoided as far as possible by those who are chosen for this very purpose alone, so that they ought to be the especial servants of truth.

For when in the years just passed letters were often written to us from several monasteries in which it was stated that the brethren who dwelt there offered up in our behalf sacred and pious prayers, we have recognized in most of these letters both correct thoughts and uncouth expressions; because what pious devotion dictated faithfully to the mind, the tongue, uneducated on account of the neglect of study, was not able to express in the letter without error…

Therefore, we exhort you not only not to neglect the study of letters, but also with most humble mind, pleasing to God, to study earnestly in order that you may be able more easily and more correctly to penetrate the mysteries of the divine Scriptures.

Since, moreover, images, tropes and similar figures are found in the sacred pages, no one doubts that each one in reading these will understand the spiritual sense more quickly if previously he shall have been fully instructed in the mastery of letters…

Einhard’s Life of Charlemagne describes how the king himself, though barely able to write, joined in the Frankish elite’s recovery of Latin classics and early Christian authorities:

The plan that he adopted for his children’s education was, first of all, to have both boys and girls instructed in the liberal arts, to which he also turned his own attention…

Charles had the gift of ready and fluent speech, and could express whatever he had to say with the utmost clearness. He was not satisfied with command of his native language merely, but gave attention to the study of foreign ones, and in particular was such a master of Latin that he could speak it as well as his native tongue; but he could understand Greek better than he could speak it. He was so eloquent, indeed, that he might have passed for a teacher of eloquence.

He most zealously cultivated the liberal arts, held those who taught them in great esteem, and conferred great honors upon them.

He took lessons in grammar of the deacon Peter of Pisa, at that time an aged man. Another deacon, Albin of Britain, surnamed Alcuin, a man of Saxon extraction, who was the greatest scholar of the day, was his teacher in other branches of learning.

The King spent much time and labour with him studying rhetoric, dialectics, and especially astronomy; he learned to reckon, and used to investigate the motions of the heavenly bodies most curiously, with an intelligent scrutiny.

He also tried to write, and used to keep tablets and blanks in bed under his pillow, that at leisure hours he might accustom his hand to form the letters; however, as he did not begin his efforts in due season, but late in life, they met with ill success.

At his abbey of Saint Martin at Tours, Alcuin would salvage and transcribe lost manuscripts, with copying accuracy improved by development of the standardized script known as Carolingian minuscule.

Alcuin would also establish and amass a library of books (Virgil, Augustine, Jerome, etc.), administer abbeys, and teach ‘liberal studies and the holy word’ to the Frankish aristocracy, court officials and clergy.

A common elite culture was thereby transmitted at the Palace School, instructions issued in a language and Church ideology that all ecclesiastic authorities could understand and apply.

Through the serial copying of texts by scribes and notaries, and the teaching of students, this ‘culture of letters’ gradually diffused outward throughout the cathedral schools of the Frankish realm.

Common institutions (incorporated towns, monastery and cathedral schools, Catholic orders) spread from the Rhine-Meuse heartland of the Carolingian lands across Europe.

Latin Christendom’s conquest to the south, in Aquitaine, northern Spain and Italy, and to the east in Saxony and the Slavic lands, created social and legal replicas rather than dependencies.

As God’s kingly law rules in absolute majesty over the wide world
It is an exceedingly holy task to copy the law of God.
This activity is a pious one, unequalled in merit
By any other which men’s hands can perform.
For the fingers rejoice in writing, the eyes in seeing,
And the mind at examining the meaning of God’s mystical words.
No work sees the light which hoary old age
Does not destroy or wicked time overturn:
Only letters are immortal and ward off death
Only letters in books bring the past to life.
Indeed God’s hand carved letters on the rock
That pleased him when he gave his laws to the people,
And these letters reveal everything in the world that is
Has been, or may chance to come in the future.

An ingratiating manner was thus adopted towards the specialist corps of scholars, writers and clerics. Political authority, while chiefly engaged in the sordid business of territorial aggrandizement, relied for its perpetuation and its sense of mission upon scriptural authority, and its codification in writing.

The word was repository of wisdom and legitimating truth. Its custodians should be indulged.

This medieval revival of Roman jurisprudence, making available classical precepts of ownership and contract, was propitious for the growth of West European commodity production, trade and urbanization.

In the more coherently developed Byzantine Empire, centuries earlier, revival of the Justinian Code by Basil I had been accompanied by renewed appreciation for Virgil, Homer and Augustine. The Macedonian Renaissance, with Photius and his famous library, presented a pinnacle then unreachable in backward Francia. Byzantine state officials were trained in Graeco-Roman classics: Leo the Mathematician taught Aristotelian logic at the Magnaura school.

In the West, however, until the Renaissance the Church served as a ‘special vessel’ that preserved the cultural heritage of classical antiquity, ‘escaping the general wreckage to transmit the mysterious messages of the past to the less advanced future… the indispensable bridge between two epochs.’

In our own day, the practice of copying information has become more important to social production.

Of course, as with much else, the economic contribution made by copying information was identified long ago by Charles Babbage.

Replacement of the scribe (a serial process of copying) by the printing press and moveable type brought rapid increase in the productivity of information copying:

Printing from moveable types… is the most important in its influence of all the arts of copying.

It possesses a singular peculiarity, in the immense subdivision of the parts that form the pattern. After that pattern has furnished thousands of copies, the same individual elements may be arranged again and again in other forms, and thus supply multitudes of originals, from each of which thousands of their copied impressions may flow.

This set the scene for generalized literacy among the educated workforce required by industrial capitalism. And it ensured, for a time, the supremacy of verbal culture.

Outside the printing industry itself, mass production using interchangeable parts has, since the mid-19th century, depended on replication of standardized products made to precise tolerances. (This, in turn, makes possible the development of numerical-control machine tools, replacing jigs and fixtures.)

Today’s books, images, recorded music and software are transmitted rapidly and in parallel using Unicode and ASCII.

Information (e.g. a sequence of words) is liberated from its dependence on any particular medium or embodiment in a specific material artifact (e.g. typeset document). Written text may be duplicated at will.

Especially since the 1970s, copyright law has decreed that employees, or those contractors working for hire, cede ownership rights over their creative work to the commissioning or employing entity (publisher, studio, ad agency).

Staff journalists or advertising writers, for example, have no property claims in their published works, which belong instead to the periodical or agency that employs or contracts them (some exceptions apply).

Freelance writers, too, while nominally independent contractors and thus entitled to copyright, are in bargaining terms at the mercy of publishers: ‘if [writers] do not capitulate and assign rights to such conglomerates they risk being blacklisted.’

This divestment of authorship has effected a sharp change in the social position of writers, who had hitherto, in some measure, been independent producers: owning their own tools of the trade, working under their own direction rather than that of supervisors, preserving rights to their output and whatever fruits it might yield.

Unfortunately, as if in a company-man dystopia, he has been subsumed into the identity of his corporate employer. His disappearance is by now almost complete. Although he has gone on writing, the corporation has become the author of his oeuvre…

[Modern] creativity is exercised in an employment setting where salaried creators sign away their rights in their work as a condition of hire — sign away, in effect, their very status as authors.

In this ‘corporatization of creativity’, there is an echo of the fate of the salaried engineer, brought into a collective work team by growth of the patent system.

The frustration of independent invention led the majority of inventors into the research laboratories of the large corporations; in the process, invention itself was transformed…

Inventors became employees in corporations to spare themselves the hardship of going it alone. Their patents were thereby handled by corporation-paid patent lawyers and their inventions were made commercially viable at corporate expense. Corporate employment thus eliminated the problem of lawsuits, and in addition provided well-equipped laboratories, libraries and technical assistance for research. The nature of their actual work, however, had changed…

By employing the technical experts capable of producing inventions, the corporations were also obtaining the legally necessary vehicles for the accumulation of corporate patents…

In time… employees became required to assign all patent rights to their employer, as part of their employment contracts, in return for their salaries.

The writer’s reduced circumstances in the world have been accompanied by a marked decline in the quality of authorial output.

Little published in the decades following the Second World War stands comparison with the tightly bunched sequence of totems released after the First: works by Proust, Joyce, Mann, Kafka, Musil, Rilke, Valéry, Mayakovsky all appearing within a few years of each other.

The great modernist seers, not least in their own self-mythology, were independent producers, retaining an artisanal autonomy of routine, if not hieratic ritual. Pen and paper offered a self-sufficient cloister from the industrial economy of plastics, electronics and chemical factories.

These droits de l’auteur were usurped as the modernists’ literary successors, obliged to take on paid journalism or media work in whatever measure, were drawn into capitalist social relations:

[There] is a deeper reason for the disappearance of the Great Writer under postmodernism, and it is simply this, sometimes called “uneven development”: in an age of monopolies (and trade unions), of increasing institutionalized collectivization, there is always a lag. Some parts of the economy are still archaic, handicraft enclaves; some are more modern and futuristic than the future itself.

Modern art, in this respect, drew its power and its possibilities from being a backwater and an archaic holdover within a modernizing economy: it glorified, celebrated, and dramatized older forms of individual production which the new mode of production was elsewhere on the point of displacing and blotting out.

Aesthetic production then offered the Utopian vision of a more human production generally; and in the world of the monopoly stage of capitalism it exercised a fascination by way of the image it offered of a Utopian transformation of human life.

Joyce in his rooms in Paris singlehandedly produces a whole world, all by himself and beholden to no one; but the human beings in the streets outside those rooms have no comparable sense of power and control, of human productivity; none of the feeling of freedom and autonomy that comes when, like Joyce, you can make or at least share in making your own decisions.

As a form of production, then, modernism (including the Great Artists and producers) gives off a message that has little to do with the content of the individual works: it is the aesthetic as sheer autonomy, as the satisfactions of handicraft transfigured.

Modernism must thus be seen as uniquely corresponding to an uneven moment of social development, or to what Ernst Bloch called the “simultaneity of the non-simultaneous,” the “synchronicity of the non-synchronous” (Gleichzeitigkeit des Ungleichzeitigen): the coexistence of realities from radically different moments of history — handicrafts alongside the great cartels, peasant fields with the Krupp factories or the Ford plant in the distance.

The history of early twentieth-century avant-gardes in the visual arts — easel painting stretching the limits of handicraft creativity in response to the new commercial technologies of photography, cinema and television — seems to confirm this diagnosis.

But the written word has been cheaply reproducible for centuries. The printing press was invented long before sound recording or disc pressing.

The writer of ‘independent means’ — beneficiary of family fortunes and legacies, of a gebildet European bourgeoisie happy to subsidize the artistic careers of its wayward sons — had dwindled in number by the mid-twentieth century, cancelled along with the aristocracy whose ‘high culture’ the business classes were trying to ape.

In a 1946 radio broadcast, E.M. Forster described the workings of this vanished world of Mann, Gide, Proust, Zweig and himself: ‘In came the nice fat dividends, up rose the lofty thoughts.’

He surmised, correctly, its obsolescence.

Suddenly needing to earn a salary, many writers were drawn into journalism, academia and marketing by the postwar expansion of higher education, entertainment media and advertising industries.

State bureaucracies, massively swelled by warfare and welfare state, absorbed others into officialdom and public administration. (Proust had recommended a comfortable, undemanding sinecure as the ideal occupation for an author.)

The result today is that all writers, even the most exalted, must resort to journalism or occasional teaching. Journalists are therefore tempted to suppose themselves writers — and the more successful among them, receiving grants from university, foundation or think tank, to pose as interim scholars.

For writers, this coming down in the world reaches its culmination with the insistence, courtesy of William Patry, a copyright lawyer at Google, that the notion of sole creative authorship has always been a myth. The ‘romantic’ notion of the author disguises the reality of artistic collaboration, bricolage and cheerful plagiarism.

Bleating about usurpation of the author’s property rights, he declares, is little more than moral panic.

(Of course, Patry rather misses the point: in commercial terms, appellation of authorship is akin to indication of geographical origin, e.g. of wine or cheese, an identifying badge which is recognized under the TRIPS Agreement as similar to trademark or certification.)

Today the ‘creative industries’ — so named by their publicists — are presented as a smart new engine of economic growth, the swelling revenue of Disney, Viacom, News Corporation, Comcast and Time Warner an example of twenty-first century conditions favouring the intelligent over the dim.

The ‘creative economy’ and ‘cultural industries’ are now topics of urgent reports by UNCTAD and UNESCO, not to mention a cottage industry of scholarship, popular publications and municipal boosterism.

Patent royalties, copyright fees, licence revenue, etc. — not to mention the income earned by lawyers and agents securing such arrangements — derive not from any new productive powers or technological innovations, but from asserting exclusive property rights, and thereby securing claim over a revenue stream.

The grotesquely concentrated market of book publishing — Pearson, Bertelsmann, Lagardère and a handful of other giant houses commanding the global scene — is exemplary.

Proletarianization of the author, as with the academic scholar, therefore signals not an explosion of knowledge, but its seizure and sequestration.

Along with prolonged copyright and trademark protection, the other half of the ‘creative industry’ business model is contributed by network externalities. Low costs of reproduction, and uniformity of customer tastes, allow multiplication of copies to any number of users.

The presence of more buyers raises the value of the original copy. With greater scale come increasing returns.

‘Content’ production and transmission are therefore encouraged only to the extent they can be subdued and corralled by publishing platforms and distributors. The volume of writing solicited is unprecedented (e.g. content farms), but the channel clogged with noise (recycled articles, duplicated material). The proportion of people reading books of any type has declined.

Amid this scene, the pose struck by Franzen — himself as Voltaire or Maupertuis at Frederick the Great’s Prussian court — provides buffoonish relief.

What, finally, of Franzen’s panegyric of Obama as literary patron and cultural custodian?

One of the cherished fantasy-images of postmodern politics is that of an intelligentsia, hitherto a marginalized and downtrodden caste, restored to social prominence and installing one of its own in the chancellery.

Havel in Prague provides a euphoric example, as does the short-lived spectacle of ‘civil society’, journalists and economists in Poland and post-Soviet Russia, celebrating their own professional guild-values as foundations for a new society.

The ur-reference of these contemporary fantasies is 1848, when the poets and novelists of European romanticism — Manzoni, Petőfi, Mickiewicz — played starring roles for national movements in Poland, Hungary, Germany, Belgium and Italy. For mid-nineteenth century romantic nationalism, language was the bearer of heritage, providing a cultural basis for political unity.

Such rhetoric, now hopelessly archaic but guaranteeing a prominent role for the national bard (e.g. Milan Kundera), was revived with the breakup of the Soviet Union and other multi-ethnic states, the return of private ownership dressed up as a Springtime of Peoples.

In the 1990s such visions spread outwards from the newly capitalist countries, an elixir to replenish the threadbare ideological cupboards of the old. Their compensatory function is obvious for European and North American intellectuals suffering the aesthetic degradation and social indignities of globalized advanced capitalism, as described above.

Reality is, of course, unkind to this daydream of a renewed social alliance between belles-lettres and state authority.

In such a scene, letters today barely sustain even a vestigial role as elite decoration or philanthropic point d’honneur.

Literature has, of course, rarely drawn the attention of wealthy patrons. It lacks the monumentality and civic resplendence of architecture; cannot offer the networking opportunities and social prestige of the opera house or gallery board of directors; easily duplicated, it does not yield the returns on investment of the one-of-a-kind painting.

Yet if sponsors have always been scarce, membership of the propertied classes has, in previous epochs, meant an obligatory amount of taste, learning, connoisseurship, and reverence towards literary matters.

Books were favoured as a luxury appurtenance, patronized and consumed for ornamentation and exhibitions of status, to be sure — but they were also a matter of elite self-conception, recruitment and social functioning.

“But,” continued he, gaily, “pay your respects. What book do you think Napoleon carried in his field library? — My Werther!”

“We may see by his levee at Erfurt,” said I, “that he had studied it well.”

“He had studied it as a criminal judge does his documents,” said Goethe, “and in this spirit talked with me about it. In Bourrienne’s work there is a list of the books which Napoleon took to Egypt, among which is Werther. But what is worth noticing in this list, is the manner in which the books are classed under different rubrics. Under the head Politique, for instance, we find the Old Testament, the New Testament, the Koran; by which we see from what point of view Napoleon regarded religious matters.”

The three versions of this meeting (left by Talleyrand, Friedrich von Müller and Goethe himself) were woven together by Luise Mühlbach in her historical novel Napoleon and the Queen of Prussia:

Napoleon, continuing to eat, beckoned Goethe, with a careless wave of his hand, to approach.

He complied, and stood in front of the table, opposite the emperor, who looked up, and, turning with an expression of surprise to Talleyrand, pointed to Goethe, and exclaimed, “Ah, that is a man!” An imperceptible smile overspread the poet’s countenance, and he bowed in silence.

“How old are you, M. von Goethe?” asked Napoleon.

“Sire, I am in my sixtieth year.”

“In your sixtieth year, and yet you have the appearance of a youth! Ah, it is evident that perpetual intercourse with the muses has imparted external youth to you.”

“That is not a good tragedy,” said Napoleon. “Voltaire has sinned against history and the human heart. He has prostituted the character of Mohammed by petty intrigues. He makes a man, who revolutionized the world, act like an infamous criminal deserving the gallows. Let us rather speak of Goethe’s own work—of the Sorrows of Young Werther. I have read it many times, and it has always afforded me the highest enjoyment; it accompanied me to Egypt, and during my campaigns in Italy, and it is therefore but just that I should return thanks to the poet for the many pleasant hours he has afforded me.”

During the late Roman empire, Symmachus had declared in a letter that his senatorial elite were the ‘better part of the human race.’ Though idle and landed, Roman aristocrats had to be familiar with Virgil and Juvenal.

Such, indeed, was the cultural pedigree later drawn upon by bourgeois revolutionaries, for whom such distant treasures of the past remained legible, banners and elevated slogans to be salvaged from history, then used to embellish contemporary campaigns.

In France, said Marx, ‘the Revolution of 1789–1814 draped itself alternately as the Roman republic and the Roman empire’:

Camille Desmoulins, Danton, Robespierre, St. Just, Napoleon, the heroes as well as the parties and the masses of the old French Revolution, performed the task of their time — that of unchaining and establishing modern bourgeois society — in Roman costumes and with Roman phrases…

Once the new social formation was established, the antediluvian colossi disappeared and with them also the resurrected Romanism — the Brutuses, the Gracchi, the Publicolas, the tribunes, the senators, and Caesar himself. Bourgeois society in its sober reality bred its own true interpreters and spokesmen in the Says, Cousins, Royer-Collards, Benjamin Constants, and Guizots; its real military leaders sat behind the office desk and the hog-headed Louis XVIII was its political chief. Entirely absorbed in the production of wealth and in peaceful competitive struggle, it no longer remembered that the ghosts of the Roman period had watched over its cradle.

But unheroic though bourgeois society is, it nevertheless needed heroism, sacrifice, terror, civil war, and national wars to bring it into being. And in the austere classical traditions of the Roman Republic the bourgeois gladiators found the ideals and the art forms, the self-deceptions, that they needed to conceal from themselves the bourgeois-limited content of their struggles and to keep their passion on the high plane of great historic tragedy.

Postmodern culture, of course, famously knows its own share of dress-up, pastiche and nostalgic revival.

Franzen’s grotesque embrace of Karl Kraus shows this: an example of nostalgia for the aesthetic, and of commercial culture’s wish to salvage from unprofitable ‘obscurity’ a peculiarly stringent and unassimilable modernism.

But — appropriately for a Restoration era that denies any future prospect of change — this decorative relationship to the past is enfeebling rather than stimulating. If it is to be drawn upon, any historical item must first be converted into a fashion plate, suitable for collection and ornamentation, the merest patina and embellishment.

The ‘past brought to life’ can involve little genuine connection to a shared cultural heritage, the latter now hopelessly remote and irrelevant. It follows instead the relentless, rhythmic turnover of the fashion cycle.

In an era of globalization, the market for children crosses national borders; witness the longtime flow of Americans who have gone overseas to adopt babies from South Korea, China, Russia and Guatemala.

Other than the United States, only a few countries — among them India, Thailand, Ukraine and Mexico — allow paid surrogacy. As a result, there is an increasing flow in the opposite direction, with the United States drawing affluent couples from Europe, Asia and Australia. Indeed, many large surrogacy agencies in the United States say international clients — gay, straight, married or single — provide the bulk of their business…

Together, domestic and international couples will have more than 2,000 babies through gestational surrogacy in the United States this year, almost three times as many as a decade ago.

What, if anything, is to be made of such developments?

A little more than a decade back, a succession of startling innovations in biotech, the turn of a new millennium and Clinton’s ‘New Economy’ boom together spawned a potboiling genre of fanciful prognoses, fretful futurology and journalistic speculation on the fate of the ‘body’, marriage and parenthood, and human reproduction.

This was a publishing bubble of airport literature and Kulturkritik, which various eminences did not eschew.

Around the same time, Foucault’s ‘biopolitics’ was rediscovered by the Anglophone academy, a narrow seam contributing another rich source of mischief and vapidity for cultural studies.

In the midst of this scene, in 2001 Duncan Foley delivered a clear-eyed scholarly lecture on economic growth and demography. In it, he anticipated the new century bringing ‘opportunities and pressures’ for what he termed ‘reproductive arbitrage’.

The latter, he suggested, would ensue in a world where sub-replacement fertility prevailed in the ageing metropolitan economies, alongside a demographic floodtide of human misery elsewhere, as much of the globe experienced industrial growth insufficient to absorb its massive, stagnant ranks of young and prime-age people into employment.

This reproductive arbitrage — a ‘global market for children’, buying where cheap and selling where coveted, at a premium — would, he pointed out, be something novel.

The twenty-first century, at its dawn, heralded a ‘sharp polarization between countries with rich ageing populations which cannot reproduce themselves and countries with poor, younger populations which are growing’:

Productive arbitrage opportunities will arise because the rich countries will have chronic shortages of labour and surpluses of capital, while poor countries will have chronic shortages of capital and surpluses of labour. Arbitrage suggests either the movement of capital to the poor countries through foreign investment, or the movement of labour to the rich countries through migration…

Reproductive arbitrage opportunities will arise because of the tendency for poor countries to specialize in producing children, as the rich countries specialize in producing wealth. Thus, we can expect an explosive growth in the trade in reproduction and its associated services like surrogate parenthood, adoption, and the provision of child-care services between older, richer countries and younger, poorer countries. We have also begun to see the early stages of this phenomenon already.

As its clients have multiplied, treatment of the gestational-surrogacy market by the popular media has been equivocal.

Amid warm applause for the realization of parental dreams long held, misgivings are voiced, shortcomings admitted. Queasiness rarely rises, however, to the level of outright reproach, rejection or, least of all, investigation of underlying causes.

Prurience of the ‘Octomom’ variety carries its share of denunciation and spite, of course. But few right-thinking people would see fit to deny that the technology and ‘bioethics’ of assisted reproductive procedures are the chief matters at stake: philosophy, of a sort, rather than politics.

‘Regulation’, by vigilant international NGOs if not local authorities, is the prescribed salve.

(Not yet accustomed to the ways of the world, earlier journalistic treatment of ‘traditional’ surrogacy [insemination with sperm rather than embryo] was, in the 1980s and 1990s, rather more stringent in its scrutiny of market participants and their claims.)

Typically less given to delicate euphemism, the gurus and think tanks of the libertarian right have maintained a cautious silence on surrogacy’s cosmopolitan turn. Perhaps they are wary of upsetting a precarious apple cart; more likely, they have found intervention unnecessary.

Inferences can, however, be drawn from past forthright statements.

In 1977, Judge Richard Posner notoriously proposed ‘legalizing a market for babies’. Affecting bemusement at the outraged response that greeted this calculated provocation, Posner observed in his own defence, and with some justification, ‘we have legal baby selling today… I simply think it should be regulated less stringently than today.’

The University of Chicago’s Richard Epstein, in a 1995 paper on surrogacy and contract law, complained that ‘condemnation of any transaction as “baby-selling” is all too often treated as a conversation stopper’.

A more phlegmatic outlook was called for.

Surrogacy’s ‘commercial aspects’ were ‘a regrettable but necessary part of transactions that yield enormous nonquantifiable benefits to the biological father and his wife, and to their friends and family who have comforted them during their years of anxiety and distress’:

The ability of individuals to handle these transactions with sensitivity and discretion is not precluded because money changes hands. Indeed the success of the venture may be aided if the money allows skilled professionals to ease the transition of both sides.

Meanwhile the industry of international adoption receives promotional services from the likes of Harvard Law School’s Elizabeth Bartholet.

Foley’s remarks were little more than an aside, to which, as the phenomenon he identified has since grown, detail can be added.

Relative prices and jurisdictional peculiarities play their part (an Indian surrogate at the most internationally renowned clinic in Gujarat is fortunate to receive a fee of $6,500, some others as little as $800, while their North American counterparts fetch around $30,000; merely donating ova, if their source is an Ivy League graduate, itself attracts $20,000).

Why is it that ‘poor countries’, seemingly so ill-suited for the task, should today have come to ‘specialize in producing children’ for the industrially developed zones of the planet?

‘Reproduction’ arises as a topic in classical political economy (Smith, Ricardo, Marx) because of the peculiar character of that productive input known as human labour.

The latter is not (as are capital goods) produced as a direct commodity via the capitalist system of production; nor (like land, minerals and similar resources) garnered freely from nature; but must instead be born, reared, trained and socialized, in the domestic household or elsewhere, before it can be hired on the market as employable labour-power.

Thus, in the view of classical political economy, labour supply is induced by demand, growing or shrinking according to demand for employees at a given real wage, caused by variation in productive investment.

Over a few decades, labour supply is flexible or elastic, because employers seeking workers may tap into external sources (idle pools abroad drawn in as immigrants) or underutilized domestic sources (the unemployed, housebound women, etc.).

Conditions in the slums and shanty towns of today’s Delhi, Jakarta, Lagos, São Paulo, Karachi, Kinshasa, Dhaka, Istanbul and Cairo may thus be compared to those of Henry Mayhew’s London.

In early Victorian times, Britain’s industrial revolution had breached Malthusian limits, detonating population growth and urbanization that, for the moment, outstripped the pace of fixed-capital accumulation and demand for employees.

The London streets of 1840 therefore teemed with petty vendors and sole proprietors (fruit sellers, flower stalls, artisans, prostitutes) whose meagre inventories and simple tools of the trade were of a scale measly enough to be owned by a single precariously placed individual or family, hawked and peddled by day and carried home at night.

With the available workforce more plentiful than the needs of capital owners required, human life came cheaply and the necessities of subsistence were procured in haphazard, opportunistic fashion, as described vividly by Mayhew in London Labour and the London Poor, and captured in the crowded tenements of Dickens’s fiction.

Despite rapid growth in productivity, British real wages remained stagnant throughout the first half of the nineteenth century, its urban hordes preserving slack in the labour market.

In this metropolitan core, most strikingly in Europe and Japan, mechanization of production has caused output-capital ratios eventually to fall, as the stock of factories and equipment accumulates more rapidly than the number of available employees.

Beyond the frontiers of the OECD, however, in the less industrially developed countries whose populations comprise the overwhelming majority of the world, Mayhew’s vista of scrounged livings now predominates.

In today’s official lexicon, it is designated as the ‘informal economy’.

These marginal hundreds of millions of South Asian ‘self-employed’ and sub-contractors, whose low business revenue and few tangible assets make them uncreditworthy to formal lending sources, provide the social infrastructure for those microfinance initiatives that so capture the hopes of well-meaning left-liberals abroad.

More importantly, such vast pools of urban misery — propelled out of the countryside by the Green Revolution, into cities where insufficient investment exists to draw them into paid employment — form a latent reserve of potential employees, thus keeping a ceiling on wage growth.

In Africa, South Asia, Latin America, West and Southeast Asia, low labour productivity corresponds to lesser capital intensity (fewer tangible assets used per worker), high output-capital ratios and a younger population.

Indian agricultural, construction, pottery and textile workers thus perform manual labour whilst their more productive counterparts abroad are assisted by machinery and equipment.

The capital-labour ratio in India is less than one-tenth its level in the United States. The resulting difference in labour productivity yields a stark income divergence: India’s average real wage is one-twentieth that of the USA.

In these circumstances, with the postcolonial prospect of secular ‘development’ and improved living standards having long since receded, those offering otherworldly salvation and similar religious consolations have naturally thrived.

In India, appeals to the devout, and invocations of Hindutva, have multiplied under the impeccably business-minded administrations of Rao, Manmohan Singh and Modi. BJP and Congress alike truckle to local piety while catering to foreign creditors.

Typically backed as an anti-left bulwark by the local security apparatus, favoured as a counterweight to unruly secular nationalism by imperialist intelligence services, and firmly planted in the soil of matrimonial and sexual conservatism, such confessional movements, of whatever stripe, have not looked favourably upon the entry of women to paid employment, female enrolment in public schooling and other novel social roles.

Yet the origin of commercial surrogacy in India, Thailand and the former Soviet republics is not simply the penury and devastation internal to these countries, enormous though these are.

The ‘market for children’ depends upon economic and jurisprudential developments pioneered in the advanced regions of North America, Europe, Northeast Asia and the Antipodes.

Trails of commodification are blazed in California.

There, the presence of an advanced biomedical-university complex, a favourable judicial environment, and cultural deregulation to make the rest of the United States blush, have placed the state at the forefront of proprietary and contractual developments governing human somatic material, as well as probate and family law (disputes over inheritance and parental rights).

And it is in the beating heart of world capitalism that a 1996 article in a legal journal could announce that gestational surrogacy had brought about the ‘demise of the unitary biological mother’. (Its author is now a ‘philanthropy consultant’ who ‘helps charities and brands secure celebrity support for cause-marketing campaigns and fundraising events.’)

This image of divided maternity (‘demise of the unitary mother’) furnishes an almost parodic example of the fragmentation that follows from commodification or rationalization, as described in the Marxist and Weberian traditions.

Once an activity (such as human sexual reproduction) is drawn into the sphere of production for the market, or a need is supplied as a commodity, the division of labour splits it apart into its specialized aspects or components.

‘Reproductive arbitrage’ and the ‘market for children’, therefore, are symptoms of what Arlie Hochschild calls the ever-advancing commodity frontier, the encroachment of commodity production and the capitalist sector upon ever more elements of human life.

Activities once performed by individuals or households for their own use, for satisfaction of their own needs — with both the labour and its output free of monetary cost — become services available for purchase on the market, in return for payment.

Few households now cultivate their own crops, educate their own children, spin and weave their own textiles, or construct their own houses. Responsibility for all these activities has been transferred to the capitalist sector or the state.

A few residual tasks remain for unpaid housework: the final stages of food preparation, childcare for infants and preschoolers, some custodial care of school-age children, cleaning of residential premises, etc.

The waning role of the domestic sector — and the transfer of production to a capitalist sector that can introduce efficient new techniques, raise productivity and realize economies of scale — has meant a degree of liberation from isolation and household drudgery, freeing up women for paid employment and other social roles.

Yet Hochschild, since The Managed Heart (1983), has drawn attention to new incursions by the market into the domestic household, in the fields of emotional intimacy, affective display and attachment.

With supply of these to customers now yielding a profitable return, employees, especially in ‘hospitality’ or service occupations, are obliged to convincingly demonstrate emotion: the solicitousness of the waiter, the empathy of the care worker, the conviviality of the flight attendant, the cheerful verve of the tour guide.

Emotional labour ‘requires one to induce or suppress feeling in order to sustain the outward countenance that produces the proper state of mind in others.’

Such transactions may be extended, Hochschild has noted, to ‘outsourcing’ (from independent contractors or employees) family functions traditionally performed by women, as mothers and wives, within the domestic household: cook, teacher, nurse, nanny, but also provider of emotional support, companionship and sexual partnership.

Intimacy may be purchased either in spot markets or by entering into long-term bilateral arrangements, with previous methods of attracting mates and forming pair-bonds having now dissolved or become too time-consuming.

A ‘familial role’ is ‘shown to be divisible into slivers, a whole separated into parts.’

Here, too, efficiency gains are made from turning tasks over to dedicated specialists:

Especially in its more recent incarnation, the commercial substitutes for family activities often turn out to be better than the “real” thing. Just as the French bakery may make bread better than mother ever did, and the cleaning service may clean the house more thoroughly, so therapists may recognize feelings more accurately. Even child care workers, while no ultimate substitute, may prove more warm and even-tempered than parents sometimes are.

Thus, in a sense, capitalism isn’t competing with itself, one company against another. Capitalism is competing with the family, and particularly with the role of the wife and mother.

Recoil, if it occurs here, is surely inspired not just by dread of the ersatz, but by the threatened fulfilment of the bleakest Frankfurt School visions of the ‘exchange principle’ making human beings fungible and interchangeable.

To be sure, demands to revise the family, and disrupt standard reproductive arrangements, have long featured as a staple in visions of social transformation.

But surrogacy in a dingy Gujarati basement dormitory, or a gleaming Californian clinic, is far indeed from the sexual and matrimonial innovations proposed for Fourier’s phalanstère, Bacon’s New Atlantis or Campanella’s City of the Sun, let alone Firestone’s utopia of ‘artificial reproduction’ and parthenogenesis.

The relationship of gestational host to client is less novel than supposed, as made clear in an anecdote from Hochschild’s The Outsourced Self:

I didn’t want her to think of me as this big rich American coming in with my money to buy her womb for a while. So I did touch her at some point, I think, her hair or her shoulder. I tried to smile a lot.

Through the interpreter I told her, “I am very glad and grateful you are doing this.” I explained that we’d tried to have a baby but couldn’t. I told her not to worry for herself; she would be taken care of. I asked her about her own child.

She didn’t look at ease. It was not the unease of, “I can’t believe I’m doing this,” but more the unease of the subordinate meeting her boss.

World capitalism is capable of accommodating, and indeed of promoting, those survivals of domestic servitude and patriarchal terror that assist the growth of its latest production lines. The hereditary dynasty of Nehru, not to speak of the lineages of Bush and Clinton, attest to an official capacity for preserving the atavistic: inherited charisma, or family branding, at the head of the bureaucratic state.

Maintenance of a servile pool of Indian women (contractually denied any right to abortion, etc.) thus serves roughly the same social function as does existence of the idle, squandered two billion or so human beings wasting away, on standby, in the slum workshops of Asia’s informal sector, and in the continent-sized skid row of Africa: exiled from capitalist employment yet useful to employers.

Societies resting on formally free wage labour thereby offer a more limited scope for commodity exchange than do slave-owning societies. When the Roman civil code was rediscovered in the High Middle Ages, and used as the foundation for European commercial law, a good deal of antiquated material relating to trade in slaves had perforce to be discarded by the glossators.

However, it is a commonplace of Marxist thought that capitalist property relations tend, by their nature, to expand into every available territory, occupy each vacant line of production, and invade any vulnerable social nook. Commercial transactions, and property rights, thus tend to encompass more domains of existence than ever before.

This may be most apparent in the industries of health, physical embellishment and body transformation: repair, modification, procreation and enhancement.

While the United States’ National Organ Transplant Act (1984) forbids the sale or purchase of vital organs for ‘valuable consideration’, some philosophers have recently advocated legalizing payment for kidneys; another salutes ‘commodification of human body parts’ and, indeed, ‘universal commodification.’

The attitudes of some, it has elsewhere been remarked, betray ‘an underlying fear of treating the human body, or the cellular material that will develop into a human being, as the personal property equivalent of cars or television sets.’ This, ‘although perhaps justifiable on moral grounds’, is unhelpful. All rights, ultimately, flow from proprietary interests.

Thus speaks the wisdom of the age.

It brings to mind the young Marx’s description of the ‘power of money’, which Adam Smith had said conferred the ‘power to command’ labour and the products of labour.

As the productivity of human labour increases, the fruits of the entire world are brought within the grasp of the wealthy, who can through spending remedy all deficiencies:

That which is for me through the medium of money — that for which I can pay (i.e., which money can buy) — that am I myself, the possessor of the money. The extent of the power of money is the extent of my power. Money’s properties are my — the possessor’s — properties and essential powers.

Thus, what I am and am capable of is by no means determined by my individuality.

I am ugly, but I can buy for myself the most beautiful of women. Therefore I am not ugly, for the effect of ugliness — its deterrent power — is nullified by money.

I, according to my individual characteristics, am lame, but money furnishes me with twenty-four feet. Therefore I am not lame.

I am bad, dishonest, unscrupulous, stupid; but money is honoured, and hence its possessor. Money is the supreme good, therefore its possessor is good. Money, besides, saves me the trouble of being dishonest: I am therefore presumed honest.

I am brainless, but money is the real brain of all things and how then should its possessor be brainless? Besides, he can buy clever people for himself, and is he who has power over the clever not more clever than the clever?

Do not I, who thanks to money am capable of all that the human heart longs for, possess all human capacities? Does not my money, therefore, transform all my incapacities into their contrary?

If money is the bond binding me to human life, binding society to me, connecting me with nature and man, is not money the bond of all bonds? Can it not dissolve and bind all ties? Is it not, therefore, also the universal agent of separation? It is the coin that really separates as well as the real binding agent.

The triumph of just war theory is clear enough: it is amazing how readily military spokesmen during the Kosovo and Afghanistan wars used its categories, telling a causal story that justified the war and providing accounts of the battles that emphasized the restraints with which they were being fought.

‘Moral theory’, said Walzer, ‘has been incorporated into war-making as a real constraint on when and how wars are fought.’ It was no longer a concern merely of clerics, jurists and professors, but of generals too. Just as the careful and delicate missile strikes of the first Gulf War had been an improvement on earlier bombardments of Korea and Vietnam, so NATO’s pummelling of the Balkans and Central Asia had granted ‘just war theory a place and standing that it never had before.’

Walzer – speaking at a New School conference alongside Richard Holbrooke, Michael Ignatieff, Samantha Power, David Rieff and Marty Peretz – denounced a ‘doctrine of radical suspicion’ that would ‘condemn and oppose’ any and all ‘American military actions.’ The role of the philosopher was not to carp and criticize from the sidelines; it was ‘internal to the business of war.’

Of course, the father of a scholarly sub-field is rarely a reliable guide to its future direction and preoccupations. But in 2002 little clairvoyance was needed to see that the war-legitimation business was about to expand.

These folks – they know who they are, even if you don’t – have helped to make it all possible, jurisprudentially and ideologically speaking. Preemptive strikes, ‘limited punitive actions’, the lot.

John Kerry’s big reveal, his statement laying out a casus belli for Syria, was strikingly desultory and underwhelming, even by recent standards. (‘I’m not asking you to take my word for it. Read for yourself, everyone, those listening, all of you, read for yourselves the evidence from thousands of sources, evidence that is already publicly available.’)

Essential for the State Department’s ‘credibility’, therefore, were the prior efforts of policy intellectuals straddling academia, journalism and the security state. Despite the implausibility of Kerry’s claims and the listlessness of his performance, respectable public opinion has long since adopted the view that power projection and military expeditions in the name of human rights and ‘international norms’ are, after all, a rather airy and vaporous business, in terms of actual legal constraints or even normative prohibitions.

Territorial sovereignty, these intellectuals have insisted – post-Desert Storm, post-Walzer – does not necessarily bind or impede the activity of other states. It is instead a conditional licence granted to lesser states by powerful ones – that is, by the ‘international community’.

Its transgression does not per se make a crime: its revocation may be warranted, at some moments, if Washington so desires.

My view is that crimes of aggression are deserving of international prosecution when one State undermines the ability of another State to protect human rights [i.e. only under this particular condition].

This thesis runs against the grain of how aggression has been traditionally understood in international law.

Previously, it was common to say that aggression involved a State’s first strike against another State, where often what that meant was simply that one sovereign State had crossed the borders of another sovereign State. In this book I argue that the mere crossing of borders is not a sufficient normative rationale for prosecuting State leaders for the international crime of aggression.

At Nuremberg, charges of crimes against humanity were pursued only if the defendant also engaged in the crime of aggression. I now argue for a reversal of this position, contending that aggression charges should be pursued only if the defendant’s acts involved serious human rights violations. Indeed, I argue that aggression, as a crime, should be defined as not merely a first strike against another State but a first wrong that violates or undermines human rights.

If there are to be prosecutions for crimes against peace (or the crime of aggression) that are similar to prosecutions for crimes against humanity and war crimes, then there must be a similarly very serious violation that aggression constitutes. Mere assaulting of sovereignty does not have the same level of seriousness and is not as universally condemned as are the other crimes. For this reason, among others, I argue that aggression, as a crime, needs to be linked to serious human rights violations, not merely to violations of territorial integrity.

[…]

If a given State is not generally protecting human rights, it will be less clear that war waged against such a State is indeed best labeled aggressive and unjustified war. Indeed, if States systematically violate the basic human rights of their citizens, then those States have no right to insist that other States respect their sovereignty

[…]

Of course, there are States that have been massive violators of human rights, and wars waged to stop such States are not generally aggressive in my view.

[…]

What made the Nazi case stand out was the scale and viciousness with which it was fought, not that it was a case of aggression. So, the value of Nuremberg as a “precedent” for future trials of leaders for aggressive wars is here also unclear.

[…]

It is odd indeed to call the humanitarian actions of a State by the name “aggression” since that implies that there is some hostility behind the intervention. If the intervention is truly motivated by humanitarian concerns, then calling it aggression and therefore also hostile seems out of place.

It is also hard to see that humanitarian interventions constitute wrongs at all, let alone the most important of wrongs in the international arena, and hence we have reason to think that crossing State borders is not always wrong. Humanitarian intervention may indeed often be ill advised since anything that contributes to the horrors of war is to be avoided at nearly all costs. But if the motivation for the humanitarian intervention is to stop genocide, then the war may not be ill advised even though there is a serious risk of the major loss of civilian life that occurs in most wars. Here we might do some rudimentary utilitarian calculation to see that stopping genocide by means of a war could be justified.

Humanitarian wars can at least be prima facie defended in such circumstances as the genocide in Darfur. Such wars might be technically aggressive – at least, according to traditional doctrine – in that they involve invasion by one State against another State that is resisting rather than consenting to the invasion. Yet, since no “hostility” motivates the invading State and the international community in effect consents to allow the invasion, it seems as if the designation of aggression is the kind of technical characterization that doesn’t bear much normative weight. Aggression, as traditionally understood, is not itself a trigger of normative disapproval; some aggression, such as that form that stops worse aggression, could be a very good thing indeed, as theorists from the Just War tradition and contemporary international law have claimed. This is one reason I urged that we abandon the traditional way of understanding aggression.

Yesterday’s Financial Times included an interview by its Beijing correspondent with a senior executive from the Chinese-owned engineering and construction firm Sinohydro.

In it, Wang Zhiping lamented the billions of dollars in asset write-downs and lost contracts suffered by Chinese firms due to ‘political instability’ (i.e. US-promoted regime change, state failure and secession) in Libya, South Sudan, Mali, Central African Republic, Iraq, Afghanistan and Burma.

The article described the fallout – regrettable and inadvertent, of course – from NATO’s recent African military ventures, diplomatic intrigues and assertions of force majeure:

After years of expansion into emerging markets and developing a reputation along the way for taking on projects in difficult environments, experiences such as Libya are prompting a change in the way that Chinese companies assess risk. The shift – backed by Beijing – comes as Chinese companies increasingly compete with Bechtel, Hyundai Engineering, Leighton and other international contractors.

[…]

Chinese engineering companies last year produced $117bn in revenues from contracts outside China – a 10-fold increase over the past decade, according to the Chinese government. Five of the world’s top 10 contractors are now Chinese, according to the Engineering News Record, a trade publication.

While Chinese contractors can compete on technological prowess, they face a big challenge dealing with political risk, particularly after hard-earned lessons through kidnappings in war-torn areas such as Libya, Mali and Afghanistan.

Many of Sinohydro’s overseas projects – from mines and roads to power stations and football stadiums – are funded by Chinese loans to the host country, which are repaid with resources such as crude oil.

While the model has helped cash-strapped governments that might otherwise shy away from building needed infrastructure, it has left Chinese companies exposed in many of the world’s conflict zones.

[…]

“If the risks are too high, we just won’t go there [now],” says Mr Wang. “Our greatest concern is the instability caused by political risk in overseas markets, including armed conflict.”

Sinohydro’s “caution” list includes Iraq, Afghanistan and Myanmar, where the military junta unilaterally suspended a $3.6bn hydropower project in 2011.

“That is just an estimated figure,” says Mr Wang. “For other losses, like how many cars have been blown up, or exact losses for every physical asset . . . it is very difficult to get an exact number.”

Sinohydro has also been caught up in other conflicts with workers either killed or kidnapped in South Sudan and Afghanistan. And the current conflict in Mali threatens to jeopardise one of its hydropower projects.

To these methods of overt US aggression and tortious interference, we can add the less objectionable corporate watchword of ‘green growth.’

In recent years, the need to invest in ‘clean energy’ has supplied public justification for the state-assisted efforts of US- and European-owned engineering, construction and mining firms (Bechtel, ABB, RWE, Skanska, etc.) to secure infrastructure contracts and fasten down supplies of raw materials ahead of their Chinese competitors.

But these developments aren’t the real interest of the FT article.

What the piece quietly makes clear is that the sovereign needs of the US government now conflict with the system-wide needs of world capitalism.

Washington’s exercise of its imperial power is no longer the benevolent, positive-sum game of yore, in which it could pursue its own interests while acting as guarantor of private property rights, monopolizer of force, keeper of civil peace and manager of the global division of labour on behalf of the world’s governments and propertied elite.

Rather than satisfying the wishes of the world’s investors for stable political institutions, Washington’s need to maintain strategic pre-eminence now leads it to create continent-wide zones of political turmoil, state failure, secession and insecure property rights.

Amidst such basic uncertainty, where one’s assets may be seized, obligations repudiated or agreements rendered worthless, how is stable global accumulation possible?

Now, your Honor, I have spoken about the war… For four long years the civilized world was engaged in killing men… It was taught in every school, aye in the Sunday schools. The little children played at war…

We read of killing one hundred thousand men in a day. We read about it and rejoiced in it – if it was the other fellows who were killed. We were fed on flesh and drank blood. Even down to the prattling babe. I need not tell your Honor this, because you know; I need not tell you how many upright, honourable young boys have come into this court charged with murder, some saved and some sent to their death, boys who fought in this war and learned to place a cheap value on human life. You know it and I know it. These boys were brought up in it. The tales of death were in their homes, their playgrounds, their schools; they were in the newspapers that they read; it was a part of the common frenzy – what was a life? It was nothing. It was the least sacred thing in existence and these boys were trained to this cruelty.

It will take fifty years to wipe it out of the human heart, if ever…

Your Honor knows that in this very court crimes of violence have increased growing out of the war. Not necessarily by those who fought but by those that learned that blood was cheap, and human life was cheap, and if the State could take it lightly why not the boy?

The reverence was familiar: ‘an oration critically acclaimed as Australia’s version of Martin Luther King’s “I have a dream” civil rights speech’, a ‘seminal moment in reconciliation [that] has reverberated for two decades because of the power and poetry of its words.’

If, it was conceded sombrely, subsequent years have not realized ‘our’ hopes of 1992, there nonetheless was abundant consolation in the high values themselves, and in the ‘progress’ of their expression by a mature and enlightened member of the Australian ruling elite.

Journalistic effusions find an echo in the world of scholarly ideas. (The workaday proximity of journalists and talking heads to political leaders, alongside the maintenance of a steep social gradient between them, typically spawns infatuations that are both more intense and less durable than the mutual complicities and common enterprises shared by academic clercs, court philosophers and governing elites.)

In his celebrated book The Guilt of Nations (2000), historian Elazar Barkan described the emergence during the 1990s of a ‘new international morality’, now a ‘major part of national politics and international diplomacy’.

This liberal internationalism, personified by Clinton, Schröder, Chirac and Blair, had developed between the 1950s and 1970s but only achieved full flowering after the Cold War, when minority sensitivities could be indulged and ‘national self-reflexivity’ afforded.

Under a ‘new globalism that pays greater attention to human rights… international public opinion and organizations are increasingly attentive to moral issues.’

NATO airstrikes in Yugoslavia were one symptom of this ‘growing moral fervour’, founded in a ‘desire for moral politics’ and the ‘growing democratization of political life.’

Group rights were distinct from (irreducible to) the rights possessed by individuals who composed the group.

In this ‘negotiation’ between liberalism and communitarianism, the solid ‘inner core’ of liberal rights was swathed and adorned, and thereby enriched, by ‘local traditions and preferences’, the ‘place of the community’ and ‘group cultures’, ‘particularities and identities’.

Yet the existence of group rights had until recently been ignored — not least by the original ideologues of liberalism. Community rights had thereby been violated (recognition of a group’s traditions being one such collective entitlement enumerated by Barkan).

This raised the matter of collective guilt for national wrongdoing.

For, as a bearer of rights might also acquire corresponding duties (e.g. to uphold the rights of others), so collective desert implied the complementary possibility of group liability arising from group violation of rights.

Ideological coherence, rather than logical necessity, thus ensured that the crediting of one nation’s account would be balanced by a matching debit in another national ledger. An entire nation or other group could acquire guilt for having inflicted historical injuries on other nations or ethno-regional groups.

As one of the few attempts to historically situate the entry of ‘national guilt’ into approved usage, Barkan’s book had several merits.

Yet, for all this, it was a complacent and Whiggish piece of a sadly familiar sort.

In attributing the behaviour, public pronouncements and military interventions of state leaders to the diffusion of some new ‘international morality’, a global ecumene of human rights, it gave a Panglossian rendering of contemporary imperialism that bears comparison to Steven Pinker’s recent fluff.

More serious examinations are needed of the ideology of national guilt. (Since Barkan’s book, prominent academic work on the topic has been the domain of psychologists such as Nyla Branscombe and Bertjan Doosje and, in Australia, Martha Augoustinos and Amanda LeCouteur.)

Chimera it may be, but the distorted vision itself requires explanation.

What social foundations account for this common currency of respectable thought?

What historical circumstances license the extension of personal blame, incurred for wrongdoing by an individual, to a group or multi-person entity of which that person is a member?

(This is distinct from the question of indemnifying the individual for injurious acts committed ‘under orders’ or as part of a concerted group project.)

Here Barkan’s book, and the example of the limited-liability corporation, provide a clue.

Both provide historical evidence for the adaptability of ideological constructions to institutional changes: any entity may plausibly be characterized as a moral agent after it has been recognized as a legal personality or subject of right.

For example, the evolution of juridical categories has allowed recognition of the corporate enterprise as a legal person, distinct from its owners or those who founded it, persisting after its original members have departed. This status means the corporate enterprise is capable of buying, selling and owning property, entering into contracts, incurring debts, exercising certain rights and acquiring duties, suing and being sued in its own corporate name, etc.

Juridical accommodation, of course, merely granted legal sanction, ex post, to economic ‘facts on the ground’ (i.e. that a business enterprise was an accounting unit, an independent entity that could undertake transactions with external parties, and had its own balance sheet of assets and liabilities).

Economic and legal developments were then quickly filled with ideological content, the three levels bevelling smoothly to neutralize any potential difficulties arising from the institutional change.

Thus the economic subject — the bearer of property rights — could be represented as a natural artifact, as timeless as a person or a household, rather than as the historical product of contingent economic institutions.

Much as capitalism was personified, during more primitive times, in the heroic and risk-bearing individual ‘entrepreneur’, so the corporate firm has been anthropomorphized.

The corporation as an entity has been discovered to share various characteristics with natural persons: it is capable of wrongdoing, entitled to free expression (‘commercial speech’) and due process, etc.

The PR nostrums of ‘corporate social responsibility’ and ‘business ethics’ spring from this unpromising soil.

With the rise of the corporate enterprise, the depersonalization of property was accompanied by the personalization of the property holder.

Similarly, when one nation or ethno-regional group may be said, for example, to hold title to property, to have a common seal, to endure in perpetuity, and to be capable (via an incorporated entity, trust, etc.) of entering into enforceable contracts with another party for the transfer or use of that property, it may also coherently be said that another nation or group, having long ago (through its agents) unilaterally seized or confiscated this property ‘for itself’, has collectively abrogated the rightful possession of the first party and thereby committed a wrongful act for which it is morally responsible if not legally culpable.

Hence, in his Redfern speech, Keating’s description of colonial plunder persistently used the anonymous national pronoun: ‘We took the land.’

Thus Gareth Evans, introducing Keating’s Native Title bill to the Senate in 1993:

We do owe our indigenous peoples, our Aboriginal and Torres Strait Islander fellow Australians, a huge debt for the destruction and dispossession that we non-Aboriginal Australians wreaked for over 200 years of Australian history. I hope that, by passage of this legislation tonight, we have repaid just a little of that debt.

Features of contemporary society are thus projected backwards onto the history that preceded them: groups or nations, having become subjects of economic right in the present day, are observed to have been so all along (the corporate form presumes perpetual existence). And, the distribution and re-distribution of property having been everywhere a bloody affair, this status can be shown to go along with that of moral agency.

The nation, like the corporate enterprise before it, is ‘personalized’. And, with a regretful shrug, it is declared that one nation’s deprivation today follows, obviously enough, from another’s pilfering yesterday.

As with the pieties of corporate ‘conscience’, personalization of the ethnos as a collective moral actor is treated, for the most part, as a salutary event. At any rate, its consequences for contemporary ideology and PR are not nugatory. It provides a set of shared premises and underpinning assumptions for polite opinion, and supplies intellectual justification and publicity cloak for official policy.

A vision of the nation as moral agent allows legal scholars Martin Krygier and Robert van Krieken to remark, approvingly quoting Keith Windschuttle, that ‘the debate over Aboriginal history goes far beyond its ostensible subject; it is about the character of the nation’:

We are members of a nation… [We] did come here and we did some things and not others. We must come to terms with what we did.

(And with that has come a variety of unpleasant paraphernalia, most notably tit-for-tat games of reciprocal plunder in central African ‘ethnocracies’. Here ‘restitution’ or compensation for past wrongs is frequently advanced via collective retribution, criminal sanction, asset seizures or punishment conditioned on ethnic identity, lineal descent or kinship, language or residence in a specific territory.)

Official recognition of ethnically-based ‘group rights’ typically involves the assignment of property rights to land, minerals or other scarce resources. (Meanwhile, culture may ‘congeal into a naturally copyrighted, legally protected collective possession; in other words, into genetically endowed intellectual property.’)

These rights may be used by group leaders to bargain for a share of the rent yielded by production using those resources. Access is exchanged for ‘benefit-sharing’ (as royalty payments, licence fees, etc).

The degree of bargaining leverage, and thus the share of rent captured, may be increased by appealing to nationalism, collective guilt and other supporting ideologies.

The Australian case can stand as an example. Group-based property is vested in a registered trust or other incorporated entity. These bodies are legally administered, ‘on behalf’ of communal owners, by salaried functionaries.

Senior figures enjoy a measure of managerial autonomy. This includes the right to negotiate, on behalf of the ‘community’, ‘informed consent’ deals with mining companies, to grant licences and commercial lease agreements (e.g. Indigenous Land Use Agreements and Native Title Agreements), and thus to strike rent-sharing contracts.

Bargaining power depends on what, in practice, are veto rights over commercial development (though they can be overruled on grounds of ‘national interest’).

Negotiated agreements channel into trust funds a portion of revenue from mining operations. A large share of these payments (>40%) is allocated to administrative expenses (consultants, legal advice, operational budgets, etc.) which the organizations can be expected rationally to maximize.

In 2010-11, for example, the Northern Land Council received a royalty-equivalent payment of $28 million from the Aboriginals Benefit Account (ABA) to cover administrative costs, and $9 million from the ABA for onward distribution to royalty associations.

That financial year the Northern Land Council spent $2 million on consultants and $3.3 million on travel expenses. The Chief Executive Officer received total remuneration (including salary, spending allowance and performance bonus) of $172 000. The five other senior executives each received salaries of $126 000.
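The ‘greater than 40 per cent’ administrative share claimed above can be checked against the Northern Land Council figures just quoted. A back-of-envelope sketch, assuming the $28 million administrative payment and the $9 million passed on to royalty associations together make up the relevant ABA flow:

```python
# Back-of-envelope check of the administrative share of ABA payments,
# using the 2010-11 Northern Land Council figures quoted above.
admin_payment = 28_000_000   # royalty-equivalent payment covering administrative costs
distributed = 9_000_000      # onward distribution to royalty associations

total = admin_payment + distributed
admin_share = admin_payment / total

print(f"Administrative share: {admin_share:.1%}")  # ≈ 75.7%, well above 40%
```

On these figures the administrative share is not merely above the 40 per cent threshold but closer to three-quarters of the flow.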

This middle class of salaried bureaucrats and contractors absorbs a portion of the social surplus product.

As living standards for the vast, propertyless majority of Australian Indigenous people have stagnated or declined, the transfer of wealth and influence to these functionaries and ‘group representatives’ has been the material basis for ‘reconciliation.’

Ideological support for this transfer, on the other hand, comes from various pieces of bienséance that since 1991 have gone by the name ‘reconciliation’.

The latter is an official project to build ‘partnerships’ between Indigenous people and ‘the wider community’ (sic), weaving ties between Aboriginal and Torres Strait Islander ‘representatives’ and ‘government, business, peak organizations and community groups.’

The origin of this enterprise may be dated to the third quarter of the twentieth century, and to policy decisions made by loyal servants of the Australian state from across the partisan spectrum: Hasluck, Beazley, Woodward, Barnes, Coombs, Chaney.

Suitably repackaged for public consumption in the progressive language of ‘self-determination’, state policy won warm support from the periphery of the political world (lobbyists and well-meaning ‘activists’). It was fortified by the intellectual connivance of eminent academics (anthropologist Bill Stanner and ‘left-wing’ historians Henry Reynolds and Ann Curthoys). Several of the latter were patronized by Kirribilli House and launched by the mainstream media.

The sycophancy with which Keating is unanimously remembered says much about this milieu.

Key flanks secured, the project won the public battle of ideas easily, attracting a nod of legitimacy from editorial sages, press commentators, talking heads and nearly the entire spectrum of respectable opinion. (The noisy opposition of sectional interests, though intermittently useful enough to be given a public airing, was weak; it exercised little independent influence on state policy.)

Tactical disagreements on the scale and destination of disbursements aside, elite comity reigned on the need to share the spoils. With conventional wisdom thus debauched, the broad population was easily disoriented.

Collective guilt forms a central thread in the ideological tangle that Mick and Patrick Dodson like to call, in a telling phrase, Australia’s ‘unfinished business’. The nature of this venture may be gleaned from the elder Dodson’s remarks about the Northern Territory ‘emergency response’, made upon accepting the 2008 Sydney Peace Prize.

Hailing the previous day’s electoral coronation of Barack Obama, Dodson declared the ‘need for consultation, negotiation and partnership in dealing with any sector of the Australian community on whatever the issue.’ The NT intervention, in particular, was ‘pre-emptive, non-negotiated… crude, racist and poorly considered public policy.’ The government needed to ‘enter into a dialogue and negotiation over the nature of the engagement’.

Thankfully the Rudd Labor government, like Obama’s incoming administration, was seeking to avoid such ‘administrative disasters’, by recruiting accomplices from within ‘the Aboriginal Community’ to collaborate in ‘planning and implementation’ of such strategies, and in ‘governance delivery.’

‘Aboriginal-controlled organisations’ must have ‘roles in the delivery of the communications, education and social revolution’. Meanwhile aspiring young community leaders should ‘look to where they might maximise their participation in the strategies being put together by Industry and Government’.

The historical pre-condition for the emergence of Barkan’s ‘neo-Enlightenment mentality’ — the enlargement of the liberal framework to include ‘the place of community’ and group rights — was a political development whereby national membership came to endow people with ethnically-based claims to wealth (e.g. in which a ‘tribe’ could claim ‘customary’ ownership of a tract of land or other scarce resource).

Since the birth of industrial capitalism, technical innovations have meant a growth in the scale of production. This has been accompanied by gradual ‘de-personalization’ of the holder of title to property (the subject of right).

Not all of these entities possess the attributes conventionally required for moral responsibility: the capacity to hold beliefs, form goals, make decisions or undertake actions.

This does present problems for smooth ideological functioning of the existing system. The attempt to align economic, legal and moral categories cannot be supported, except by theoretical legerdemain or speciousness.

Here the categories of moral agency and legal personhood strive vainly for the flexibility of mainstream economics. For the latter, the rational agent may be ‘sometimes an individual, sometimes a household, sometimes a firm, sometimes a nation, and so on, depending on the demands of the problem and modeling convenience.’

Yet, though the nation may be an object of solidarity, affiliation and identification, it lacks any of the features that qualify the category for moral agency.

Under the terms of the ’employment relationship’, the employee agrees to surrender, for a specified period, disposition over his labour. Having hired out his capacity to work, the employee must carry out the commands of the employer or managerial agent:

We will say that B [the boss] exercises authority over W [the worker] if W permits B to select x [a ‘behaviour,’ i.e., any element of a set of ‘specific actions that W performs on the job (typing and filing certain letters, laying bricks, or what not)’].

That is, W accepts B’s authority when his behaviour is determined by B’s decision.

In general, W will accept authority only if x0, the x chosen by B, is restricted to some given subset (W’s “area of acceptance”) of all the possible values. This is the definition of authority that is most generally employed in modern administrative theory.
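Simon’s definition can be rendered as a simple predicate. This is only an illustrative encoding: the function name and the sample ‘area of acceptance’ are mine, not Simon’s notation.

```python
def accepts_authority(chosen_action, area_of_acceptance):
    """W accepts B's authority iff the action B selects (x0) falls within
    W's 'area of acceptance' -- the subset of possible actions W is
    willing to perform on command."""
    return chosen_action in area_of_acceptance

# A hypothetical worker willing to type, file or lay bricks on command,
# but nothing outside that set:
area = {"type letters", "file documents", "lay bricks"}
print(accepts_authority("lay bricks", area))        # True: within the area, obeyed
print(accepts_authority("falsify accounts", area))  # False: outside it, refused
```

The point of Simon’s formulation is precisely that authority is defined by the boundary of the set, not by the content of any particular command.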

Employees are contractually obliged to obey or comply with the directives issued by owners or their managerial delegates, whose commands have presumptive validity. The owner, who is the residual claimant of the firm’s profit (income net of wage and other input payments), is given the legitimate right to exercise authority and make decisions concerning the use of the firm’s income-generating assets or capital goods.

The administrative hierarchy of other multi-person organizations (e.g. armed forces) is organized according to a similar pyramidal command structure. It is this unified decision-making structure that makes it reasonable to attribute responsibility for actions to the collective entity rather than the individual atoms of which it is composed.

Thus we can say e.g. ‘the German Sixth Army beat a hasty retreat’ or ‘BP fouled up the Gulf of Mexico.’

It is of course possible to attribute causal powers to other types of social aggregates or collective actors — besides military command structures, business enterprises with administrative hierarchies, and incorporated legal persons that can enter into contracts and incur liability.

In some circumstances, the existence of certain institutions (e.g. political parties or strike committees) or a commonality of social position or interest (due to comparable degree of wealth) may give rise to group deliberation and conference, the emergence of decision procedures (whether formal or informal), mutual cooperation, collusion, unitary organization, the concerted exertion of purposeful effort, and joint action in pursuit of shared goals.

Thus it is permissible to say ‘the Pittsburgh Steelers won the Super Bowl’, ‘the German propertied classes turned in desperation to Hitler’, ‘the orchestra played well’, ‘the financial elite demanded that priority be given to low-inflation policies’ and ‘the gang knew they were done for after getting caught burgling the jewellery store’.

On the other hand, the idea, voiced publicly with increasing confidence throughout the 1990s, that a nation or an ancestral or ethno-regional group is a kind of supra-individual actor, a moral agent with responsibility for its actions, is neither philosophically respectable nor long-standing. (Here George P. Fletcher borrows Searle’s concept of ‘we-intentionality’ to argue for the idea).

With the group becoming the primary object of affiliation and identification, broader political alliances based on common social positions are precluded, to the benefit of careerist national representatives. Ethnic tensions displace class antagonisms. Frustrated hopes for economic security and social betterment may then be consoled and redeemed by symbolic victories.

Thus, whatever their obedient publicists in the media and academy may say, group-based rights and the ideology of collective guilt have helped to sustain the widespread misery of Indigenous Australians (mass unemployment, incarceration, absent services, missing infrastructure, low life expectancy), rather than remedying or ameliorating it — or merely failing to do so sufficiently.

These deplorable circumstances persist, in part, because of them rather than despite them.

Now officially consecrated in public memory, their emergence in establishment discourse merits no fond elegies.

The previous post considered advice, courtesy of Joe Biden, that videogame firms should try to ‘improve their public image’, presently mottled by various ‘kinds of evidence linking video games to aggression.’

Impressions, it seems, are everything: videogames firms don’t ‘necessarily need to change anything they’re doing,’ but must instead focus on ‘how they’re perceived by the public’.

The owners and managers of a business enterprise naturally want to preserve their full dominion over its assets, and the prerogatives (and cash flow) that follow from it.

Thus firms regularly are obliged to undertake defence of a product or activity that, while profitable, also poses a risk or hazard to consumers, employees, the environment, the assets of other firms, etc.

Restrictions on the prerogatives of ownership include many types of government regulation: quality standards, labelling laws, health and sanitation laws, zoning ordinances or land-use restrictions that limit where commercial and industrial structures may be built, commercial licences that control who and where people may operate businesses, minimum-wage laws, anti-discrimination laws, pollution control and monitoring by environmental protection agencies, occupational safety and health regulations, taxation or eminent domain, and establishment of civil remedies.

In ordinary circumstances, it must be said, any hazardous byproducts (negative externalities or ‘market failures’) arising from economic activity, while of course regrettable, are hardly prohibitive.

Both tort law and government regulation aspire to an ‘efficiency standard’, balancing the costs arising from some commercial activity or product against its benefits.

Broadly speaking, if the increment in profits outweighs the decrement in human lives or environmental amenity, according to some arithmetic, the tradeoff is deemed ‘worth it’.
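The tradeoff just described can be sketched as a toy calculation. The device of a fixed monetary ‘value of a statistical life’ is standard in regulatory cost-benefit analysis, but the figures below are invented for illustration, not drawn from any actual regulation.

```python
# Toy version of the 'efficiency standard': an activity passes if the
# increment in profits outweighs the monetized decrement in lives and
# environmental amenity. All numbers are hypothetical.

VALUE_OF_STATISTICAL_LIFE = 9_000_000  # assumed dollars per expected death

def passes_efficiency_test(profit_increment, expected_deaths, environmental_damage):
    total_cost = expected_deaths * VALUE_OF_STATISTICAL_LIFE + environmental_damage
    return profit_increment > total_cost

# A product yielding $50m in extra profit, at 3 expected deaths and $10m
# in environmental damage, is deemed 'worth it' on this arithmetic:
print(passes_efficiency_test(50_000_000, 3, 10_000_000))  # True: $50m > $37m
```

The arithmetic is deliberately crude, but it is recognizably the logic that tort law and regulatory review formalize.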

But, in rare circumstances, official opinion may decide that the troublesome product or activity imposes excessive or intolerable burdens upon the state (e.g. medical costs, political instability), upon other special interests (e.g. insurance providers) or upon a powerful and broad social constituency (e.g. the propertied classes as a whole, through higher wage bills or loss of legitimacy for existing social institutions).

In such cases, particular business interests may be sacrificed for the ‘greater good.’ The state may impose regulations limiting the full exercise of property rights, restricting what the offending owners may do with their assets or how their enterprises operate.

Thus the need for corporate ‘product defence’ campaigns.

These are deployed, permanently in some industries, to dispel alarm and forestall the threat of damaged business interests from lower sales revenue, product liability claims, government regulation or outright prohibition.

These ‘merchants of doubt’, co-opted or career, were set up in well-appointed Potemkin institutions for phony research. Their task was ‘establishing a controversy at the public level’, where no such equivocation existed at the level of peer-reviewed science.

The economist W. Kip Viscusi argued that the addictiveness of cigarettes, as measured by smokers’ responses to rising prices, was comparable to ‘consumer products that people generally do not consider addictive, such as theater and opera, legal services, and barber shops and beauty parlors.’

And anyway, he added, premature deaths caused by smoking save the government the cost of pensions and nursing homes.

Videogame firms face a similar need to defend their product against the risk of regulation, damaging criticism, penalty or suppression.

This is because ‘programmers and advertisers may not take into account the full costs to society of the show they schedule or support.’ Such costs include the desensitization, increased aggression and fear experienced by audiences, particularly children.

So defined, and according to the conventional prescription of ‘public policy’ experts, this means that the remedies for media violence must be similar to the solutions for environmental pollution: zoning (e.g. for broadcast TV, ‘shifting violent programs to times when children are less likely to be in the audience’) or taxation.

Thus several jurisdictions, including the state of California, have attempted to prohibit the sale of violent video games to minors.

But the response by videogames firms has been different from that followed by cigarette manufacturers and oil corporations.

Certain features of the product itself and the market for video games, as described below, make it less necessary for firms to directly fund ‘product defence’ by bought-and-paid-for researchers and centrally directed think tanks (which these firms nonetheless do finance).

For several reasons, which are outlined below, the advocacy service is already provided at close to zero expense — by ideologists, consumers, other segments of the mass-communications media and academics.

The latter constitute, I will argue, a decentralized ‘epistemic community’ of like-minded people and linked institutions. Shared incentives (and self-conscious group identity) motivate them to adopt similar beliefs about the harmlessness of violent video games, ignoring (for both psychological and commercial reasons) available information that disconfirms such beliefs.

But the first reason can be dealt with briefly, since it is least relevant to my point in this post.

Any statement regarding the harmfulness of video games can simply be trumped (in the US) by brandishing the First Amendment, thereby activating the professional guild values of journalists and academics.

A seemingly dispositive argument can be made that commercial videogames are constitutionally-protected speech, including when addressed to minors and involving extreme violence. Thus their sale is immune from restriction or impediment, ‘even where protection of children is the object’ (Antonin Scalia).

In 2011 this argument prevailed 7-2 in the Supreme Court’s decision in Brown v Entertainment Merchants Association, which struck down the Californian statute.

Since ‘there is no exception for violence’, voluntary ‘self-regulation’ by the industry and ‘parental empowerment’ are the only responses available to ‘what some people think is offensive’ (legal counsel for Michael Gallagher, president of the Entertainment Software Association).

If so desired, the syllogism may be extended to a broader claim: any critical scrutiny of a ‘creative’ product violates the First Amendment rights of its maker.

A recent example appears in the breathtakingly disingenuous statement issued by a Sony Entertainment spokeswoman, in response to criticism from within Hollywood of Kathryn Bigelow’s Zero Dark Thirty: ‘The film should be judged free of partisanship. To punish an artist’s right of expression is abhorrent. This community, more than any other, should know how reprehensible that is.’

A second feature of video games is much more important in explaining why the industry’s PR defence occurs, in large part, without the involvement of centrally organized or directly paid agents.

This feature is the network externality: the value or benefit of a product increases with its popularity or number of users. Additional users make the product more valuable or appealing (i.e. increase the willingness of buyers to purchase it at the going price).

Sellers duly profit from this cascade or bandwagon effect.

With videogame consoles and other platforms, an increase in the number of one type of user (customers or game players) increases the number of another type of user (content providers or game developers).

Buyers of videogame consoles want games to play on; game developers pick platforms that are or will be popular among gamers…

Videogame platforms, such as Nintendo, Sega, Sony Play Station, and Microsoft X-Box, need to attract gamers in order to convince game developers to design or port games to their platform, and need games in order to induce gamers to buy and use their videogame console. Software producers court both users and application developers, client and server sides, or readers and writers. Portals, TV networks and newspapers compete for advertisers as well as “eyeballs”. And payment card systems need to attract both merchants and cardholders.
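The cross-side feedback described above can be caricatured in a few lines: gamers are attracted by the size of the games catalogue, developers by the installed base of gamers, and the two sides feed each other until adoption settles at a fixed point. The functional forms and coefficients below are illustrative assumptions, not estimates.

```python
# Toy model of cross-side network effects on a games platform.
# Each side's adoption depends on the size of the other side.

def gamers_attracted(num_games):
    # Base demand plus a catalogue effect: each game draws extra buyers.
    return 1000 + 20 * num_games

def developers_attracted(num_gamers):
    # One developer finds it profitable to supply a game per 100 gamers.
    return num_gamers // 100

gamers, games = 1000, 0
for _ in range(20):  # iterate the feedback until it settles
    games = developers_attracted(gamers)
    gamers = gamers_attracted(games)

print(gamers, games)  # → 1240 12: both sides end up larger than either would alone
```

Even in this crude form, the mechanism shows why platform owners will subsidize one side of the market (cheap consoles, financed devkits) to set the bandwagon rolling on the other.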

The console firms design and manufacture hardware, then contract out to independent game developers to provide games for the platform (as well as producing their own in-house titles). They may finance the developer’s large fixed costs.

The independent developer pays a fixed fee to the console maker for use of proprietary software development tools (the ‘devkit’), then also pays a per-unit licensing royalty on sales. These IP royalties, a form of rent, are the principal source of profit for the console producers and ‘publishers’.
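As a rough arithmetic sketch of this revenue split (all figures below are invented for illustration, not industry data), the platform owner's take on a title combines the fixed devkit fee with a royalty on every unit sold:

```python
def console_maker_take(units_sold, royalty_per_unit, devkit_fee):
    """Per-title revenue to the platform owner: a fixed devkit fee
    plus a per-unit IP royalty on every copy sold."""
    return devkit_fee + royalty_per_unit * units_sold


# A hypothetical title selling 500,000 copies, with an assumed
# $7 per-unit royalty and a $10,000 devkit fee:
print(console_maker_take(500_000, 7.0, 10_000))  # 3510000.0
```

Because the royalty scales with sales while the devkit fee does not, the platform owner's profit on a hit title is almost entirely rent on its intellectual property, which is the point made above.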

This means that the value of other products is increasing in the popularity of video games. These complementary products find their usefulness to buyers is enhanced as video games themselves have more buyers.

For example, growth in the number of users of particular software increases the attractiveness of a complementary component, console or other hardware — such as an HD TV, a speedy Internet connection or PC, a new handheld device and so on.

There are spillovers across markets: the more buyers a game has, the more attractive becomes brand or merchandise tie-ins, the more advertising and games journalism can occur, the more likely becomes permission to use proprietary material (music and film) in return for per-unit royalty fees, and so on.

Sony famously tried to increase demand for its Blu-ray discs, and its revenues as a movie studio, by bundling a Blu-ray player into its PlayStation 3 console.

[As] gamers know – and economists have confirmed — the demand for great video and computer game experiences also drives sales of complementary products and services, such as for broadband and high-definition TVs. Our industry stimulates complementary product purchases of roughly $6.1 billion a year in the U.S. alone. These purchases are also spread around to businesses large and small.

Network externalities mean that the greater the number of consumers purchasing and using video games, the larger is demand in several other distinct markets.

This includes others not mentioned by Gallagher, among them the mass communications media, advertising, journalism and other opinion-making fields. All may experience mutually increasing demand for their product as the number of people adopting and playing video games grows.

This translates into material rewards and personal advantage: higher profits (or rents) for owners and higher earnings (or other labour-market success) for employees in these complementary markets.

Along with games consumers themselves, these providers of complementary products (whose returns increase with the usage of video games) therefore have incentives to provide the games industry with ‘product defence’, flattery and boosterism. Thus they can be found disseminating cheerful claims that violent video games are neither a public-health threat nor morally objectionable.

Success really does provide its own justification. Self-conscious individual corruption is not necessary. Motivated belief formation (‘wishful thinking’, dissonance reduction or effort justification) is sufficient to persuade most people that whatever brings them rewards and a livelihood can’t be altogether bad.

The familiar dynamics of belief transmission in tightly clustered social networks then apply, with epistemic contagion ensuring that all members share credence in the safety of violent video games.

Increasing returns in the market for video games (and thence for related products) provide a scaffold for the propagation of beliefs about the soundness of the product.

In other words, there is no need for video-games firms to follow the example of tobacco firms. The latter had to seek out Reader’s Digest and persuade Edward R. Murrow to cease the damaging coverage of their product. In the presence of strategic complementarity, however, good press and favourable PR take care of themselves.

Ultimately the video games industry is tied to other sections of the media, information and entertainment industry — by direct threads of ownership, credit, cross-subsidy, and labour-market adjacency — in ways that did not apply to Philip Morris or Exxon.

Thanks to this relationship, there is a standing army of journalists, bloggers and opinion-makers who will reliably leap to the defence of games without needing to be bamboozled or force-fed talking points.

(See the scornful online article in Condé Nast publication Vanity Fair about Biden’s meeting: ‘Didn’t Tipper Gore resolve the “violent video games” issue shortly after she heard Prince for the first time, in 1985, and insisted on warning labels on CDs and game packaging? Apparently not.’)

Of course, it is true that tabloid TV programs, newspapers and talk-radio presenters do periodically suggest — usually following some mass shooting — that violent video games may have deleterious effects on their users or on society.

But they also regularly rail against the greed of banks and the venality and corruption of politicians.

This never seriously threatens the continued existence or positions of the latter, any more than the commercial survival of a profitable branch of the entertainment industry is endangered by the feeble, short-lived denunciations of ‘old media’ commentators. (Such critical beliefs about e.g. banks, which find no outlet in the electoral system or within reach of any available levers of popular influence, are allowed only inchoate and limited expression. They may thereby be channelled into such useful directions as racism, scapegoating, etc., or leveraged for authoritarian or reactionary purposes, or deliberately stoked by one powerful group to win bargaining power over another.)

Most ‘anti-games’ media commentators, of course, are employed or paid by a firm that itself is a subsidiary of some conglomerate or holding company (Vivendi, Viacom, Disney, Time Warner, etc.) that also owns firms publishing, developing, marketing or distributing video games.

Traders in the language of ‘old media’ and ‘new media’ take their generational framework quite literally, as though novel industries within the consumer-entertainment sector must inevitably compete with and displace traditional and existing ones, much as each human generation must physically supplant that which it succeeds.

As I’ve shown, remonstrating against ‘moral panic’ has been deployed to great effect by Christopher Ferguson and others to deter all criticism of violent video games.

The claim presented here (packaged in the language of 1970s ‘left-wing’ sociology) is that ‘old media’ entities are fogeyish cultural ‘authorities’ seeking to preserve their privileges. They are resistant to novelty, such as is found in ‘new media’ products like video games.

This argument is calculated to push all sorts of buttons and win a broad, ramified constituency.

The ‘knowledge economy’ rhetoric is chosen to win the allegiance of a self-identified ‘creative class’, which looks favourably upon new forms of entertainment, information and communications technology. Borrowing from the sociology of deviance, meanwhile, aims to attract ‘progressives’ who sympathize with the marginalized.

The result is a neat contrarian package, unassailable by anyone who considers themselves to be ‘sophisticated.’

But there is no reason games can’t merely supplement existing media, and become part of the asset portfolio of existing media giants (Activision, for example, is now a subsidiary of Vivendi, having been started during the 1970s as an independent company by disgruntled Atari games developers).

Indeed, due to the high fixed costs and low marginal costs involved in digital production and distribution, it seems inevitable that the sector should exhibit economies of scale and thus create barriers to entry. Its surviving firms are destined to become subsidiaries of (or to go on licensing intellectual property from) some conglomerate or holding company.
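The scale logic here is simple enough to state as a formula: with fixed cost F and marginal cost m, average cost AC(q) = (F + m·q)/q falls toward m as output grows, so larger firms can always undercut smaller ones. The numbers below are assumptions for illustration, not figures from the text:

```python
def average_cost(units, fixed_cost=50_000_000, marginal_cost=1.0):
    """AC(q) = (F + m*q) / q, which declines monotonically in q:
    the fixed cost of development is spread over ever more units,
    while each extra digital copy costs almost nothing."""
    return (fixed_cost + marginal_cost * units) / units


# Average cost per copy at three output levels (assumed $50m
# fixed cost, $1 marginal cost):
for q in (100_000, 1_000_000, 10_000_000):
    print(q, round(average_cost(q), 2))
# 100000 501.0
# 1000000 51.0
# 10000000 6.0
```

A firm selling ten million copies produces at a fraction of the unit cost facing a small entrant, which is the barrier to entry the paragraph describes.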

Few avowedly ‘progressive’ people have sympathy for such corporate media behemoths as Sony, Microsoft, etc. They may however be induced, by what Thomas Frank called ‘market populism’, to express enthusiasm for the venture-capital driven world of games.

In this sector, small and scrappy developers and start-up companies (and later small- and medium-sized enterprises) have few assets and thus are credit constrained.

These firms therefore rely on private equity finance from Silicon Valley. As shown above, they also feed money into the pockets of those large companies (oligonomies, to use Steve Hannaford’s term) that own the platform for independent content producers and the distribution system for customers (Apple’s iTunes, Amazon, Netflix, Rhapsody, etc., and the three big games-console makers, Sony, Nintendo and Microsoft), as well as to those that aggregate and allow mining of massive data sets to build fortunes from advertising brokerage (Google, Facebook).

This leads on to the third reason why the video games industry has not needed to rely upon centrally organized ‘merchants of doubt’, nor ‘astroturf’ through paid agents, to defend their products.

Consider the remark made by Georgia Tech professor Ian Bogost in The Atlantic about the visit by games industry CEOs to the White House for Biden’s meeting:

[Public] opinion has been infected with the idea that video games have some predominant and necessary relationship to gun violence, rather than being a diverse and robust mass medium that is used for many different purposes, from leisure to exercise to business to education… Truly, we cannot win.

‘We’, says the faculty member of a state university about ‘my colleagues in the games industry’.

There are now many humanities and social science scholars who, faced with shrinking faculty budgets, stingy hiring policies and poor tenure prospects, have in desperation hitched their wagon to a rapidly growing segment of the entertainment industry.

These academics have perceived a confluence of fortunes: as the games industry goes, so do they. As Bogost says, they naturally seek to acquire ‘cultural legitimacy’ for their medium.

An acknowledgement of video games’ good standing — as a respectable non-hazardous part of the culture, a ‘diverse and robust mass medium’, worthy of journal articles and monographs — is needed if these ambitious academics are to succeed in capturing a permanent seat at the table (perhaps fusing with cinema studies or even sitting alongside it as rough equals).

Therefore many of these scholars are obliged to defend violent games and to furnish the desired ‘no proof of harm’ arguments, come what may.

(Consider Texas A&M psychologist Christopher Ferguson’s comical attempt to argue against the well-supported hypothesis that violent games desensitize users to violence. The recently published study involved his student participants watching an episode of the programs Law and Order: SVU, Bones or Once Upon a Time, then failing to self-report reduced empathy when subsequently shown violent footage. Here we can add a corollary to the argument about the unique situation faced by defenders of video games in the academy. On average, peer review forms less of a barrier to publishing worthless or spurious results in social science or humanities journals than in the natural sciences.)

Many videogames scholars are precariously ensconced in academia: lowly adjuncts who receive no White House invitations. They are obliged to supplement their teaching income through paid work linked to the games industry (e.g. promotion, development, journalism).

Others, with stabler positions and savings to play with, can risk starting up their own firms (Bogost is one, though it seems unlikely to me that his above use of the collective first-person pronoun referred to this).

Such varieties of dependence and forms of extramural interaction create a commonality of both personnel and interests, tying the commercial success of a product to the scholarly work based on it and strengthening feelings of affiliation. This sense of shared fate is not mistaken, and it leads to unabashed scholarly apologetics for video games.

Amid the laudation, scope exists for some academics to engage in ‘criticism’ of certain aspects of the videogames industry and its products. But such reproaches are only of the nourishing, tough-love type that ultimately has the industry’s welfare at heart. Bogost’s encomium captures the general tone.

For all these reasons, there seems little requirement for videogames firms to orchestrate a subterranean ‘product defence’ by funding dedicated merchants of doubt. There are plenty of respectable and motivated people who perform the cheerleading task already as a sideline to their day job.

Now comes the final and perhaps most crucial reason why videogames firms have not had to spend more on paid agents and front groups to undertake a political defence of violent games (though expenditure on ‘government relations’ professionals is indeed enormous: the ESA typically spends more than $1 million per quarter on K Street lobbyists).

Therefore the fact that members of the country’s population are presented with large doses of realistically depicted violence as ‘entertainment’ — thereby being brutalized from early childhood — must prompt little concern, and provoke some pleasure, in ruling circles.

In a submission to a US Senate committee investigating violent media products, Thierer wrote:

Many people — including many children — clearly have a desire to see depictions of violence… Could it be the case, then, that violent entertainment — including violent video games — actually might have some beneficial effects? From the Bible to Beowulf to Batman, depictions of violence have been used not only to teach lessons, but also to allow people — including children — to engage in a sort of escapism that can have a therapeutic effect on the human psyche. It was probably Aristotle who first suggested that violently themed entertainment might have such a cathartic effect on humans…

One might just as easily apply this thinking to many of the most popular video games children play today, including those with violent overtones…

This echoes Judge Posner’s opinion in the Kendrick case that: ‘To shield children right up to the age of 18 from exposure to violent descriptions and images would not only be quixotic, but deforming; it would leave them unequipped to cope with the world as we know it.’

In what Thierer called a ‘blistering tour-de-force’, Posner ‘[explained] how exposure to violently-themed media helps to gradually assimilate us into the realities of the world around us.’

But what did the eminent Posner mean by the ‘world as we know it’?

His 2001 judgement (on an Indianapolis ordinance banning ‘gratuitously violent’ games in arcades) had gone on curiously:

Now that eighteen-year-olds have the right to vote, it is obvious that they must be allowed the freedom to form their political views on the basis of uncensored speech before they turn eighteen, so that their minds are not a blank when they first exercise the franchise… People are unlikely to become well-functioning, independent-minded adults and responsible citizens if they are raised in an intellectual bubble.

So Posner’s defence of hyper-violent video games was that they mould the political views of children, making of them responsible citizens ready to exercise political judgement. Lacking such inputs they would, apparently, make unreliable voters.

What callowness did games erase: what reality were children being prepared for?

Some idea may come from another submission to the same Senate committee hearing on violent games, this one by David Horowitz, director of the industry lobby group the Media Coalition.

The impossibility of distinguishing “acceptable” from “unacceptable” violence is a fundamental problem with government regulation in this area. The evening news is filled with images of real violence in Iraq and Afghanistan routinely perpetrated by the “bad” guys. Often this horrific violence goes unpunished. It would be virtually impossible for the government to create a definition that would allow “acceptable” violence but would restrict “unacceptable” violence.

This description refers to Carmen M. Ortiz, a US attorney for the Justice Department, who handled the indictment of Aaron Swartz for allegedly accessing vast numbers of academic papers from JSTOR without authorization.

But, except for the case particulars about the Internet and IP, the description might also apply to Martha Coakley (Massachusetts attorney general and failed candidate for the US senate), Thomas Reilly (her predecessor, later beaten by Deval Patrick for his party’s nomination as gubernatorial candidate) and Scott Harshbarger (Reilly’s predecessor as state attorney general, losing gubernatorial candidate and ex-president and CEO of Common Cause).

The latter three vaulted to prominence and sought higher office by railroading a family of Middlesex county day-care centre providers, in an infamous case alleging ritual child abuse, based on fantastic testimony elicited from children. (Such episodes of hysteria were common during the 1980s and early 1990s, when the mix of prurience, career opportunity and right-thinking sexual politics proved irresistible to some ‘progressive’ journalists, social workers, lawyers and psychologists.)

Ortiz thus has several forebears in the role of grubbily ambitious Massachusetts Democrat prosecutor. The habitual lack of probity displayed by such people follows, quite straightforwardly, from their professional incentives.

[When] the system isn’t working, it doesn’t make sense to just yell at the people in it — any more than you’d try to fix a machine by yelling at the gears… When there’s a problem, you shouldn’t get angry with the gears — you should fix the machine.

Of course, a society isn’t a machine, and the role of lawyers in it isn’t subject to tinkering (by whom?), corrective repair or gradual amendment.

In the contemporary United States, the social privileges enjoyed by elite members of the legal profession follow, in part, from an institutional evolution that took place long ago, transforming property rights, technology and the state.

The foundation of modern US tort law was bound up with changes to ownership rights, the development of mechanized industry and the status of juries and the bar. This transition was described by Morton Horwitz in his classic analyses of US law between the War of Independence and the Civil War.

As Horwitz described it, this period involved the ‘overthrow of eighteenth century pre-commercial and anti-developmental common law values’:

As political and economic power shifted to merchant and entrepreneurial groups in the post-revolutionary period, they began to forge an alliance with the legal profession to advance their own interests through a transformation of the legal system.

Decisive changes occurred over the question of water rights with the development of textile, paper and saw mills in New England, New York and Pennsylvania (the first being Samuel Slater’s water-powered mill in Pawtucket).

‘Under the Mill Acts, an owner of a mill situated on any non-navigable stream was permitted to raise a dam and permanently flood the land of all his neighbors, without seeking prior permission’:

[The] law of negligence became a leading means by which the dynamic and growing forces in American society were able to challenge and eventually overwhelm the weak and relatively powerless segments of the American economy. After 1840 the principle that one could not be held liable for socially useful activity exercised with due care became a commonplace of American law. In the process, the conception of property gradually changed from the eighteenth century view that dominion over land above all else conferred the power to prevent others from interfering with one’s quiet enjoyment of property to the nineteenth century assumption that the essential attribute of property ownership was the power to develop one’s property regardless of the injurious consequences to others…

Anticipating a widespread movement away from property theories of natural use and priority, they introduced into American common law the entirely novel view that an explicit consideration of the relative efficiencies of conflicting property uses should be the paramount test of what constitutes legally justifiable injury. As a consequence, private economic loss and judicially determined legal injury, which for centuries had been more or less congruent, began to diverge.

Water-powered mills, by compelling changes in the rights and obligations of property owners, also implied changes in the scope and nature of liability incurred by failure to uphold duties:

At the beginning of the nineteenth century there was a general private law presumption in favour of compensation, expressed by the oft-cited common law maxim sic utere. For Blackstone, it was clear that even an otherwise lawful use of one’s property that caused injury to the land of another would establish liability in nuisance, “for it is incumbent on him to find some other place to do that act, where it will be less offensive.”

In 1800, therefore, virtually all injuries were still conceived as nuisances, thereby invoking a standard of strict liability which tended to ignore the specific character of the defendant’s act. By the time of the Civil War, however, many types of injury had been reclassified under a “negligence” heading, which had the effect of substantially reducing entrepreneurial liability. Thus the rise of the negligence principle in America overthrew basic eighteenth century private law categories and led to a radical transformation not only in the theory of legal liability but in the underlying conception of property on which it was based.

Meanwhile the social position of lawyers and judges was elevated:

One of the most important consequences of the increased instrumentalism of American law was the dramatic shift in the relationship between judge and jury that began to emerge at the end of the eighteenth century. Although colonial judges had developed various techniques for preventing juries from returning verdicts contrary to law, there remained a strong conviction that juries were the ultimate judge of both law and facts. And since the problem of maintaining legal certainty before the Revolution was largely identified with preventing political arbitrariness, juries were rarely charged with contributing to the unpredictability or uncertainty of the legal system. But as the question of certainty began to be conceived of in more instrumental terms, the issue of control of juries took on a new significance. To allow juries to interpret questions of law, one judge declared in 1792, “would vest the interpretation and declaring of laws, in bodies so construed, without permanences, or previous means of information, and thus render laws, which ought to be an uniform rule of conduct, uncertain, fluctuating with every change of passion and opinion of jurors, and impossible to be known till pronounced.” Where eighteenth century judges often submitted a case to the jury without any directions or with contrary instructions from several judges trying the case, nineteenth century courts became preoccupied with submitting clear directions to juries…

Juries were sidelined as certified legal professionals arrogated to themselves the exclusive right to decide on questions of law:

One of the phenomena that has most puzzled historians is the extraordinary change in the position of the postrevolutionary American Bar… In the period between 1790 and 1820 we see the development of an important set of relationships that made this position of [political and social] domination possible: the forging of an alliance between legal and commercial interests…

The leaders of the Bar in the period after 1790 are not the land conveyancers or debt collectors of the earlier period, but for the first time, the commercial lawyers…

[One] of the leading measures of the growing alliance between bench and bar on the one hand and commercial interests on the other is the swiftness with which the power of the jury is curtailed after 1790.

Three parallel procedural devices were used to restrict the scope of juries. First, during the last years of the eighteenth century American lawyers vastly expanded the “special case” or “case reserved”, a device designed to submit points of law to the judges while avoiding the effective intervention of a jury…

A second crucial procedural change – the award of a new trial for verdicts “contrary to the weight of the evidence” – triumphed with spectacular rapidity in some American courts at the turn of the century. The award of new trials for any reason had been regarded with profound suspicion by the revolutionary generation… Yet, not only had the new trial become a standard weapon in the judicial arsenal by the first decade of the nineteenth century; it was also expanded to allow reversal of jury verdicts contrary to the weight of the evidence, despite the protest that “not one instance… is to be met with” where courts had previously reevaluated a jury’s assessment of conflicting testimony…

These two important restrictions on the power of juries were part of a third more fundamental procedural change that began to be asserted at the turn of the century. The view that even in civil cases “the jury [are] the proper judges not only of the facts but of the law that [is] necessarily involved” was widely held even by conservative jurists at the end of the eighteenth century…

During the first half of the nineteenth century, however, the Bar rapidly promoted the view that there existed a sharp distinction between law and fact and a correspondingly clear separation of function between judge and jury. For example, until 1807 the practice of Connecticut judges was simply to submit both law and facts to the jury, without expressing any opinion or giving them any direction on how to find their verdict. In that year, the Supreme Court of Errors enacted a rule requiring the presiding trial judge, in charging a jury, to give his opinion on every point of law involved. This institutional change ripened quickly into an elaborate procedural system for control of juries…

The subjugation of juries was necessary not only to control particular verdicts but also to develop a uniform and predictable body of judge-made commercial rules.

Not until the nineteenth century did judges regularly set aside jury verdicts as contrary to law. At the same time, courts began to treat certain questions as “matters of law” for the first time. …

By 1812… in a decision that expressed the attitude of nineteenth century judges on the question of damages, Justice Story refused to allow a damage judgement on the ground that the jury took account of speculative factors that “would be in the highest degree unfavourable to the interests of the community” because “commercial plans would be involved in utter uncertainty.” As part of this tendency, judges began to take the question of damages entirely away from juries in eminent domain proceedings… Finally, as part of the expanding notion of what constituted a “question of law” courts for the first time ordered new trials on the ground that a jury verdict was contrary to the weight of the evidence, despite the protest that “not one instance… is to be met with” where courts had previously reevaluated a jury’s assessment of conflicting testimony.

By 1820 the legal landscape in America bore only the faintest resemblance to what existed forty years earlier. While the words were often the same, the structure of thought had dramatically changed and with it the theory of law. Law was no longer conceived of as an eternal set of principles expressed in custom and derived from natural law. Nor was it regarded primarily as a body of rules designated to achieve justice only in the individual case. Instead, judges came to think of the common law as equally responsible with legislation for governing society and promoting socially desirable conduct. The emphasis on law as an instrument of policy encouraged innovation and allowed judges to formulate legal doctrine with the self-conscious goal of bringing about social change….

Thus, the intellectual foundation was laid for an alliance between common lawyers and commercial interests. And when in 1826 Chancellor Kent wrote to Peter DuPonceau about the arrangement of his forthcoming Commentaries, he underlined the extent to which he would pay attention only to decisions of the courts of commercial states…

As the Bar was molding legal doctrine to accommodate commercial interests… the mercantile interest for the first time was required to recognize the legal primacy of the Bar.

The historical lesson that technical innovations (e.g. development of the water-powered mill) sometimes bring changes in property rights (and thus alter the role of lawyers) has obvious contemporary relevance.

In 1996 the economist Kenneth Arrow discussed how technical features of information as a commodity had brought about innovations in property law (IP) to preserve the exclusive rights of owners.

He nonetheless suggested that technical innovation called into doubt the very future of an economy (capitalism) built on private ownership of capital goods, the employment of propertyless workers, and the interaction through decentralized market exchange of discrete production units (firms):

Once obtained, it [information] can be used by others, even though the original owner still possesses it. It is this fact which makes it difficult to make information into property. It is usually much cheaper (not, however, free) to reproduce information than to produce it… Two social innovations, patents and copyrights, are designed to create artificial scarcities where none exists naturally…

The ability of information to move cheaply among individuals and firms has analogues with one class of property, called fugitive resources. Flowing water and underground liquid resources (oil or water) cannot easily be made into property. How does one identify ownership, short of labelling each molecule? … It is for this reason that water has always been recognized as creating a special property problem and has been governed by special laws and judicial decisions…

Let me conclude with some conjectures about the future of industrial structure. Information overlaps from one firm to another, yet the firm has so far seemed sharply defined in terms of legal ownership. I would forecast an increasing tension between legal relations and fundamental economic determinants. Information is the basis of production, production is carried out in discrete legal entities, yet information is a fugitive resource, with limited property rights.

Small symptoms of these tensions are already appearing in the legal and economic spheres. There is continual difficulty in defining intellectual property; the US courts and Congress have come up with some strange definitions. Copyright law has been extended to software, although the analogy with books is hardly compelling. There are emerging obstacles with mobility of technical personnel; employers are trying to put obstacles in the way of future employment which would in any way use skills and knowledge acquired in their employ.

These are still minor matters, but I would surmise that we are just beginning to face the contradictions between the system of private property and of information acquisition and dissemination.

On any reckoning, the Australian government’s militarized ‘border protection’ regime has now endured for over a decade. Initially viewed by journalists and decried by left-liberal critics as a mere electoral manoeuvre, to be extended or retracted according to the public mood or change of government, it has instead hardened into a permanent standing feature.

It has resisted disbanding or reform, despite widespread opposition and notable failure to achieve the ends publicly adduced for it. Maritime patrols have multiplied and detention camps become encrusted. How to explain their emergence and survival?

Most discussions neglect a crucial aspect. ‘Border protection’ is made possible, and appeals to Canberra, thanks to the recent spread of state jurisdiction over parts of international waters.

The latter development, which has allowed ‘privatization of the oceans’ and extension of ‘national security’ bailiwicks, was described in the previous post.

How exactly have legal arcana about fisheries management, by swelling Canberra’s maritime jurisdiction, led to the Australian government’s ‘new regime’?

Since the 1970s, the acquisition by states of limited jurisdiction and exclusive economic rights over adjacent coastal waters and extended continental shelves has meant growth in that portion of the world’s territory in which states can (practically speaking) restrict the movement of people.

Non-nationals have kept legal rights to innocent passage through these (international) waters, and vessels there remain under flag-state law.

But coastal states have gained implicit authority to regulate almost every other activity — including transit with the aim of arriving in a country to seek refuge there. (The explicit authority to prevent and punish infringement of domestic immigration laws begins outside a state’s territorial sea, in its contiguous zone.)
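The layered jurisdiction just described follows the conventional UNCLOS breadths. As a rough sketch (the 12, 24 and 200 nautical-mile figures are the standard treaty limits measured from a state’s baselines; actual boundaries are drawn from negotiated baselines and bilateral delimitation, so this toy classifier is illustrative only, not a legal tool):

```python
def maritime_zone(distance_nm: float) -> str:
    """Classify a point at sea by its distance (nautical miles) from
    the coastal baseline, using the standard UNCLOS breadths."""
    if distance_nm < 0:
        raise ValueError("distance must be non-negative")
    if distance_nm <= 12:
        return "territorial sea"   # full coastal-state sovereignty
    if distance_nm <= 24:
        return "contiguous zone"   # immigration/customs enforcement begins here
    if distance_nm <= 200:
        return "EEZ"               # exclusive economic rights; navigation remains free
    return "high seas"

print(maritime_zone(15))  # → contiguous zone
```

The point of the layering is visible in the thresholds: a state’s explicit immigration-enforcement power starts only at the contiguous-zone band, while the far larger EEZ confers economic rights over which ‘national security’ claims have subsequently been draped.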

This has implied a paring back of the right to seek asylum from persecution.

This right had from the beginning been heavily circumscribed and sparingly awarded, and was trumped whenever it was held to conflict with any other prerogative. The provisions of refugee law, according to one Australian High Court judge, do not impose any ‘limitation upon the absolute right of member States to regulate immigration by conferring privileges upon individuals… [No] individual, including those seeking asylum, may assert a right to enter the territory of a State of which that individual is not a national.’

‘Border protection’ policies have thus been a predictable result of circumstances in which the free movement of people conflicts with the sovereign right of states to determine who may enter and remain within their territorial borders — and in which state jurisdiction has been extended into places where it didn’t previously apply.

Creation of exclusive economic zones (EEZs) has endowed states with maritime interests of high strategic worth. These naturally have become matters of ‘national security’, to be preserved if necessary by the armed forces.

Within their respective EEZs, states have been obliged to place their coercive instruments at the service of locally owned firms, to pursue and protect these firms’ property claims in assets (fisheries, offshore oil and gas reserves, elevated platforms, drilling rigs, etc.) against interference, encroachment, seizure, expropriation or unilateral transfer of ownership.

No dramatic leap of logic or political principle has been involved, therefore, when it has been declared that EEZs, contiguous zones and territorial waters, those beleaguered redoubts of ‘national sovereignty’, should also be protected against ‘unauthorized maritime arrivals’.

Nonetheless the much-stated need to protect Australia’s vulnerable maritime approaches against ‘boat people’ has been a pretext which state leaders have deliberately used to pursue Canberra’s strategic objectives.

I’ll say a little more about the implications of this first point towards the end of this post, but will take the second point first.

In 2004 Canberra announced creation of a Joint Offshore Protection Command (now Border Protection Command) comprising ADF and Customs personnel. It would be responsible for Operation Resolute, a joint patrol of Australia’s EEZ.

Along with these operations — centred on the energy-rich Timor Sea and the northwest coast abutting the Indian Ocean, off the Pilbara and Kimberley — the BPC was to oversee a Maritime Identification Zone.

The latter would cover all vessels passing within 1000 nautical miles of the Australian coastline. It would oblige all vessels seeking to enter Australian ports, as well as those merely transiting the Australian EEZ, to provide Australian authorities with information regarding location, speed, crew, cargo and course of transit.

International law provided no basis for imposing such requirements on foreign-flag vessels. The area involved stretched into the territorial waters of Indonesia, Papua New Guinea, East Timor, New Zealand and New Caledonia.

The strategic considerations underlying such policies are pointed to in the ADF’s 2012 force posture review. Canberra’s military planners note that, amid shifts in the ‘Asia-Pacific strategic balance and great power competition’, including Washington’s regional ‘pivot’, Australian forces must be prepared to take part in ‘coalition operations in the wider Asia-Pacific.’

They note that ‘securing sea lines of communication and energy supplies will be a strategic driver for both competition and cooperation in the Indian Ocean region to 2030, and Australia’s defence posture will need to place greater emphasis on the Indian Ocean, as indicated in the 2009 Defence White Paper.’

Defence Minister Stephen Smith spoke of developing a ‘force posture that can better support operations in our northern and western approaches, as well as operations with our partners in the wider Asia Pacific region and the Indian Ocean Rim.’

And what might such joint operations be?

In 2010, military strategists from the US Center for Strategic and Budgetary Assessments presented Pentagon planners with a ‘candidate’ air-sea battle campaign for use in ‘potential conflicts involving China that could arise in the Western Pacific.’

In the envisioned theatre-wide combat, US naval forces would focus on ‘high-priority’ anti-submarine, anti-surface, anti-missile warfare and area denial in the East China and South China seas.

Washington would depend on allies (with Japanese and Australian forces foremost) to engage in ‘distant blockade’ and interdiction against China-bound seaborne trade:

In the event of a protracted conflict, choking off Chinese seaborne commerce to the maximum extent possible would likely be preferred to conducting large-scale operations in China itself.

US and allied forces ‘could exploit the Western Pacific’s geography, which effectively channelizes Chinese merchant traffic’:

Traffic bound for China would be intercepted as it tried to enter the southern portions of the South China Sea, i.e., beyond range of most PLA A2/AD systems, from the Malacca, Singapore, or major Indonesia straits…

Australian and other allied forces would thus have three key tasks:

Securing “rear areas” by neutralizing any PLA units forward-deployed to such areas;

Establishing a “distant blockade” to interrupt Chinese seaborne commerce; and

Cutting off or seizing Chinese offshore energy infrastructure.

Australian equipment and personnel would be useful for such maritime interception operations ‘since they generally would not involve major combat, [these operations could be conducted by] allied aircraft and ships too vulnerable for employment against the PLA’s A2/AD battle network… These forces would patrol key chokepoints in Southeast Asia as the central element in a distant blockade’:

Over the past several years, China has helped develop port facilities in places like Gwadar (Pakistan), Chittagong (Bangladesh), and Sittwe (Burma) that could be used for military purposes. It recently deployed naval forces off Somalia in conjunction with anti-piracy operations for the first time, and PLA officials have floated trial balloons about acquiring access to forward bases. It continues to wage vigorous “dollar diplomacy” with various statelets in Oceania that could eventually translate into access to facilities for military purposes. In short, China appears to be developing options for creating a network of overseas military bases stretching from Africa to Oceania. Such presence would be consistent with the actions of many other rising powers throughout history; however, it could have serious implications for the military balance and consequently for US security and the security of its allies.

Preserving a stable military balance under these conditions would necessarily require the United States and its allies to maintain the capability to neutralize PLA bases outside the Western Pacific. This would involve removing the threat of diversionary PLA operations.

Such peripheral operations could take some time to complete, given the large distances between theaters of operation. Still, the United States and its allies would enjoy two important advantages. First, assuming the US fleet controls the seas, allied forces could take the lead in many of these peripheral operations, with US forces in support. For example, Australia is the most powerful state near Oceania, and has highly capable military forces that could conduct operations to neutralize any small PLA forces in the region.

Strategists such as Ross Babbage (in 1988) have noted the convenient placement, for this purpose, of Australia’s Indian Ocean External Territories:

Christmas and the Cocos Islands could serve as convenient forward refuelling and staging points for aircraft and ships in the north-western approaches… [Access] to these territories would also extend Australia’s reach into the surrounding region for surveillance, air defence and maritime and ground strike operations. The islands could, in effect, serve as unsinkable aircraft carriers and resupply ships.

For public consumption, politicians cite the geographic location of Australian offshore oil and gas reserves and the proximity of ‘failed states’.

Refugee boat arrivals to the Cocos Islands, Ashmore and Cartier Islands and Christmas Island also provide a useful pretext for militarizing the portions of the Indian Ocean, Timor Sea, Arafura Sea and Coral Sea that fall within the Australian EEZ.

The transit of ‘boat people’ has granted Australian authorities a convenient and plausible reason to undertake patrols and inspections, place sensors, conduct surveillance and reconnaissance, engage in interception and forced boarding, detain crews and seize vessels in these areas.

Of course, Australia’s state leadership does not spell out publicly, before a mass audience, its strategic goals and its tactics for meeting them.

Nonetheless it sometimes, for various reasons, finds it necessary or expedient to allow certain matters to appear, through reliable media conduits, ‘in front of the children’, if only to rouse electorates in their support.

One of the basic tasks of electoral politics (and its satellites in the media and academic worlds) is to mobilize and harness a mass constituency behind narrow elite objectives. In the present context, stoking of anti-refugee attitudes, among its other benefits, allows such a happy convergence of popular feeling with ruling-class aims.

Left-liberal critics of ‘border protection’ policies attribute their introduction to ‘perennial’ Australian popular chauvinism and anti-immigrant racism. In reality, public attitudes on such matters have no existence outside of their shaping by professional opinion makers, and exercise no independent influence on the initiation of state policy.

Thus the respectably ‘progressive’ concern for threatened whales and endangered southern bluefin tuna may help satisfy Canberra’s strategic purposes, in another region mentioned in the ADF’s recent force-posture review:

Increased pressure on resources may see interest in engagement in the Antarctic continent… Increased resources for relevant agencies, not just Defence, will be necessary to strengthen Australia’s presence in Antarctica and the Southern Ocean in the face of likely future challenges.

Or consider Australia’s 1999 military intervention in Timor-Leste, which various activist groups conceived as supporting local ‘self-determination’, and thus worthy of salute. This operation (repeated in 2006) secured maritime control over the deepwater Ombai-Wetar Straits, a vital avenue off the northern Timor coast for US submarines passing between the West Pacific (and East Asia) and the Indian Ocean.

The East Timor matter illustrates what concerns lie behind Canberra’s attitude to maritime law and seaborne traffic.

In 1973 the Third UN Conference on the Law of the Sea convened in New York, with US delegates holding the following strategic priorities:

Because of dependence on oil and other resources, and the need of the military to pass through and over straits and in zones of economic jurisdiction, one of the primary security objectives of the United States may become the achievement of working relationships with coastal developing states.

The U.S. Government maintains that the invulnerability of its nuclear missile submarines depends on their ability to pass through international straits submerged and unannounced. International agreement on a 12-mile territorial sea would place dozens of international straits under the “innocent passage” regime of the territorial sea unless the demands of maritime states for unimpeded passage are agreed upon. (The legal regime of “innocent passage” permits transit by all ships except those which threaten the peace, good order or security of the coastal state. The lack of a more precise definition has left coastal states in a position to determine for themselves what is or is not “innocent passage.”)

Five international straits have been identified as essential for passage by U.S. missile submarines: Gibraltar, Malacca, Lombok, Sunda and Ombai-Wetar. Two of these are too shallow for underwater passage, the other three are controlled by states with which the United States maintains good relations and working modus vivendi, and which have and probably will continue to permit passage for submerged U.S. submarines.

[…]

The navy has been concerned that the breadth of the continental shelf under national jurisdiction might limit the freedom of the United States to place listening devices off the shores of foreign countries.

[…]

In addition to the questions of transit through straits and submarine tracking, a third strategic concern is that zones of extended coastal state jurisdiction will curtail conventional naval operations.

It was declared ‘essential’ for the passage of US ballistic missile submarines between the Western Pacific (and Northeast Asia) and Indian Ocean that these straits be ‘controlled by states with which the United States maintains good relations and working modus vivendi, and which have and probably will continue to permit passage for submerged U.S. submarines’:

The two Indonesian straits, Lombok and Ombai-Wetar, might be closed to unannounced underwater passage of U.S. SSBNs in any case because according to Indonesia’s interpretation of the archipelago principle of enclosed waters, they are considered internal rather than international waters.

On the other hand, the United States seems to have a working arrangement with Indonesia for passage of SSBNs through its straits. Though the Indonesian government has argued that the archipelago principle does not infringe on innocent passage, it requires prior notification of transit by foreign warships and has raised questions about the innocence of supertanker passage because of the danger of pollution.

In spite of Indonesian jurisdictional claims, the United States maintains that the Indonesian straits are international. According to press accounts and Indonesian sources, however, the United States routinely provides prior notification of transit by surface ships and presumably (if only as a practical convenience) relies on some special bilateral navy-to-navy arrangement for submerged passage, consistent with the requirements of concealing the details of SSBN passage from foreign intelligence.

Although this modus vivendi is rather contingent, it satisfies America’s needs as long as an Indonesian government as friendly as that of Suharto is in power.

Such concerns were impressed upon the Australian prime minister, visiting Washington in 1976 after Jakarta had annexed East Timor. The Fraser government’s negotiating position at UNCLOS dutifully aimed to ‘bridge the differences’ between the United States and smaller littoral and archipelagic states.

With that, I’ll finish this post by returning to the first point mentioned at its beginning.

The gradual postwar development of the international law of the sea (culminating in the 1982 UNCLOS), under which states have extended jurisdiction into their adjacent coastal waters, took place during the same decades as codification of international refugee law. As long ago as 1930, at the League of Nations Conference for the Codification of International Law held in The Hague, delegates addressed the issues of territorial seas and nationality laws.

This historical coincidence does not imply complementarity. People’s right to free movement conflicts with the territorial sovereignty of states, and with the latter’s jurisdiction over borders and immigration. The first right retreats when the other privilege is advanced, just as personal (citizen) rights and property rights generally move inversely.

In recent decades, the acknowledged right of individuals to seek asylum from persecution has been limited and rolled back by governing elites worldwide. Political leaders have each asserted their state’s pre-eminent authority to control who may enter and remain within the territories over which it holds jurisdiction. (From this follows matters such as the incarceration of asylum seekers during the ‘process’ of status determination.)

The notorious 2001 assertion by the Australian Prime Minister — ‘We will decide who comes to this country and the circumstances in which they come’ — expressed both a positive fact and a normative position: the state has sovereign authority over its territorial borders, and can set limits to migration flows.

Though a chorus of left-liberal groups and bien-pensant commentators shrieked at Howard’s words, none of them ever voiced a fundamental objection to the notion that a state has the sovereign right to determine who can enter and remain within its territory, and can set restrictions on numbers and categories of immigrants.

Seeking to make the best of this principle, rather than rejecting it, these ‘progressive’ voices merely plead that the state’s decisions (on refugees and immigration) should be made in more ‘humane’ fashion. (Thus the Australian Greens have repeatedly insisted that an increase in Australia’s ‘humanitarian program’ for refugees and family reunions should be balanced by a reduction in the intake of ‘skilled’ migrants, whom they have described as ‘queue jumpers’.)

Similarly, as described in the previous post, the Greens and associated conservationist groups uphold Canberra’s contested jurisdiction over portions of international waters (e.g. Australia’s Antarctic EEZ). They merely suggest that this control could be exercised better, e.g. total allowable catch of various fish species should be set at a ‘sustainable’ level.

On the other hand, the principled position — for socialists and for even minimally ‘left-wing’ people — is that the world’s oceans and their resources are not susceptible of appropriation by any state or private party — and that territorial states are not entitled, by virtue of their jurisdictional claims, to restrict the free movement of people (whether that movement involves flight from imperial violence, national dismemberment and state breakdown, or the pursuit, in a world of wage differentials between regions of varying levels of development, of a decent life in a country with jobs, roads, schools and sanitation).

England’s Game Laws of the late seventeenth century prohibited ‘inferior tradesmen, apprentices and other dissolute persons’ from ‘neglecting their trades and employments’ and presuming to ‘hunt, hawk, fish or fowl’.

The jurist Blackstone noted that one of the aims publicly adduced for the laws was conservation, or ‘preservation of the several species of these animals, which would soon be extirpated by a general liberty.’ But the statutes’ true purpose, he went on to say, was to prevent the landless lower orders from providing for themselves, independently of the market, by ‘pursuing, hunting and destroying’ game. These proscribed activities, where tolerated, had disruptive consequences:

[In] low and indigent persons it promotes idleness, and takes them away from their proper employments and callings; which is an offense against the public police and economy of the commonwealth.

Similar laws against gathering wood or picking berries deprived rural populations (long since ousted from family plots and open fields) of access to the remaining non-market sources of subsistence. Such measures thereby compelled those unendowed with assets, on pain of starvation, to hire out their capacity to work in exchange for a wage paid by property owners, or otherwise to rely on subvention or charity. The Black Act of the eighteenth century saw poachers executed for ‘doing injuries and violence’ to certain types of property, e.g. hunting deer or hares or extracting resources from trees, warrens or fish ponds.

Thus the king’s dominion over his imperilled deer and forests once helped to establish and solidify incipient capitalist property relations.

More recently, a similar purpose has been served by the declaration of marine sanctuaries for whales and other endangered creatures – following the creation of Exclusive Economic Zones, supposedly to prevent the ‘tragedy of the commons’ from depleting the scarce resources contained within. These have allowed enclosure of what previously was res nullius: international waters adjacent to a coastline but beyond any state’s territorial sea.

Just as occurred earlier with the land’s bounty, the sovereign’s claim over marine resources was a necessary first step, through which it became possible for a few private agents or entities to appropriate the commons as their exclusive property (while most others were thereby deprived of access or use). The expansion of national jurisdiction has also served the strategic goals of naval powers.

Since 1945 the high seas have gradually shrunk for most purposes besides navigation, with exclusive rights assigned and national bailiwicks extended over formerly open-access waters.

One of the Truman Proclamations of 1945 asserted ‘the long range world-wide need for new sources of petroleum and other minerals’, and ‘in the interest of their conservation’ declared ‘the natural resources of the subsoil and sea bed of the continental shelf beneath the high seas but contiguous to the coasts of the United States as appertaining to the United States, subject to its jurisdiction and control.’ (Offshore oil production in the Gulf of Mexico dates from 1947.)

Meanwhile the ‘urgent need to protect coastal fishery resources from destructive exploitation’ and depletion was the pretext used to ‘establish conservation zones in those areas of the high seas contiguous to the coasts of the United States wherein fishing activities have been or in the future may be developed and maintained’. In these zones ‘fishing activities shall be subject to the regulation and control of the United States’, while the ‘character as high seas of the areas in which such conservation zones are established’ was preserved for navigation purposes.

Rögnvaldur Hannesson’s The Privatization of the Oceans shows how the assertion by states of territorial rights was needed before private agents could acquire exclusive property rights:

[The] oceans are, or were, the last great commons. No single state used to have jurisdiction at sea outside a narrow belt, which as late as the middle of the twentieth century was only three nautical miles wide. Without a wider national jurisdiction at sea, it is hard to imagine how an economic institution such as property rights could have developed for any but the most stationary fish stocks. People who still have not reached the age of retirement have in their lifetime witnessed a revolution in the international law of the sea, by which states have gained control over fish resources off their shores. In the wake of this we have seen exclusive individual rights of access to fish resources develop.

[…]

[The] fisheries are but the last of the common property resources to which private property rights have developed; recorded history tells of enclosures and clearances of common land…

[…]

The enclosure of the world’s fish resources began as an attempt by states with rich fisheries off their shores to extend their jurisdiction over these areas and to clear away foreign fishing fleets. This development was enormously stimulated by the claims to exclusive national rights to offshore oil and ended in the establishment of the so-called exclusive economic zone [EEZ]. Without this jurisdictional framework it would not be possible to limit fishing except by agreement among an indefinite number of states, an outcome that is none too likely.

Earlier this year Christopher Costello, Leah Gerber and Steve Gaines proposed in a Nature article that the creation of tradeable permits presents a ‘market approach to saving the whales.’ Establishing property rights would allow sustainable harvesting of whales just as transferable quotas were supposed to do for fisheries, and in the same way that GHG-emission permits were said to make pollution abatement possible.

Anti-whaling and conservationist groups would presumably have recoiled in horror from this policy suggestion. But these groups themselves are helping to build the mare clausum in which such property rights may be established (and naval pre-eminence pursued).

In recent years, for example, the Australian Greens and the Humane Society have cheered a federal court ruling that the waters adjacent to Australia’s (internationally disputed) Antarctic territory constitute part of Canberra’s (unilaterally declared and widely contested) exclusive economic zone. Within these waters, according to the court’s finding, may be applied the provisions of the Commonwealth’s Environment Protection and Biodiversity Conservation Act, enacted in 1999 under the Howard government. This decision meant that Canberra could legitimately enforce its domestic laws against non-nationals to whom flag-state jurisdiction had previously applied.

The principle that the world’s oceans are not susceptible of appropriation by any state or private party has been voided by technological advance and junked by all notable political actors, from governing elites to environmental activists. This involves several matters of deep practical significance.

In The Privatization of the Oceans, Hannesson presents the excision of EEZs from the high seas as a matter of routine upward progress: an enlightened dissolution of the commons, of a type familiar from recent history, allowing the venturesome lurch of capitalist property forms into yet another new frontier. The division of the high seas between national jurisdictions, on this argument, achieves something like the erection of barbed-wire fences on nineteenth-century pastures and prairies. Delineating ‘well-defined’ property rights to the world’s oceans is just the latest application of capitalism’s universally efficient solution to the problems of scarcity and resource depletion.

In reality, the carving out of EEZs from international waters, by sequestering raw materials and partitioning markets between territorial states, is something of a regression to pre-1945 arrangements involving fragmented zones of nationally-based access and operations.

It’s well-known that the international legal principle of freedom of the high seas was advocated by Grotius just as the Dutch East India Company (along with English and French merchants and navies) sought to penetrate marine routes monopolized by Portuguese and Spanish traders. And in 1918 the second of Woodrow Wilson’s Fourteen Points was the demand for ‘absolute freedom’ of navigation outside territorial waters – something immediately rejected by the other great naval powers, eager to maintain their colonial privileges. The 1930 League of Nations Conference for the Codification of International Law, held in The Hague just as the world economy began fracturing into autarkic blocs, granted states legal authority over territorial seas, subtracted from the high seas: ‘A State possesses sovereignty over a belt of sea around its coasts; this belt constitutes its territorial waters.’

The wartime Atlantic Charter and the postwar GATT allowed the US, from 1945, to break down the old international system of exclusive economic zones. The latter had of course been established during the high-colonial era, when the ruling great powers granted their firms sole rights of investment in colonized territory, with fractured markets protected against competitors by customs barriers. Such restrictive arrangements, which prevented ‘access on equal terms’ to the ‘trade and materials of the world’, were later forbidden by multilateral agreements such as those administered by the WTO.

Yet the terms of the UN Convention on the Law of the Sea (to which most states had granted ‘customary’ recognition if not ratification by the 1990s) re-created just such discriminatory barriers. Under its provisions, coastal states are held to possess, within their EEZs, ‘sovereign rights for the purpose of exploring and exploiting, conserving and managing the natural resources, whether living or non-living, of the waters superjacent to the seabed and of the seabed and its subsoil, and with regard to other activities for the economic exploitation and exploration of the zone, such as the production of energy from the water, currents and winds…’

With respect to fishing:

The coastal State shall determine its capacity to harvest the living resources of the exclusive economic zone. Where the coastal State does not have the capacity to harvest the entire allowable catch, it shall, through agreements or other arrangements… give other States access to the surplus of the allowable catch.

The coastal State exercises over the continental shelf sovereign rights for the purpose of exploring it and exploiting its natural resources… The rights referred to in paragraph 1 of this article are exclusive in the sense that if the coastal State does not explore the continental shelf or exploit its natural resources, no one may undertake these activities, or make a claim to the continental shelf, without the express consent of the coastal State.

The previous absence of clear demarcation had led to international skirmishes like the Cod Wars. While attribution of rights and jurisdiction through EEZs now deters similar low-level conflicts, it also elevates contests into a winner-takes-all matter. In a world divided into territorial states – each with the power to claim as revenue a portion of the surplus product extracted within its borders by privately-owned production units selling goods and services for profit – such disputes become a cause for strategic conflicts that inevitably are militarized.

This is true, above all, for marine areas containing hydrocarbon reserves, once offshore and later deepwater exploration and production became technically feasible and profitable. Firstly, the capital tied up in the fixed investments required for oil and gas production (especially offshore) is enormous, with correspondingly long turnover times. The (political, diplomatic, price, etc.) stability required to make such undertakings economically feasible demands a nexus of oil industry and state leadership. Secondly, and more crucially, the indispensable strategic and military worth of oil (e.g. the possibility of wartime interdiction) makes maritime zones containing energy reserves into grand strategic prizes. They are worth the price of diplomatic incident and military standoff to attain (though again, mostly for public consumption, such conflicts are usually presented as arising from disputes over ‘sustainable harvesting’ of fishing stocks).

Matters concerning oil supply bring into relief the impossibility of a peaceful alliance of global states and propertied classes for the joint exploitation of the world. They also make plain the purpose and consequences of dividing the oceans.

For example, in the Sea of Okhotsk there is a small enclave of the high seas (the so-called ‘peanut hole’) surrounded on all sides by waters falling within Russia’s EEZ. During the 1990s Moscow proclaimed a moratorium on all fishing within the enclave (which was mostly conducted by Japanese-, Chinese- and South Korean-owned vessels). It then enforced the ban by staging military manoeuvres and surveillance, effectively excluding fishing fleets. Around the same time the Russian government signed a production-sharing agreement with various oil majors to allow offshore oil and LNG extraction; production off Sakhalin began in 1999. In 2006 Shell was forced to sell its stake in the consortium to Gazprom, after Moscow threatened to revoke operating permits, using environmental violations as a pretext.

Meanwhile the publicly stated purpose of US maritime strategy is to employ military assets to ‘deter the ambitions’ of regional competitors:

Today, the United States and its partners find themselves competing for global influence in an era in which they are unlikely to be fully at war or fully at peace. Our challenge is to apply seapower in a manner that protects U.S. vital interests…

Expansion of the global system has increased the prosperity of many nations. Yet their continued growth may create increasing competition for resources and capital with other economic powers, transnational corporations and international organizations. Heightened popular expectations and increased competition for resources, coupled with scarcity, may encourage nations to exert wider claims of sovereignty over greater expanses of ocean, waterways, and natural resources—potentially resulting in conflict. Technology is rapidly expanding marine activities such as energy development, resource extraction, and other commercial activity in and under the oceans. Climate change is gradually opening up the waters of the Arctic, not only to new resource development, but also to new shipping routes that may reshape the global transport system. While these developments offer opportunities for growth, they are potential sources of competition and conflict for access and natural resources.

[…]

Credible combat power will be continuously postured in the Western Pacific and the Arabian Gulf/Indian Ocean to protect our vital interests, assure our friends and allies of our continuing commitment to regional security, and deter and dissuade potential adversaries and peer competitors.

After 1945 Washington became the dominant naval power in the Pacific Ocean. And now, invoking Wilson and Grotius, US diplomats such as Hillary Clinton routinely assert a ‘national interest’ in defending ‘freedom of the seas’ and unimpeded navigation in the region, especially in the South China Sea, Yellow Sea and Sea of Japan. Since 1979 the US Navy has conducted what it calls a Freedom of Navigation program. This involves practical demonstrations of might, whereby US military vessels deliberately detour into waters over which coastal states (such as China and Iran) assert a ‘security jurisdiction’ (i.e. in which they request prior notification of transit, and authorization for exercises, by military vessels). Washington asserts the right to conduct military surveys, manoeuvres and reconnaissance within the Chinese EEZ. Beijing rationally regards intelligence gathering within its coastal waters as preparation for armed conflict, and declares itself authorized to prohibit such activity as prejudicial to its security.

These practices betray the reality obscured beneath the rhetorical ploy. Washington – with the aid of its chief military allies in the Asia-Pacific region, Canberra and Tokyo – now plays the old role of the established European powers in the Atlantic, seeking through rampant bellicosity to maintain naval pre-eminence against a rising commercial and strategic competitor. Its partners seek to uphold Washington’s global reach, and thereby their own interests, against the expansion of Beijing’s regional naval prerogatives.

A document prepared for the Royal Australian Navy’s maritime research body, the Sea Power Centre, presents a public version of Canberra’s objectives:

There are a number of ways in which an increasingly restrictive navigation regime internationally might affect Australian interests. First, ADF ships, submarines and aircraft might find their access to certain areas of the ocean and super-adjacent airspace becoming restricted or subject to unacceptable limitations. Prior entry notification, navigation on the surface for submarines, and the restriction of international straits and ASL are not currently permissible at international law, and would limit the ADF’s operational effectiveness throughout the region. It could also impede the transit of allied navies in times of heightened tension or armed conflict, also hampering the efforts of coalitions of which Australia is a part.

In fact, Canberra itself violates UNCLOS provisions on unimpeded passage through international shipping channels, having imposed a system of compulsory pilotage for movement through the Torres Strait.

Strategic objectives similar to those of the US governing elite were at work when, in 2004, Canberra announced the creation of a Joint Offshore Protection Command (now Border Protection Command) comprising ADF and Customs personnel. Along with patrols centred on the energy-rich Timor Sea and the northwest coast abutting the Indian Ocean, the BPC was to oversee a Maritime Identification Zone, covering all vessels passing within 1000 nautical miles of the Australian coastline. This would oblige all vessels seeking to enter Australian ports, as well as those merely having strayed inside the Australian EEZ, to provide Australian authorities with information regarding location, speed, crew, cargo and course of transit. International law provided no basis for imposing such requirements on foreign-flag vessels. The area involved stretched into the territorial waters of Indonesia, Papua New Guinea, East Timor, New Zealand and New Caledonia.

Meanwhile Anthony Bergin and Sam Bateman from the Australian Strategic Policy Institute have described some of the strategic issues underlying Canberra’s claims to Antarctic territory, including its adjacent waters and extended continental shelf.

In such circumstances, by demanding the expansion of Canberra’s maritime jurisdiction beyond its territorial waters, and by providing pretexts under which this bailiwick might be enforced by military patrol boats, the Australian Greens (and environmental activists) present the national state as having a ‘progressive’ mission in world affairs, as being (potentially) an instrument of the angels. This fanciful vision is possible because, in political disputes over fisheries management in the Australian EEZ, the Greens obscure the underlying questions of property relations and imperial rivalry that dwell beneath superficial disputes over morality. They thereby contribute once again to endowing Canberra’s regional ambitions, and its all-but-certain participation in future military conflict between nuclear powers, with a degree of popular legitimacy and a ‘progressive’ sheen.