[via Time] Michael Kinsley has a very fine op-ed on stem cell research in Time Magazine today. His writing here is, as it often is, especially pithy.

"Stamping some issue as controversial can be a substitute for thinking it through. In the case of embryonic-stem-cell research, thinking it through does not require further study or commissions of experts. This is one you can feel free to try at home. In fact, thinking it through is a moral obligation, especially if you are on the side of the argument that wants to stop or slow this research.

"An embryo used in stem-cell research (and fertility treatments) is three to five days past conception. It consists of a few dozen cells that together are too small to be seen without a microscope. It has no consciousness, no self-awareness, no ability to feel love or pain. The smallest insect is far more human in every respect except potential.

"[F]or abortion opponents whose views fall anywhere short of fanatical absolutism, the answer ought to be easy as well: full speed ahead. To the nonabsolutist, it ought to matter a lot that restricting stem-cell research doesn't actually spare the lives of any embryos. That means the lives of real people desperately awaiting the fruits of stem-cell research are being weighed against a purely symbolic message."

Sunday, May 30, 2004

[via Small Times] Howard Lovy has written an excellent and provocative little piece entitled, "How to Fight Misinformation in Two Easy Words: Honesty, Imagination" in which he says a number of things that are music to my ears. Among them, the phrase from which I took the title of this entry: "[L]ike a grad student emerging from the lab after years spent gazing into a scanning electron microscope, the science of nanotech needs to develop some social skills." Priceless!

Lovy is talking about the frustration of nanotechnology researchers, advocates, and enthusiasts over disproportionate furors about buckyball toxicity, the goo bestiary, and other nano-bugaboos. But, as he points out, "[n]ew technology is always misunderstood – from man-made fire to man-made foods. I take the perhaps-naive view that not every American believes everything he or she reads. In our news-hungry society, those who are interested in learning about nanotechnology will take their information from a number of sources."

The frustrations with the media are of course understandable. Dana Blankenhorn, commenting on Lovy's article a couple of days ago on his own Corante blog "Moore's Lore," reminds us that while for scientists even strong disagreement is a normal and useful part of the peer review process, for politicians a comparable level of disagreement can signal a dangerous divisiveness that may represent a real risk to their careers.

The media play a key role in exacerbating this difference between the scientific politics and the conventional partisan politics of accelerating technological development because, at least for now, as Blankenhorn suggests, “Fear is the easy headline right now.”

That is to say, since the impact of rapidly emerging, radically powerful technologies is imperfectly understood, alarmism is a message that circulates very easily. But the impact of such alarmism on the processes of developmental policy-deliberation is likely to be distortive in ways that exacerbate many of the problems at hand.

No wonder researchers go into panic mode whenever bad information threatens to brew up another costly controversy without proper cause! But Lovy wonders ultimately "[w]hy should government be an 'advocate' of nanotechnology in the first place? No democratic government is capable of distracting the public from nanotech’s potential downsides – real or imagined – for very long. A nanotech-enabled society is inevitable. The accomplishments so far of nanoscience and nanobusiness cannot be unlearned. To aggressively do battle with public perception is to invite counterattack in the form of louder protests."

As it happens, I think arguments from "inevitability" are usually overstated and misused in technology-criticism, but here the point is general and very likely true. Of course, what is not inevitable is just which forms of enabling nanotechnologies will arrive when, and with just what social consequences. But I think Lovy's sensible proposal that researchers and advocates calmly admit to threats and uncertainties, come what may, is exactly the right way to assure the best outcomes anyway.

His second point is also very appealing. He warns against elitist temptations to "blame the press" for the awkward public learning curve where nanotechnology is concerned. Rather than bemoan the "ignorance of the public," rather than bewail as "inevitably distortive" the mechanisms of the popular press, he insists that nanotech advocacy actively embrace the public culture of technology criticism, with all its confusions, mess, enthusiasm, and fear.

"Nanotechnology," he writes, "independent of its development as a science, is spreading as a cultural idea and icon. This separate branch of nanotech – a little bit of fact and a whole lot of imagination – can be turned into a powerful force."

Saturday, May 29, 2004

An article entitled "The New Perfectionism" by Austin Dacey, published recently in Free Inquiry magazine, and available in the online version of the issue, is generating a lot of comment among the various "transhumanist"-types and their fellow travellers (of whom, if googling my name is any kind of accurate indication of such things, I am myself one).

I found the article very thoughtful and was largely sympathetic to my sense of Dacey's perspective, even if occasionally some of his formulations perplexed me. Here are some comments about the article, and about the "transhumanism" he is discussing.

He opens his article with a question:

"Suppose you were offered a photographic memory, perfect pitch,
ultraviolet-spectrum vision, heightened disease resistance, customized
skin and eye color, and a one-thousand-year life-expectancy. Would you
accept? Now suppose you were told that by doing so you would cease to
be human. Would this make you less willing to accept? If you're like
me, you'll answer Yes to the first question and No to the second."

The thing I don't understand about this way of framing the issue of technological development and its impact on humanity is, just why should we accept the second question as relevant to the first one? Who is it exactly who would "tell us" just when our many separate choices to enhance or modify our memories, senses, disease resistance, gross morphology, or what have you must then be tantamount to "ceasing to be human"?

If these changes threaten to terminate our humanity, how can we be sure we didn't lose our humanity with contact lenses or pacemakers or penicillin or the invention of writing? This is something only we can tell ourselves, surely.

"I could stand the improvements, and if they make me more than human, so
what? If you answer Yes to the first question but say that leaving
humanness behind would actually make you more willing to accept, you
may be a transhumanist, the new breed of perfectionists who aim at
collective self-improvement through direct modification of human
nature."

Again, Dacey's formulation puzzles me a bit. Is "different from" always inevitably "more than"? Why should we figure a project of genetic, prosthetic, or pharmacological therapy, modification, or enhancement, as necessarily a matter of becoming "more than human," or "less than human" (depending on how "leaving our humanity behind" is supposed to be read)?

It seems to me that "humanness" is an open-ended concept that we are collaborating on together, not something that many people would want to "leave behind" -- even those who seek to modify their capacities or morphology.

And what is it about the idea of using technology to engage in private practices of modification and re-invention that would make one think of "perfectionism," rather than simply self-creation and pluralism?

There seems to me no way that a rhetoric of "leaving behind humanity" would not seem to denigrate and so threaten the humans one would presumably "leave behind." Further, it is too easy for any doctrine of "perfectionism" to turn into and be read as a doctrine of prescriptionism and chauvinism.

I agree with Dacey that these versions of "transhumanist" sensibility are quite troubling. I wonder, though, whether there aren't after all far more appealing ways to think about what transhuman-type technology advocates and critics are up to, ways that might be foreclosed from view by his more suspicious formulations.

Definitely I disagree with him when he suggests there is something inherently more radical in the kinds of tools that interest "transhumanist"-types than in the tools that have long been available to educators and disciplinarians for re-shaping the narratives of human selves.

But when Dacey turns his attention to the more ethnographic description of the actual communities of people who are likely to be self-identified "transhumanists," then his critique seems to me often very incisively on-target. He writes:

"One obstacle to discussion is that transhumanism is not just a
philosophy; it is also a grassroots movement."

Self-identified "transhumanists" should pay close attention to this observation. I think it is right, and worse, that to the extent that these "transhumanists" are primarily enthusiasts of and advocates for certain developmental outcomes in emerging technologies, what little they gain hardly compensates for what they lose when they conceive of themselves as an "identity movement" of all things.

I do not agree that there is enough that is shared among people drawn to transhumanist formulations about technology development (either in understanding its broader human significance, or in the various specific projections about technological futures that preoccupy most radical futurists) to coalesce as a coherent as well as a unique "movement."

Identity movements are so twentieth century, anyway, really.

Tell me just what transhumanists imagine they gain by thinking of themselves as a "movement" instead of, say, a network?

I would define "transhumanism" as one among a variety of post-humanist discourses. As a critical sensibility transhumanism is largely a suspicion of the normative and ideological claims that are made in the name of "nature," a suspicion inspired by an awareness of the destabilizing impact of technological development on what are widely taken as natural limits. As a programmatic sensibility, transhumanism is the hope that genetic, prosthetic, and cognitive modification can be paths of human self-creation, and that when it is regulated to ensure a fair distribution of costs, risks, and benefits, technological development is an emancipatory force.

To see transhumanist sensibilities and critical vocabularies incubating conversational and organizational networks seems to me considerably more productive than wasting energy policing conformity among a "membership" to provide some sense of shared identity or "belonging."

Transhumanism as a "movement" is a palpably cultish cul-de-sac. Transhumanism as a constellation of networks in broad affinity with one another can be a significant and useful force for good.

Dacey goes on to drive this point home when he suggests that transhumanism as a "movement"

"gathered force in the last ten years and coalesced around
organizations like the Extropy Institute, the online magazine
BetterHumans, and the World Transhumanist Association, is a motley
crew of serious academics, journalists, and scientists, cyber
self-help gurus, nanotech venture capitalists, polyamorists and
gender-benders, cryonics freaks, and artificial intelligence geeks.
Like other iconoclastic movements, organized transhumanism attracts
its share of sheer goofiness. The co-founder of Extropy Institute, a
Southern California body-builder and Ayn Randian named Max, had his
last name changed from O'Conner to More, because 'I was going to get
better at everything, become smarter, fitter, and healthier.' The
co-mingling of serious theory and policy consideration with a grab bag
of techno-utopian projects makes for easy targets for the biocons,
diverting the debate from core substantive issues."

All this seems to me exactly right. First, the "goofiness factor" (and this is a kindly description) of the Randroid/Libertopian/Apocaloid elements in popular technophile cultures will forever marginalize this brand of "transhumanism" qua organized movement. Second, the concentration on distant quasi-transcendent projections of "superlative state" technology among "movement-transhumanists" over a more pragmatic concentration on proximate developments gives bioconservatives perfect targets to mobilize ignorance and fear in support of luddite bans, precisely the outcomes "transhumanist"-types would presumably most abhor.

The article goes on:

"It is bad philosophy to identify the human essence with the human
genome in its present state. To do so is to buy into the antiquated
notion that a creature's nature is immutable or unchanging."

Again, this seems exactly right -- and it also makes me think Dacey likely sympathizes with at least some more reasonable versions of "transhumanist" technology advocacy and critique.

"The hard task for transhumanists, then, is the one they haven't yet
taken head-on: making a positive and widely appealing moral case for
their particular vision of the excellent person and the good society."

Making any widely appealing moral case of this kind would surely require that "transhumanisms" show themselves to be (1) compatible with indefinitely many particular moral visions, (2) concerned with how communication and collaboration will still be possible in a world of techno-constituted plurality, and (3) attentive to the ways emerging technological developments challenge even those of us who would continue to value what are now normatively "human" limits to understand those limits and their possibilities in new ways.

...about the "search technologies" which, according to the quick self-description offered up by "Ads By Google," presumably ensure that "ads you see [for example, on the banner above] are related to the information you are viewing." I wonder what in my blogroll of socialist-feminist bioethicists, lefty academics, progressive technophiles, ABBs, and radical democrats, not to mention the insistently progressive bent of the action-alerts and topics I post about here, just what on earth it is that continually cues up all the banner ads for the RNC and Cheney-Bush on my blog? (No doubt mentioning the issue will itself just exacerbate whatever algorithmic skewage is producing this PR sewage.) Is it that the default culture of American technophilia really is just so conservative/market fundamentalist in general that tech-talk conjoined to partisan politics simply trips the conservative joy-buzzer automagically?

[Village Voice, found via Weblogsky] An excellent beginning to a promised three-part discussion of "conspiracy theory" by novelist-critic Gary Indiana (who happens, along with Bruce Sterling, to be among my very favorite living writers). Especially nice is his comment on the way corporate media "reflexively dismiss the most obvious or credible explanations for ugly phenomena as the perfervid fantasy of 'conspiracy cranks' — for instance, the idea that successive "preemptive" wars might be launched against demonized enemies in order to award reconstruction contracts to corporations formerly helmed by, say, the vice president of the United States and other exalted government employees, or that the strategic purpose of one such war might be the economic colonization of former Soviet republics rich in oil and mineral resources, and to guarantee a secure pipeline for the exploitation of said resources. Instead, the altruism and democracy-spreading goodness of the American power elite are portrayed as self-evident, taking all other motives off the media table."

[via the Electronic Frontier Foundation] "The PIRATE Act (S.2237) is yet another attempt to make taxpayers fund the misguided war on file sharing, and it's moving fast.

"The bill would allow the government to file civil copyright lawsuits in addition to criminal prosecutions, dramatically lowering the burden of proof and adding to the thousands of suits already filed by record companies. It would also force the American public to pay the legal bills of foreign record companies like Bertelsmann, Vivendi Universal, EMI, and Sony. Meanwhile, not a penny from the lawsuits goes to the artists."

[via The Center for American Progress] Today at 3:30 pm, the White House's senior health policy advisor Doug Badger will answer questions submitted to the White House's web site (http://www.whitehouse.gov/ask/question.html). According to the 11/19/02 edition of National Journal, Badger came to the White House after representing drug industry giant (and major Bush campaign benefactor (more here)) Eli Lilly & Co. as a lobbyist.

Some suggested questions for Mr. Badger:

As one of the architects of the new Medicare law, did your close personal connection to one of America's biggest drug companies help make the final bill a financial boon to drug companies that shafts seniors?

Why did the president not explain his proposed cuts to health care (more here)?
Why does the president promise seniors major savings from Medicare drug cards when he knows there is no guarantee of any savings (more here)?

Why is the president telling people his new health savings account proposal will save them money, when studies show these plans will drive up deductibles for average workers (more here) and could cause more than one million people to lose their existing health coverage (more here)?

Wednesday, May 26, 2004

Marshall Brain writes today over in his Robotic Nation blog: “[C]onventional wisdom says that, as robots take over jobs, the increased productivity should be good for everyone. With robots taking the mundane jobs and increasing productivity, people should make lots more money.”

He proposes, however, that a recent article from USA Today paints a different picture when it points out:

"Since Bush took office, nearly 700,000 manufacturing jobs have disappeared from the region [ie, "the crescent of states stretching across the Great Lakes and down the Mississippi River"]. Many are gone forever, outsourced to places such as China and Mexico where labor costs are lower. Even if the jobs are replaced, the salaries and benefits often aren't. 'The jobs that are replacing them are much lower wage,' says Jeff Collins, a business professor at the University of Arkansas."

“Manufacturing,” continues Brain, “is, right now, the most robotic area of the economy. As the service sector turns robotic, we should expect to see this same downward spiral there as well... [M]ost of the "normal" jobs head toward minimum wage instead of getting better as the robots arrive. Instead of the wealth created by robots spreading out to everyone, it concentrates in the wealthy, since the wealthy own the robots.”

This is an argument Brain has made in several places, among them, notably, an essay entitled Robotic Freedom.

Mundi Collaborator and Friend James Hughes neatly summarizes Brain’s recommendations for coping with these dilemmas in an excellent and sympathetic column of his own, “Getting Paid in Our Jobless Future."

“Brain," writes Hughes, "argues that every American adult should receive an annual income of US$25,000 from the federal government. This would save capitalism by keeping the demand-side strong, eliminate poverty, establish general economic security and give people the leisure to spend long years in school learning to do something that robots can't.” Hughes goes on to expound his own case for a Basic Income Guarantee (BIG) as a way to disseminate rather than concentrate the enormous new wealth created by increasing automation in another column of his, "Embrace the End of Work."

Here is the very clear-eyed and compelling conclusion of Hughes’ argument in that piece:

“In the end, business would rather invest first in people in the developing world and then infotech and robots rather than expensive human workers in the developed world. Because of this, wages in the developed countries will fall in competition with the lowest wage human competitors around the world, and then in competition with the increasingly inexpensive robots and expert systems. Jobs will disappear, wages will fall and we will face three choices: Luddism, barbarism or basic income.

"The Luddites might win a temporary battle here and there, delaying one or the other labor-saving device or innovation. But in the end they will lose, and the technologies will come. Then the question will be what happens to the displaced, and to the economy.

"Without an expanded social wage (benefits and income from the government) in general, and a guaranteed basic income in particular, we face widespread immiseration, economic contraction and polarization between the wealthy, the shrinking working class and the structurally redundant.

"Or we can avoid this bleak future by re-embracing the techno-utopian vision and consciously striving to shrink working life by reducing the work week, mandating paid vacations, raising the minimum wage, improving workplace protections and providing health insurance and a basic income as a right of citizenship. All these policies will make human labor more expensive and investments in automation increasingly attractive. Employment will shrink, social wealth will grow and be shared more equally, and we can start rejoicing instead of despairing about the end of work. As Marshall Brain says, humanity can go on a permanent vacation."

[via Many 2 Many] An interesting aside from a post by Clay Shirky on “Weblogs and Authority” today: “I’ve always thought of the difference between blogrolling someone vs. linking to them in a post as the difference between shouting out to someone on the cover of a rap album vs. actually sampling them.”

[via EFF Deep Links] A group of influential Republican senators has introduced a bill to make permanent the civil liberties-corroding provisions in the USA PATRIOT Act that were sold to the public -- and to Congress -- as temporary measures. These provisions are among the most controversial in the Act, and for good reason: they represent an extraordinary assault upon our most basic rights as U.S. citizens. The fact that they expire is one of the only checks on this assault.

For details, check out the current issue of Secrecy News newsletter and Let the Sun Set on PATRIOT, an EFFector series explaining in plain language what's wrong with the sunsetting provisions and why they should expire.
Later: Timothy Edgar of the ACLU: "Congress had the foresight to make temporary some provisions of the hastily enacted Patriot Act. It is extremely premature to make these provisions permanent when Congress has not conducted thorough oversight on how the Act has been used and what safeguards can be included to protect civil liberties."

[via MoveOn PAC] One of President Bush's most effective tactics has been to portray those of us who oppose his policies as a small minority of the American public. But poll after poll shows that a large and increasing majority of American citizens disagree with his policies and don't like the direction he's leading our country in. Bush's strategy is to keep us from recognizing just how many of us there are.

That's why we're launching a campaign to demonstrate how popular opposition to George Bush really is. To participate, all you need to do is request a free bumper sticker and stick it on your car or in some other highly visible place. We're willing to send one bumper sticker for free to anyone who wants one – no cost whatsoever, no strings attached. We're also offering 10-packs and 500-packs for a small contribution.

[via Democracy for America] In order for your vote to affect the outcome of this election, it must be counted. As November nears we must act now to ensure that our voting systems produce accurate and verifiable results.

Right now, some states are planning to use machines that will not allow voters to verify their choices. This means that any flaws in the machine or software will never be caught -- and no recount will be possible. And the head of the largest e-voting machine company -- who is a major contributor to George Bush and has promised to deliver Ohio to him -- asks that we just trust him.

Today we call on Congress and the states to require any electronic voting machine used in this election to produce a paper trail -- one that allows voters to verify their choices and officials to conduct recounts. Add your name to the call for accountability: http://www.democracyforamerica.com/verify

We will deliver the petition to Congress and the secretaries of state of every state planning to use electronic voting machines.

Please forward this message to everyone you know who wants to see that every vote is counted this year. You can also spread the word by using our grassroots action center, Organize for America, to invite your friends to sign the petition:
http://www.democracyforamerica.com/verifyinvite

Casting a vote is the most fundamental action we take as citizens. But voting is not a symbolic act -- the last presidential election demonstrated that every vote matters. Our responsibility in the months before November is to ensure that this time, every vote will be counted.

Monday, May 24, 2004

Well-meaning and reasonable persons wandering for the first time into electronic discursive spaces where radical technological developments like molecular nanotechnology or genetic, prosthetic, and cognitive modification medicine are seriously contemplated and debated need to be prepared for repeated and unexpected encounters with belligerent young American males (mostly) who will berate them from a perspective they describe as "libertarianism."

There has been a welcome diminishment of this sort of thing since the height of the “irrational exuberance” of the so-called “dot.com era” of American technology enthusiasm in the 1990s, when stubbornly insistent delusions of an indefinitely prolonged “Long Boom” filled the pages of WIRED magazine and California “Extropians” declared war on both death and taxes -– the one via superlative digital and biomedical technologies, the other via the “spontaneous order” of market triumphalism.

But the dream remains alive more stubbornly and with altogether more self-assurance than one might otherwise expect, from eager online salons of “dynamists” who espouse via neologism the familiar combination of free-market politics and unregulated technological development championed by Virginia Postrel (the editor from 1989 to 2000 of the American market libertarian Reason magazine) in her book The Future and Its Enemies, to the popular online technology magazine Tech Central Station which publishes under the banner, “Where Free Markets Meet Technology.”

Libertarianism in this idiosyncratic, “anarcho-capitalist” denotation tends to have three primary characteristics:

First, these curious market-fundamentalist libertarians take an appealing commonsense Millian (or, I suppose, even more broadly, “Golden-Rulian”) commitment to a general principle of Non-Initiation of Force as if it represented a kind of axiom, and then treat that axiom as the foundation from which one might exhaustively characterize a just, stable, and prosperous social order.

Because the non-initiation principle delineates an essentially negative concept of liberty, I routinely describe these figures as “negative libertarians.” One could usefully distinguish, for example, purely negative libertarians from civil libertarians for whom a “positive” conception of liberty is necessary to affirm what is valuable in a human rights culture, or in the support of civic institutions like a separation of church and state, an independent press, vibrant and widely accessible education and so on. (My use of the terms “negative” and “positive” here is derived from the canonical formulation by Isaiah Berlin.)

Second, negative libertarians will thereupon tend to reduce all conceivable political and public relations to contractual relations (as against acts of force or fraud which they will identify as criminal and so anti-political, or acts of love, familial obligation, or generosity which they will tend to privatize and domesticate as intimate or charitable and "hence" pre-political, or simply not-political).

Third, negative libertarians will tend to identify the outcome of whatever they apprehend as a proper market exchange as always both the most optimally efficient and optimally fair or just, or at any rate the most practical and defensible, outcome on offer. Of course, what actually counts in the world as a “market” outcome is in fact profoundly contingent historically and territorially, and depends on a context of agreements, protocols, implicit and explicit norms, and so on. But technophiliac market libertarians very widely seem to conceive of market orders as spontaneous and universal upwellings out of what is deeply and immutably calculating and acquisitive in human nature as they conceive of it, or as if emerging from the sloppily sloshing tidal forces of supply and demand treated as deeply and immutably analogous to physical principles like the Laws of Thermodynamics.

Because of their stubbornly provincial misreading of contingent generalizations from the market conditions that prevail in their own neighborhoods as if they delineated eternal principles, I will sometimes describe these negative libertarians likewise as “market naturalists.” It is among the many ironies of the apparently irresistible allure of market naturalism among negative libertarian technophiles, that many of these ideologues otherwise cultivate a profound suspicion of deployments of the idea of “nature” to justify customs, institutions, or norms -- especially whenever the deployment of such customary putatively “natural” intuitions would inhibit an embrace of or access to emerging technologies.

Now, against the purported spontaneity and inevitability of so-called “market” relations, market libertarians typically array what they take to be the countervailing and always-only coercive machineries of national states. All governance, and all the conduct of government representatives, is reduced to its “essence” as an expression of Weberian state coercion, and so the market libertarians tend to discern in governing nothing but monotonously reiterated acts of violence and repression. From there, they declare, practically as a matter of fiat, that “market outcomes” (and typically market behavior will be treated as synecdochic with corporate conduct) are always-only non-coercive.

Never mind that extraordinarily many real-world corporations, of course, routinely use physical threats and engage in exploitation and deliver harm in the effort to improve their bottom lines. And never mind that legitimate governments, of course, whatever their flaws, routinely engage in social administration that is the farthest imaginable thing from physical threat. Once one puts the negative libertarian blinders on, every nice social worker and dedicated public servant suddenly becomes a jack-booted thug and every corporate titan, even if he is little better than a mafia don, suddenly becomes a Randian Archetype of boundless dynamism and benevolent creative energy.

Minarchists and neo-classical liberals will for the most part affirm all three of these planks as their own worldview, but for whatever reasons will compromise their applications in certain key areas, usually on utilitarian or strategic political grounds. Typically these compromises are experienced as exceptions that prove the rule rather than deep challenges to the overall correctness of the negative libertarian viewpoint.

While the coterie of technology enthusiasts who espouse market fundamentalism in an undiluted form remains in fact a vanishingly small one (though unbelievably noisy for its scale), it is key to recognize the extent to which the more “mainstream” neo-liberal and neo-conservative practical and institutional universe -- with its incessant drumbeat for deregulation without end, its lust for “market discipline” for the poor and military-industrial welfare entitlements for the rich -- remains importantly (and unfortunately) continuous in its assumptions, in its sense of the problems at hand, and in many of its aims with the extreme “market fundamentalist” negative libertarian worldview this mainstream would presumably, and properly, explicitly disdain.

Of course, quite a few people will affirm the appeal of a non-aggression pact in some form or other, but I think few would go on to affirm its adequacy as a self-evident axiom on the basis of which one might erect an adequate social order. “Non-initiation of force” is a purely negative conception that will rely for its intelligibility and force on all sorts of implicit (some of them likely disavowed) positive conceptions of what constitutes initiation in the first place, what counts as force, what is and isn't violation, and a whole host of assumptions about what all of this is good for. Hence, for many people, defenses of individual autonomy and deep suspicions of authoritarian concentrations of power will be complemented by equally foundational defenses of a need for fairness, say.

Most people are likewise sensitive to the ways in which many so-called “market-exchange” outcomes in particular will often seem profoundly improper in fact, that they can occur under conditions of duress that the beneficiaries of an exchange can readily rationalize away while the losers have relatively little room to protest the outcome. And in any case, few would claim it is even possible to characterize actual contract-making and contract-adhering behavior exclusively in contractual terms, let alone adequately capture all of the complex, unpredictable, often unconscious political relations in which they are enmeshed through the figure of explicit contractual agreement.

If it really is true that the debate between markets and central planning was concluded in the twentieth century, it seems to me that something uninspiring like “regulated markets” was the verdict of that debate. And since there has never been, nor could there ever be, a “pure” market against which one properly arrays an alien and antithetical force of regulation, it seems the time has come to describe the principle of market regulation itself as the norm rather than always as a compromise of a market ideal that does not exist and hence cannot function as a norm.

The modern “liberal” state, whatever its deficiencies and whatever occasional pretensions to the contrary are voiced by those it most empowers, is simply not a straightforward sovereign state in that its powers are not exercised unilaterally. Regulation is always already multilateral in the modern state, contested through a rough-and-tumble separation of powers at the state level and further diffused through the competing demands of diverse civic, cultural, media, business, and consumer interests. To a significant extent broadly liberal, imperfectly democratic hegemony seems to recuperate and so tolerate resistances. Given these complexities, the market libertarians seem to me to be enraptured by models of power, authority, consent, autonomy, and exchange that were already hopelessly simplistic by the nineteenth century, let alone the twenty-first. No doubt this accounts for an important measure of their allure.

We can all easily agree that coercion is wrong. We can all agree that many of the sources of coercion and exploitation inhere in human nature, such as it is, and probably we can agree that conspicuous asymmetries will invite exploitation and abuse. The liberal state seeks to diffuse the ineradicable violence and risk of coercive governance through competing state apparatuses and the multilateral institutions of civic society. Negative libertarians simply define coercion out of existence by declaring "market" outcomes non-coercive by fiat. Liberals recognize the abuses of our system as it is, but seek to ameliorate coercion through reform, while market naturalists seem stubbornly wedded to their word-magic and pie charts.

To what can we attribute the ongoing allure of the sadly sociopathic libertarian imaginary, especially to American technophiles? Perhaps it is a matter of technical-minded people who prefer the clarity of reproducible results to the ongoing and unpredictable reconciliation of contending ends among the multiple stakeholders to social problems. Perhaps it is a matter of the elitism of the highly educated or the early adopters, or the more straightforward elitism of people who believe that they are innately superior and hence will always be among the winners in any outcome where there are winners and losers. Perhaps it is simply the commonplace disavowal by the privileged of the extent to which individual accomplishment inevitably depends on the maintenance of social norms, enforced laws and material infrastructure beyond itself.

Lately, I have begun to suspect that at the temperamental core of the strange enthusiasm of many technophiles for so-called "anarcho-capitalist" dreams of re-inventing the social order is not finally a craving for liberty so much as a fantasy, quite to the contrary, of total, exhaustive control.

This helps account for the fact that negative libertarian technophiles seem less interested in discussing the proximate problems of nanoscale manufacturing and the finite and problematic benefits it will likely confer than in barreling ahead to paeans to "total control over matter."

They salivate over the title of the book From Chance to Choice (in fact, a fine and nuanced bioethical accounting of the benefits and quandaries of genetic medicine), as if biotechnology were about to eliminate chance from our lives and substitute the full determination of morphology -- when it is much more likely that genetic interventions will expand the chances we take along with the choices we make.

Behind all their talk of efficiency and non-violence there lurks this weird micromanagerial fantasy of sitting down and actually contracting explicitly the terms of every public interaction in the hopes of controlling it, getting it right, dictating the details. As if the public life of freedom could be compassed in a prenuptial agreement, as if communication would proceed more ideally were we first to re-invent language ab initio (ask these liber-techians how they feel about Esperanto or Loglan and you will see that this analogy, often enough, is not idle).

But with true freedom one has to accept an ineradicable vulnerability and a real measure of uncertainty. We live in societies with peers, boys. Give up the dreams of total invulnerability, total control, total specification. Take a chance, live a little. Fairness is actually possible. Justice is in our reach. Radical technological development regulated to ensure that costs, risks, and benefits are all fairly shared can emancipate the world. Liberty is so much less than freedom.

II. Spontaneous Order on the Left

“The Internet is antithetical to commerce.”

With this declaration, science fiction novelist and technology writer Cory Doctorow began an editorial essay for the O’Reilly Network (the online home of the key publisher of technical computer books and manuals as well as an organizer of important conferences on media and technology issues) in December, 2001. His next sentence was an epic exhalation of pent up frustration and nervousness: “There, I said it.”

I can well understand his exasperation, as well as his palpable relief at finally pronouncing his verdict.

Contemporary, especially American, technocultural, technofuturist, technophiliac rhetorics sometimes seem fantastically fixated on markets. I have already described an anarcho-capitalist libertarian viewpoint for which market relations are imagined to be uniquely expressive of a competitive, acquisitively maximizing "human nature," for which the sum of these relations is imagined to constitute the space of freedom figured as a "spontaneous order," and for which the principal emancipatory demand that compels the just is the elimination of state regulations that are uniquely imagined to restrain this order from its otherwise inevitable crystallization. This deregulatory demand is typically figured as a radical privatization of the institutions of civic life hitherto associated with the public sphere.

The key contribution of technophiliac free-marketeers to this libertarian discourse would appear to be the regularly reiterated proposal that some particularly disruptive emerging technology or other –- it might be digital networks, or encryption technologies, or surveillance devices, or virtual reality systems, or intelligence-enhancing or virtue-enhancing neuroceuticals, or molecular manufacturing tirelessly replicating cheap goods at the nanoscale, or space elevators, you name it –- is about to arrive on the scene, whereupon the sudden ubiquity of this disruptive superlative technology will either unleash of its own accord the creative energies that will constitute the emergence there and then of the spontaneous market order the libertarians crave, or will at any rate introduce a profound destabilization that will break the crust of convention, bypass the intractable knot of pluralist stakeholder politics, overcome the regulatory impasse and thereby facilitate the emergence of this market order in due course.

In 1996, in an essay that has been widely (but possibly not exactly rightly) taken as an example of such libertarian technophilia, John Perry Barlow notoriously addressed himself in one of the founding political documents of internet technoculture to the “Governments of the Industrial World, you weary giants of flesh and steel.” To them he declared, “I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”

The proximate inspiration for Barlow’s “Declaration of the Independence of Cyberspace” was in fact the sudden and overbearing intrusion of government "decency" censors and opportunistic regulators into a vibrant online culture about which they had taken no time to gain any sense of its customs, institutions, values, or technical capacities. “You have not engaged in our great and gathering conversation... You do not know our culture, our ethics... Our world is different.”

In Cory Doctorow’s essay, “The Carpetbaggers Go Home,” a comparable claim is directed from a self-appointed (there is of course no other kind as yet) representative of a network technoculture to an unwelcome interloper. But where Barlow addresses his attention to representatives of the State, Doctorow addresses himself instead to representatives of Business. Arriving after a decade of network hype conjoined to fervent market enthusiasm, such a shift felt, on first reading, rather like a watershed.

For Barlow, “Cyberspace consists of transactions, relationships, and thought itself, arrayed like a standing wave in the web of our communications. Ours is a world that is both everywhere and nowhere, but it is not where bodies live.” This apparent disavowal of the material basis for digital media and the ongoing imbrication of digital technocultures and their prosthetic practices in bodily life and material culture provoked for a time, and naturally enough, a whole cottage industry of criticism of Barlow’s piece.

Doctorow’s internet, in contrast, is swarming with bodies doing stuff on the streets where they live. “The spare-time economy” he writes of mobs of underemployed techies and geeks unleashed onto the world by the sudden collapse of the 90s internet boom, “has yielded a bountiful harvest of weblogs, Photoshop tennis matches, homebrew Web services and dangerously Seattlean levels of garage-band activity.” He goes on to vividly evoke “untethered forced-leisure gangs… committing random acts of senseless wirelessness, armed with cheap-like-borscht 802.11b cards and antennae made from washers, hot glue, and Pringles cans.”

But in a move precisely analogous to Barlow’s own, Doctorow ascribes to this swarming mess of shifting practices, protocols, and devices an essential nature that he contends is deeply antithetical to a particular kind of practice he disdains. While Barlow proposes that digitality, conceived as a kind of ineffable spirit, is invulnerable to the material coercions of worldly States, Doctorow proposes that internet practices are inherently improvisatory and unreliable in ways that will only rarely provide sustainable occasions for commercial profitability.

“The Internet is loose and wobbly from the bottom up,” writes Doctorow. “TCP/IP is all about non-deterministic routing: Packet A and Packet A-prime may take completely different routes (over transports as varied as twisted pair, co-ax, fiber, sat, and RF) to reach the same destination... Internet… traffic… is positively Brownian, fuzzy and random and bunchy and uncoordinated as a swarm of ants randomwalking through your kitchen.” Here, Doctorow fatally reads the end-to-end principle through the discourse of negative liberty (a move that will return later in the term "Net Neutrality" among other places) and then treats this negative libertarian formulation as an ethos that defines the cyberspatial sprawl across its many layers: “Fuzzy at the bottom: TCP/IP. Fuzzy in the middle: message-passing protocols. Fuzzy on top: services.”
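Doctorow’s point about non-deterministic routing can be made concrete with a toy sketch. This is my own illustration, not anything from his essay, and it is a deliberately crude caricature of real IP routing: the node names and the little mesh are hypothetical. The idea it shows is simply that two packets between the same endpoints may traverse different routes and arrive in any order, yet sequence numbers let the receiving endpoint reassemble the payload reliably on top of an unreliable, "fuzzy" substrate.

```python
# Toy sketch of non-deterministic packet routing (illustrative only).
import random

# A hypothetical mesh: each node knows several possible next hops toward "dst".
NEXT_HOPS = {
    "src": ["a", "b"],
    "a":   ["c", "dst"],
    "b":   ["c", "dst"],
    "c":   ["dst"],
}

def route(node="src"):
    """Follow randomly chosen next hops until the packet reaches dst."""
    path = [node]
    while node != "dst":
        node = random.choice(NEXT_HOPS[node])
        path.append(node)
    return path

def deliver(payload_chunks):
    """Send numbered packets independently; reassemble by sequence number."""
    packets = [(seq, chunk, route()) for seq, chunk in enumerate(payload_chunks)]
    random.shuffle(packets)               # arrival order is not send order
    packets.sort(key=lambda p: p[0])      # the endpoint restores order
    return "".join(chunk for _, chunk, _ in packets)

if __name__ == "__main__":
    random.seed(1)
    print("Packet A route: ", " -> ".join(route()))
    print("Packet A' route:", " -> ".join(route()))
    print(deliver(["he", "llo", " wor", "ld"]))  # reassembles to "hello world"
```

The point of the sketch is Doctorow’s: reliability here is an end-to-end achievement of the endpoints, not a property of any particular path through the fuzzy middle.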

According to Doctorow this indeterminacy of the internet is deeply “antithetical to all our traditional notions about success in branding and business.” This is because “[b]usiness is built around reliability, offering a predictable quality of service from transaction to transaction. Even the messiest, one-off businesses are based on reliability; for example, estate auctioneers are predictable -- indeed, they provide the only touchstone of predictability in one-off sales, through the authorship of dependably consistent auction catalogs.”

But despite this presumed antitheticality, Doctorow ends up talking an inordinate amount about commerce after all: “[I]t's time to leave behind the idea of traditional reliability as value-proposition. The technical reality of the Internet doesn't care about the successful business strategies of yesteryear. The businesses that succeed in the unreliable world will find new ways of providing reliability.” And: “The businesses that succeed [will]... exploit the new reality rather than denying it.”

Given this mild collapse into corporate futurological speak, it comes as a more than mildly incongruous surprise when Doctorow stirringly concludes in the tonalities of a manifesto: “The close-enough-for-rock-n-roll revolution is a-comin' -- to the streets, comrades!”

The problem is that although his language mobilizes the discursive paraphernalia and emotional excitement of radical political emancipation here (even if I don’t doubt that Doctorow’s tongue was firmly planted in his cheek when he penned his revolutionary coda), the piece is really one with no sense of the political in it in the least. One has to assume that the marvelous experimental, collaborative, playful prosthetic practices Doctorow highlights in his piece are valuable enough to protect and defend rather than simply to celebrate as they unfold. But to the extent that this is true, it offers little comfort or protective cover to suggest that conventional commerce cannot finally profit from digital networks (a claim I wouldn’t bet the mortgage on in any case, let alone my life) if, in their quixotic pursuit of such profits, conventionally commercial interests are moved nonetheless to exploit, oppress, or undermine the practices he celebrates.

When Doctorow chuckles at the strategy of corporations to commercialize the Internet by “carv[ing] out pockets of sanity in the anarchy,” there is an ominous sense in which “anarchy” is being treated as substantial here, in a way that would presumably generate some kind of automatically and inherently efficacious resistance to onerous intervention. This is a very familiar trope for technocentric libertarians indifferent to or disdainful of the political as such, and of the demands of democratic politics in particular, except to the extent that one can find a trace of democracy in the duressed contracts, exchanges, and elite-orchestrated consumption practices available under market-fundamentalist construals of capitalism.

In an editorial entitled “Tech Bloom in Full Flower,” written nearly two years later for the Seattle Post-Intelligencer, in November 2003, Alex Steffen offered up an argument that reproduces the contours of Doctorow’s case, and that inspired in me the very same enthusiasm as well as the very same worries. Together with Jamais Cascio, Steffen is the creator and primary producer of the highly influential and simply incomparable WorldChanging Blog, which conjoins discussions of digital networked information and communication technologies with discussions of environmental sustainability, social justice, and global development issues, and the provision of practical suggestions for the collaborative address of social problems.

“The conventional wisdom, during the Tech Boom, was that what drove innovation was the lure of giant piles of cash,” writes Steffen, framing his argument with the familiar trauma of the high-tech crash that ended the century. But “[t]hat idea now rubs shoulders with the Berlin Wall.” (Notice, once again, that the language here has mobilized the imagery of political emancipation.) “What makes creative people tingle are interesting problems, the chance to impress their friends and caffeine. Freed from the pursuit of paper millions, geeks are doing what geeks, by nature, really want to be doing: making cool stuff.” Against the drear banality of bourgeois profitability, Steffen reminds us that creativity is driven as often as not by the pursuit of pleasure and, as you will remember Clay Shirky pointing out already in a related context, a desire for attention.

And so: “In basements, garages and the empty warehouses that once held the Next Big Thing, tech-savvy folks are huddled over their laptops, working together online to give away the future. The result? We're seeing a surge of technological creativity that easily trumps anything we dreamed of with the dot-com PR guys crooning in our ears.”

Steffen then surveys a scene with which the reader will now be quite familiar, and provides a useful summary of the varieties of social software to which I devote no small amount of my own hopeful attentions:

There's the software, such as Linux, where teams of coders are working collaboratively in every corner of the globe to perfect what's rapidly becoming the world's most important operating system. "Peer-to-peer" programs, Napster's cousins, are busily creating networks of millions of users all giving each other software, movies, music, books -- nearly anything that can be digitized, whether they own it or not. "Distributed computing" projects use the idle power of volunteers' home PCs to tackle massive tasks such as mapping genes and scanning the stars for intelligent life.

There's the hardware. "WiFi" aficionados are manically building free, ubiquitous, high-speed wireless Internet coverage for entire cities. GeekCorps is off wiring the world's poor. Others are hacking together "Freekboxes" from free software and recycled parts and shipping them to developing world human rights activists.

There's even the content. Slashdot, spinning the planet's best "news for nerds" out of little more than the enthusiasm of its users, and Wikipedia, compiling the world's first collaboratively built encyclopedia. Or the countless Web logs, travel guides, online libraries and college classes (like MIT's OpenCourseWare). Or Craigslist and Tribe.Net and the thousand other new free ways to find a date, a roommate or an honest mechanic. There's even a new form of copyright, the Creative Commons license, to help you give stuff away while protecting it from theft -- a legal system for sharing, a "copyleft."

Steffen describes these prosthetic practices, taken together, as “The Tech Bloom” (in contrast to the commercial “Tech Boom” of the 1990s), an overabundant proliferation of free creative expression, collaboration, and quite a lot of making-do.

While I find it nearly as difficult to restrain my enthusiasm for the practices that exercise the imaginations of Doctorow and Steffen as I find it to restrain my distaste for the practices that exercise the imaginations of some market libertarians, what I want to register here, yet again, is my worry that there is a disavowal of the substance of the political in this discourse even while it depends on a figural conjuration of the political to express its ambitions and communicate its joys. In this it is the possible continuities rather than the conspicuous differences that I would want to highlight between the market libertarians and these, call them, "progressive experimentalists" here.

Steffen concludes his piece with a vivid tableau that would concretize the distinction between the two: “If the Tech Boom had a graven image, it was the bull on Wall Street. The Tech Bloom is more likely to be found dancing around the desert at Burning Man, the annual festival where money is taboo, everything's a gift and creative participation is synonymous with cool.”

But the trouble with “The Tech Bloom” is that it can too easily degenerate as a discourse into another variation on “spontaneous order,” say, Spontaneous Order with a Human Face.

Steffen’s ambitions, like Doctorow’s, seem to me to be profoundly worldly ones, but the problem is that it is only politics and its interminable reconciliation of contending aspirations that gives you a world.

Burning Man isn’t the world, it’s a festival.

And festivals don’t scale globally.

This is not an expression of curmudgeonly hostility for the festive as such, nor is it an expression of resignation that should mobilize the can-do spirit of various technophilianarchic troopers of the temperamental left or the temperamental right.

Festivals are festivals to an important extent precisely because they are not the world. It is not a matter of indifference to me that whenever they hanker after the status of a polis, festivals soon enough decline into sewers (and this tends to be true literally as well as figuratively).

Festivals want a world, even as they take their momentary measure of distance from the world on which they depend. It is never only those who join up or join in to public practices who constitute the stakeholders in those practices. It is never true that even the best most beneficent efforts fail to exact their costs and impose their risks. It does not denigrate pleasure to note that pleasure is not the same thing as political legitimacy and that political legitimacy is indispensable to freedom. It does not denigrate voluntary participation to note that voluntary participation is not the same thing as democracy and that democracy has come to be indispensable to freedom. It does not denigrate collaboration to note that neither is collaboration yet the same thing as sharing the world with peers who differ ineradicably from us in their capacities, their knowledges, and their ends.

Collaboration, contestation, and consent are public scenes that depend on a ritual artifice invigorated, consolidated, and transformed through our own recourse to it, articulated through moral and ethical norms, laws backed by legitimate force, contingent protocols for the exchange of information, services, and goods, and any number of architectural constraints. There is nothing "natural" or "spontaneous" about politics, and certainly not about democratic politics. Technology will not deliver us a more perfect union: For democracies, the formation, ordination, and establishment of that more perfect union is a lot that falls inescapably and interminably to "we the people," ourselves.

Sunday, May 23, 2004

"Here's a little number I wrote the other day while out duck hunting with a judge.

The FCC Song

Fuck you very much the FCC
Fuck you very much for fining me
Five thousand bucks a fuck
So I'm really out of luck
That's more than Heidi Fleiss was charging me

So fuck you very much the FCC
for proving that free speech just isn't free
Clear Channel's a dear channel
So Howard Stern must go
Attorney General Ashcroft doesn't like strong words and so
He's charging twice as much as all the drugs for Rush Limbaugh
So fuck you all so very much

So fuck you very much, Dear Mr. Bush
For heroically sitting on your tush
For Halliburton, Enron, all the companies who fail
Let's send them a clear signal and stick Martha straight in jail
She's an uppity rich bitch
and at least she isn't male
So fuck you all so very much

So fuck you dickhead Mr. Cheney too
Fuck you and fuck everything you do
Your pacemaker must be a fake
You haven't got a heart
As far as I'm concerned you're just a pasty-faced old fart
And as for Condoleeza she's an intellectual tart
So fuck you all so very much

So fuck you very much, the EPA
For giving all Alaska's oil away
It really is a bummer
When I can't fill my hummer
The ozone's a nogozone now that Arnold's here to say:
"The nuclear winter games are going to take place in LA"
So fuck you all so very much

So what if the planet fails
Let's save the great white males
And fuck you all so very much"

There is a saying that nothing is inevitable but death and taxes, but it is beginning to look, strangely enough, as if taxes will end up being the more inevitable of the two. In fact, reading the reports of gerontologists these days (for highly readable accounts of what I am talking about, see these pieces by Aubrey de Grey here and here) sometimes suggests that if we just put our tax dollars to work in the right places we might have the whole death thing licked in no time at all.

Once upon a time, aging meant a shrivelling of features, a creeping infirmity of frame, a diminution of countless capacities, the loss of libido and of memory, a disastrously rising susceptibility to disease. Already, pharmacological interventions are changing much of what it has meant to embark on the profound metabolic processes we customarily associate with aging. It is such a commonplace to cynically observe that face lifts and Viagra have not in fact conferred immortality upon the "foolish" and "superficial" Boomer Generation that I think we sometimes overlook just how profound a transformation these interventions have introduced into our sense of what we can properly hope for and expect from a human life.

With each passing year, indeed with each passing month, medical science offers up to the swelling ranks of gerontocrats in the "developed" world genetic, prosthetic, and pharmacological interventions into what have been called the “diseases of aging.” Although it is foolish to leap off the deep end and start talking in an alarmist or ecstatic fashion about the imminent arrival of human “immortality,” one has to wonder just how proximate is the date of the arrival of the longevity singularity, the threshold date when average life expectancy begins to increase one year per year in a sustained and sustainable fashion.

As our assumptions and expectations about what it must mean for a human body to age fall one by one in the face of medical intervention, I begin to wonder if there really is such a thing as "aging" in the first place. Is "aging" a word that will soon outlive its usefulness?

Maybe "aging" is a word like "instinct": Just as when we propose to explain a behavior in the natural world by positing an instinct as its source we are admitting our ignorance about its actual causes while following the forms of an explanation of causes, maybe the word "aging" is also one we have used to pretend mastery in the face of deep perplexity.

What remains of "aging" when "its" underlying processes and outward forms explode into a rich tableau of multiple and competing descriptions, each one of which then, in turn, becomes a field for intervention rather than a "natural" limit to contemplate?

Scientists are beginning to speak not just of "diseases of aging," now, but of "aging as a disease." And inspired by this new confidence, some technophiles are beginning to call for a "War on Aging." But is it really right to think of "aging" as a singular enemy we soon hope to be equal to, or is it that we are discovering that "aging" is another artifact of ignorance, a shorthand label for complex realities we never before could get a handle on? Won't it remain true, for example, that the post-senescent healthcare provision of actually living human beings will involve significantly different sorts of treatments and concerns than did their pre-senescent healthcare, just as pre-adolescent and post-adolescent healthcare differ in some significant respects? To render much or even all (surely a dubious hope for quite some time to come) the damage hitherto associated with statistically typical experiences of senescent processes negligible through medicine is not the same thing as eliminating senescence as such through medicine, is it?

Treating "aging" as a natural monolithic thing too easily misleads us into imagining that our interventions into its many forms amount to a comparable intervention into the other mysterious monoliths with which "aging" has been associated historically –- mortality, finitude, and so on. Quite apart from questions about whether or not any kind of narrative coherence for a legible "self" could be prolonged to the timescales celebrated by some enthusiasts of longevity and rejuvenation medicine, there is nothing to suggest that increasing healthy post-senescent longevity would confer even bodily "immortality" on beings still prone at all to disease, mischief, or mischance. Nor should we imagine that tweaking our biology will confer on us some kind of godhood.

If anything, one hopes the promise of the ongoing therapeutic amelioration of the processes and effects we have historically associated with "aging" will mean that we will cease to freight these pernicious processes with this enormous metaphysical baggage in the first place. Since even modest increases in average life expectancy, however healthy, will introduce unprecedented problems and promises for global stability, social justice, welfare provision, and environmental sustainability, among other things, it seems best not to get too distracted from these urgent inevitabilities by dwelling on what looks to me like little more than confused vestigial theological meditations on eternity.

As we learn that there is not just one way that "aging" threatens to claim our lives, we set out upon the road along which ever more of our lives are our own to claim. Perhaps the point will not be so much to defeat "aging" as to proliferate its forms and so replace it simply with the story of our lives.

Saturday, May 22, 2004

"Last week, over 4,000 of you took action to call out elected officials who are denying students their voting rights. Remember, it is ILLEGAL for local election officials to claim that campus and school addresses are not considered a "permanent" residence when it comes to where a student can vote. Read more on this explosive issue. (http://action.rockthevote.org/ctt.asp?u=809835&l=3352)

[viabOING bOING] Cory Doctorow points to this "Great NYT correction": "An article last Wednesday about South Africa's wine industry referred incorrectly to Thabani Cellars, a winery there. It is not minority-owned. (As a black man, the owner, Jabulani Ntshangase, belongs to the country's majority.)" It is because of this sort of thing, writes Doctorow, that he prefers the term "world-majority" over "minority" when referencing people of color. Suits me. And while we're at it, it doesn't hurt to remember that women also represent a marginal majority of the population, and so the commonplace "women and other minorities" is likewise a misnomer. It really is clear as daylight that the whole language of "minority" ascription largely amounts to the "language of a minority" -- namely, privileged white guys -- who deploy it first and foremost to cloak themselves in a majoritarian mantle that claims to speak for humanity, but is in fact a wizened, provincial, and narrowly self-interested line of bull.

[via the fabulous Code Pink] How often do our key elected officials have the courage to "Speak Truth to Power?"

Nancy Pelosi, minority leader of the House of Representatives, spoke out boldly and courageously in an interview with the San Francisco Chronicle, charging Bush with incompetence. In so doing, Pelosi has broken through the veil of silence and held Bush accountable for hundreds of deaths of U.S. soldiers in Iraq. Click here for full story...

CODEPINK supports Pelosi and we must take action now to let her know. Naturally, the backlash is coming at her fast and furious but she is not backing down. Click here for follow up story...

We urge you to take action today; email her, call her offices and tell her how inspired you are with her courage to tell these truths.

Please contact the office of Rep. Pelosi, by calling the DC office at 202-225-4965, the San Francisco office at 415-556-4862, or email sf.nancy@mail.house.gov

We also rally CODEPINK women and men to share your support across the country by writing a letter to the editor of your local paper. Send us what you write and we can post it to help others: webmistress@codepinkalert.org

Pass this on to other women and men you know, we must rally to support Nancy!

As an organization we stand ready to support spontaneous radical and progressive actions. For this week's updates on actions at Halliburton and Ft. Stewart, go to www.codepinkalert.org.

I write to you as a concerned citizen about HR 4077, "The Piracy Deterrence and Education Act of 2004." I agree with Public Knowledge and others that this bill is misguided and you should oppose it for the following reasons:

The Bill Could Make My Legal Downloads a Crime: I am concerned with the way in which the Congress seems to change copyright law at the whim of the content industry. This bill drastically changes how criminal copyright infringement is enforced, while at the same time it lowers the standard to prove such infringement. Recent raids by the Department of Justice under “Operation Fastlink” indicate that current law is more than sufficient to apprehend copyright infringers. The bill would effectively criminalize the built-in music sharing features of Apple's iTunes, the market-leading online music store, as well as uses of WiFi technology. Additionally, HR 4077 could make the use of copyrighted works for criticism and education a crime.

The Bill Takes Sharing My Information Too Far: The bill directs the FBI to facilitate the sharing of information among law enforcement agencies, Internet Service Providers (ISPs), and copyright owners concerning copyright infringement on the Internet. Essentially, content companies want ISPs to keep a database on their subscribers' online activities. It is disconcerting that at a minimum, the bill does not specify: what information ISPs may track; what information ISPs may share; or one single procedure that government agencies and / or copyright owners must comply with to gain access to the information. This is a violation of my privacy and my constitutional rights. The Digital Millennium Copyright Act set up specific procedures for obtaining information on alleged copyright infringers—procedures that protect my rights to privacy and due process. HR 4077 puts my constitutional rights in the hands of large content companies.
For these reasons, I ask that you vote against HR 4077.
Sincerely,

Ken Silber (whose writing regularly provides one of the few reasons to check out Tech Central Station, despite the, er, many reasons not to*), has written an interesting article “Pondering Animals,” in which he counters some of the more “expansive” and “romantic” claims made by human animals to justify their political commitments on behalf of nonhuman ones. Unlike many writers who engage in debunking exercises on this theme, Silber insists that “[r]ecognition of the limits of animal intelligence does not preclude a concern with animal welfare.” He goes on to say, rightly, “reducing animal suffering” is possible and worthy, but that we should be clearer-headed about the “choices and tradeoffs to be made.”

In a piece of mine, “Impurity, Solidarity, and Pleasure in the Politics of Vegetarian Identities,” I make claims in what I think is a complementary vein, and I want to offer up some of the choicer bits from it here (if for no other reason than that maybe in this shortened form people might be more willing to wade through the somewhat stuffy academese of the essay).

Although I am very interested in politics to ameliorate the suffering of nonhuman animals at the hands of the human ones (I've been a vegetarian for fourteen years), I think quite a lot can also be said about the ways the institutionalization of nonhuman animal suffering supports cultures of violence and exploitation between humans, and that this should also be more of a focus of animal rights advocacy. In my essay I claim, among other things:

“[W]hat we think of as culture... the public realm, the space of politics, the sphere of civility have all been produced and policed on the basis of an ongoing practical institutional and discursive demarcation of human from nonhuman animals...

“The task of this… demarcation among animals is [not] primarily to deny the reality of the richness of experience of those animals who fall to the wrong side of the divide (though this denial is sometimes an effect), but to dismiss the relevance of that suffering to ethical life.

“The crucial consequence of the human/nonhuman animal demarcation is the constitution of a sprawling class of beings whose pains and pleasures are figured simultaneously as real, but as pains and pleasures that do not matter. Further, the cultural and institutional machineries by means of which social divisions between human and nonhuman animals are drawn and maintained buttress and "naturalize" other vocabularies of oppression and acceptable violence.

“That is to say, racist, sexist, and heterosexist discourses (and other practices and institutions which accompany them), for example – not to mention discourses of childhood, madness, illness, foreignness, criminality – are always "bestializing" discourses as well; they always rely for their intelligibility importantly or in part on the figure of the being whose experience is real but does not matter and on the assignment of that status or an approximation of it onto another bestialized class of individuals….

“It is crucial to notice in this connection that even practices and vocabularies of liberation, whenever they are mobilized and organized by the conventional claim that "we will no longer be treated as ‘mere’ animals!" necessarily simultaneously undermine and reanimate certain conspicuously asymmetrical relations of power, by challenging their own location with respect to the human/nonhuman demarcation but otherwise fortifying it…

“It cannot properly be the ambition of vegetarian criticism or activism to eliminate this distinction altogether, however. Not even the most utopian advocates for animal rights expect -- barring radical genetic and prosthetic interventions -- that one day nonhuman animals will find their way to the voting booth, or urge the propriety of extending to nonhuman predatory animals, for example, human standards of fair play or the penalties of law.

"What is wanted instead is a reconceptualization of the political in which both human and nonhuman animals figure as actors and potential peers. This reconceptualization would be facilitated I think by the insistence that the relation of a human being to his ham sandwich or her leather jacket is a relation between animals, always-already a political relation between potential peers, and not a prepolitical, instrumental relation of human beings to the realization of their wants.”

*The slogan of TCS is "Where Technology Meets Free Markets" -- a fantasized meeting-place many of us progressive technology writers and activists know all too well already as the fabled land of Libertopia, that nowheresville where Climate Change Denial is cool, where Precaution is another Pinko Commie Plot, and where deregulation unto lawlessness is somehow (This Time!) A Good Idea. Atrios recently summarily dismissed what he called "Tech Central Stupid" as "stupid science from everyone's favorite PR-firm-masquerading-as-journalism." Harsh, but close to my own assessment of what goes on there all too often.

Wednesday, May 19, 2004

The Electronic Frontier Foundation (EFF) this week filed an amicus brief in Benavidez v. Shelley, a lawsuit brought by California's Riverside County and several disability rights groups against California Secretary of State Kevin Shelley. The suit seeks to delay Shelley's order, issued April 30, that every California voter have the option to cast a paper ballot in the November presidential election. In its brief, EFF, joined by VerifiedVoting.org, the California Voters Foundation, and VotersUnite!, argues that the court should deny this request. "There is substantial evidence supporting Shelley's decision," said EFF Legal Director Cindy Cohn. "A long list of incidents involving electronic voting machines in California and nationwide shows that Shelley's concerns about security are strongly justified."

In addition, EFF argues that the lawsuit sets up a false dichotomy between secure electronic voting machines and ones that provide access for the disabled to vote in privacy. Peter Benavidez, a partly blind man in Los Angeles, initiated the suit because he believed he wouldn't be able to vote without assistance if the state de-certified its electronic voting machines. But this isn't true. "There is technology available right now that would give the disabled access while not compromising security," said Cohn. She added that there is additional evidence showing that many of the electronic machines already in use aren't more accessible in practice than ones that produce a paper trail.

The false distinction between accessibility and security in electronic voting machines is also the subject of an EFF white paper released this week. "Accessibility and Auditability in Electronic Voting," authored primarily by EFF Activism Coordinator Ren Bucholz, demonstrates that there are many already-existing technologies that would give California voters, including the disabled, a chance to leave a paper trail when they vote in November. Bucholz offers several ways for California counties to comply with Shelley's order, using currently existing technologies, while also remaining accessible.

"Opponents of Shelley's order imply that the push toward secure, verifiable elections must pull us away from accessible elections," Bucholz said. "But accessible, federally certified machines are available today, and more are scheduled for release in the coming months."

[New York Times, via EFF] “A federal advisory committee says Congress should pass laws to protect the civil liberties of Americans when the government sifts through computer records and data files for information about terrorists.

“The eight-member panel, which includes former officials with decades of high-level government experience, found that the Defense Department and many other agencies were collecting and using "personally identifiable information on U.S. persons for national security and law enforcement purposes." Some of these activities, it said, resemble the Pentagon program initially known as Total Information Awareness, which was intended to catch terrorists before they struck, by monitoring e-mail messages and databases of financial, medical and travel information....”

“One of the panel's most important recommendations is to involve the courts in deciding when the government can search electronic databases.

“In general, it said, the Defense Department and other federal agencies should be required to obtain approval from a special federal court ‘before engaging in data mining with personally identifiable information concerning U.S. persons.’" Ya think?

“Permitted in principle by the laws of physics” is a larger set of propositions than “stuff that can be plausibly engineered” is a larger set of propositions than “stuff people actually want” is a larger set of propositions than “stuff people are willing to pay for” is a larger set of propositions than “things people still want in the longer-term that they wanted enough to pay for in the shorter-term.”

Glib futurist types are of course notoriously quick to pronounce outcomes “imminent” and “inevitable” (genetically-engineered immortality! nanotech abundance! uploading consciousness! superintelligent AI! bigger penises!), just because a survey of science implies to them that an outcome they especially desire or dread is “permitted in principle by the laws of physics.” But nested within that set like concentric rings on a tree-trunk are ever more restricted and more plausible sets, of which the target set at the center is the set of things people tend to still want enough over the longer term that they are satisfied to pay (or have paid) for them.

I think it is a good exercise, and sometimes a good penance, for futurists to take special care around their use of the word "inevitable" to describe outcomes that are radically different from states of affairs that obtain today. My suspicion is that this is a word technophiles actually use to signal the attitude, "okay, I'm not interested in arguing with you anymore if you don't accept the plausibility of whatever wild-eyed future outcome I find especially appealing or appalling myself." Too often, “inevitable” is a word that signals an inability to chart an intelligible sequence of developmental stages that could plausibly delineate a path from where we are to whatever superlative state is imagined to be likely and attractive. And by plausible, I mean both technically and politically plausible.

Russ Kick of The Memory Hole points to an article in The Financial Times (subscription required), "US Turns to Private Sector for Spies." The article includes this quote from a Mr. Tittle who once worked for the NSA: "An awful lot of activity has been outsourced. Anything that has to do with collection or analysis of intelligence data is being done by the private sector." Kick comments that while no one is certain whether it actually saves the government any money to shift intelligence work to the private sector (though doubtless this is the primary pretext for the shift, which is accelerating rapidly under the present Bush Administration), what is beyond doubt is that the shift is making it more difficult, and sometimes impossible, to determine just what the spooks are up to. Unlike the CIA, the NSA, and the rest of the Alphabet Soup Spook Archipelago, private corporations are not subject to the Freedom of Information Act. When freedom rings, expect the phone to be tapped.

[via Planned Parenthood Action Network] ALL women should have the right to reproductive freedom -- regardless of their occupation or geographical location, right? Unfortunately our government doesn't agree. Current law prohibits women in the United States military and their dependents stationed overseas from obtaining abortions in military hospitals unless the woman's life is in danger. The only other exceptions are cases of rape or incest, and even then the woman must pay for the procedure herself. Remind Congress that women in the military should have the right to make their own reproductive choices.

Michael Anissimov, a director of the Singularity Institute for Artificial Intelligence, commented on my earlier post on Roboethics, and this led to a series of exchanges over the course of this afternoon which I wanted to blog here. The language gets a little precious and technical, so you'll have to forgive all that, but the issues perplex me deeply and I would welcome further comments and questions.

Michael writes: “The term "Singularity", when used in the correct, Vingean sense, as it was used repeatedly at this recent Foresight Gathering, can be a quite useful term. It simply refers to the fact that our model of the future gets a lot fuzzier when a smarter-than-human intelligence hits the block. In the same way that chimps could never have imagined the detailed consequences of the expanding of the prefrontal cortex, humans can certainly not imagine the detailed consequences of a mind with a completely different design than our own, running at totally different speeds, with the ability to improve its own architecture. Asking what the "cultural consequences" of such an event could be sort of misses the point; our entire history of culture, art, thinking, science, and technology is based on a 3-lb cluster of nerve cells running at 200Hz, without the ability to self-modify, filled with evolutionary luggage and ancestral brainware adapted to our specific niche. When you step outside of that same-old same-old, you are playing with different rules.

“Smooth function of postulated technologies is not necessary for the creation of transhuman intelligence. That would eventually be possible with, say, linearly accelerating technology. It's a problem at the intersection of cognitive science and engineering. I can sympathize with your distrust of the more extreme-sounding Singularity discourses, but let me remind you that we have much the same issue in nanotech - the survival of the human species is literally at stake in both issues. We live in a time where issues that superficially sound like "techno-apocalyptic survivalist ranting conjoined to the tropic paraphernalia of transcendental theology" now correspond to actual issues in reality. Just look at CRN, for example.

“Anyway, the serious concerns of Singularity activists (to be kept distinct from random technology enthusiasts using the word for fun) are summed up just perfectly in the Transhumanist FAQ:

"The arrival of superintelligence will clearly deal a heavy blow to anthropocentric worldviews. Much more important than its philosophical implications, however, would be its practical effects. Creating superintelligence may be the last invention that humans will ever need to make, since superintelligences could themselves take care of further scientific and technological development. They would do so more effectively than humans. Biological humanity would no longer be the smartest life form on the block."

To which I responded:

“Since even Vinge's canonical formulations on "singularity" contain both claims about "exponential runaways" as well as claims about the impact of an arrival of either artificial or augmented greater-than-normatively-human intelligence, I think it is important not to be so sure confusions are just a result of sloppy or "incorrect" terminology. The confusions go deeper than that, it seems to me.

“Remember that to an important extent "fuzziness" in prediction has always been central to our understanding of what "futurity" means. To the extent that singularity-talk proposes a kind of hyper-futurity in this way, I think it makes sense to interrogate just what inspires us to expect this (and apparently why so many seem to desire it so).

“I think I do miss your point that "the entire history of culture, art, thinking, science, and technology is based on a 3-lb cluster of nerve cells running at 200Hz," since, depending on precisely what you mean by the phrase "based on," this is not a point that makes much sense to me. Heap up a pile of 3-lb nerve cell clusters and you're not going to find culture or art or science conjuring up there. (And I say this as a materialist.)

“Your point about smooth function not being a necessary assumption, since linearly accelerating progress can produce greater-than-normatively human intelligences, seems absolutely right to me, but then it is no longer clear to me why, under such a state of affairs, it clarifies much to resort to "singularity"-talk in the first place to think these eventualities through. This is, of course, why I called attention to the "roboethics" rubric in the first place.

“I think we agree about how the more extreme, complacent, transcendentalizing freightings of the term singularity are often contrary to clear thinking and necessary collaborative work to ensure the developmental outcomes we desire from technology. I guess ultimately my trouble with "singularity" ends up being that it is hard for me to figure out just what singularity-talk contributes to discourses about the quandaries of technological intelligence augmentation that are actually on offer.

“Finally, I don't see how the arrival of greater-than-normatively human technoconstituted intelligence will deal a greater or more decisive blow to anthropocentrism than did Copernicus or Freud already, say. To the extent that it makes sense to be anthropocentric right now (which is to say, not very much), it seems to me new technologies by expanding the reach of our capacity to re-write ourselves in the image of our values will if anything expand (while certainly changing) what anthropomorphism might mean to us.”

Later this afternoon, I posted a comment in an unrelated conversational thread on wta-talk:

"Roboethics," unlike some superlative-state AI activism/speculation, seems concerned less with questions such as "should AIs have rights?" or "will AIs be righteous?" than with questions like: what are the ethical uses to which robots, and automation more generally, can be put, and with what consequences? (Read Marshall Brain's blogs to see just how real-world relevant such questions already are.) And I doubt ordinary people will laugh at these questions, inasmuch as their livelihoods and sometimes their lives are on the line here right now.

And Michael responded to this in a way that more or less picked up the conversation reproduced above where we left off. I quote my own response to him (and Michael’s comments and objections are interspersed within my responses to him):

[> Michael wrote:]

> The problem with overfocus on nearer-term applications of robots and
> automation is that it neglects the slightly-longer-term risk of software
> programs with human-surpassing intelligence.

I responded:

This would only be true if something about a nearer-term focus precludes deliberation about longer-term and superlative-state risks/benefits. I don't agree that it does. In fact, I suspect that a nearer-term focus provides tools for more reasonable expectations about the longer term than does a more purely "long-term" discussion that leapfrogs all the proximate developmental stages that will likely stand between where we are and where we'll be.

> This also ties in with
> conventional issues of robots and automation - say I use nanocomputers,
> in 2014 or whenever they are available, to run an intelligent software
> program whose purpose is to perform weather surveillance. Without the
> complex goal system structure necessary to see human beings as sentient
> entities worthy of respect, there may be little stopping such a software
> program from lapsing into recursive self-improvement and paving over the
> surface of the Earth with sensors in order to maximize its ability to
> determine weather patterns.

There are all kinds of assumptions about just what form superlative-state AI would take in this scenario. How do you know that the kinds of superintelligence that will perform sophisticated expert functionality will be of a kind to produce entitative goal systems for which the category "respect for biological sentience" is even relevant? Why not assume a pre-entitative monomaniacal expert system for which the relevant safeguards will be sequestration and an elaborate (likely also machinic) oversight regime? Why assume in advance that you are making a kind of being that requires a superego, rather than a big lumbering piece of machinery that might go out of control unless you can shut it off or stop it from doing irreparable damage? (Assume for the moment that I already know all of the obvious things you think this objection of mine signals I don't know, and then take the objection seriously anyhow.)

> Even if you figure that the probability of
> this happening in a given year or whatever is only 1%, 6 billion lives
> could be on the line. Which would make that given scenario more worthy
> of attention than a scenario in which, say, a given automation advance
> seems to entail a 50% risk that the salaries of a mere 100,000 people
> drop by 10%. (Which seems to be what you are talking about.)

Is the scenario you are talking about worthy of attention? Certainly it is! Don't mistake my objections. I just don't estimate the kinds of predictions that tend to preoccupy singularity enthusiasts/worriers as plausible enough to devote much of my own attention to them. There's nothing wrong with the fact that bright earnest people do take them more seriously than I do. Still, it is perfectly appropriate to argue about why people weight these expectations differently, what kinds of motivations may contribute to these different weightings, what the rhetorical effects of various kinds of arguments for them are, what we can say about cultures-of-belief for which these preoccupations and not others dominate, etc. Now, as for the specific numbers you are throwing around here, permit me to return to them in a moment.

> Just to quickly answer the two questions that were brought up;
>
> Q: Should AIs have rights?
> A: It probably doesn't matter what humans say except insofar as they
> can have an input today in the creation process, since any
> human-surpassing AI will be able to do whatever it wants because of its
> superior intelligence and the technologies it will develop.

See, I think this demonstrates a really pernicious effect of taking singularity-talk too seriously in its transcendentalizing mode. Look at all that you're apparently willing to give up here, presumably reluctantly, as a consequence of what you no doubt imagine to be a "hard-boiled" contemplation of various developmental extrapolations.

If greater-than-normatively human intelligence emerges as a function of augmentations, or as a property of technologically-assisted collaboration, say, then certainly it will still matter what humans say -- and these are two superintelligence scenarios, to say nothing of even more incrementalist accounts for which the human (whatever that will come to mean) still manifestly matters here. I am not sure I can grant even what you mean by "input in the creation process" for an autonomous superintelligence, to the extent that I am not ready to grant that even there the most plausible scenarios will involve artificial-conscience-engineering rather than analogues to the sorts of failsafes that big dumb machines more powerful than humans already require. Look at how very few actual superlative projection-possibilities are inspiring how sweeping a sense of what the future is likely to look like for you, and look how they are restricting what you are willing to entertain as likely and as worthy of intervening in now. You've done a lot of math, but have you done the right math?

> Q: Will AIs be righteous?
> A: Hopefully in ways that either all or the vast majority of humans see
> as desirable. This depends most on how initial conditions are set up.

> Software programs that can develop
> nanotechnology (or better manufacturing technologies) with their
> accelerated, transhuman brains and use them to kill off all humans
> unless they specifically see humans as entities worthy of value, are
> worth worrying about.

Well, sure, I guess. But why assume that this is a different sort of discussion than gray goo already is? Why be so confident that more-than-normatively human intelligent expert systems will have "brains" "they" use in the first place? Why be so sure anything will be "worthy" for "them" in the sense you mean? A rat gnaws a wire in Kamchatka and a thermonuclear device is launched into space precipitating apocalypse (or whatever). Why are we talking about some geeks in Sunnyvale creating a possibly malicious superhuman AI again? (I know, I know -- I kid because I love!)

> Whether or not normal people laugh at the topic
> is an issue of memetics to be taken up *after* we determine the
> differential importance of addressing any given risk. If I can spend 20
> minutes convincing someone that the negative impact of a given
> automation process could put 100,000 people out of a job, thereby
> lowering the likelihood of that negative impact by, say, 1%, then that
> is peanuts relative to the utility of spending 20 minutes convincing
> someone that the potential negative impact of superintelligence could
> entail the demise of humanity, and having even a 0.00001% positive
> effect in that direction.

People who talk about "singularities" with one breath should hesitate to start flinging out hard numbers characterizing risk-assessments of superlative state tech with the other breath. That sounds harsh, but I don't mean it that way at all! I am registering a real and ongoing perplexity of mine (after a decade of talking to singularitarian-types). Where are your caveats? Where can your confidence be coming from here? Isn't your whole point of contention with me that my frame of reference for assessment is derived from too modest and linear a set of developmental assumptions? But precisely to the extent that you break from such a predictive frame aren't you required to qualify your claims and weight your assessments in light of those qualifications? It isn't enough to posit existential threats willy-nilly and imagine that their scale alone justifies a primary focus on just those developmental scenarios that have come to dominate your fancy.

> People can laugh all they want, our goal
> should not be to get every last person to take us seriously, but to
> actually maximize the probability that we transition into a peaceful and
> enjoyable future.

I definitely agree with you. People find laughable any number of things I worry about and expect to happen, too. But this gives me special responsibilities in making my case, it seems to me. I need to be especially generous in explaining my reasons to those who disagree with me, I need to be especially attentive to the ways in which non-rational factors (fancies, fears) may contribute to my own sense of the allure of my very unconventional beliefs, I need to expect to chart very detailed argumentative pathways to lead my interlocutors from where they are to where I want them to be, I need to take very seriously the objections and alternatives to my own very controversial beliefs, etc.

> It is my belief that lowering the probability of existential risk is
> what transhumanists should *really* be concerned about. (I think Nick
> Bostrom might agree.) There is already enough cultural and technological
> momentum already present that the eventual availability of transhuman
> modifications seems extremely likely, *given that we don't wipe
> ourselves out first*.

I definitely agree with you. For the next half-century my own expectation is that post-humanist (and, let us hope, not de-humanizing) technologies will be primarily genetic, prosthetic, and cognitive. I personally think the focus needs to be on ensuring that the costs, risks, and benefits of technology development are fair, so as to limit their destabilizing effects and maximize general welfare, and on ensuring that funding and oversight are internationalized so that they don't incubate new kinds of devastating arms races and unmanageable terrorism. I honestly don't see how the word "singularity" can introduce much but confusion and distraction into these many necessary conversations. That is why I express these concerns aloud. Not because I think there is anything logically the matter with the ideas in the abstract, or with the good people (among them friends of mine!) who enjoy them.

> If we mess up and kill ourselves, then no
> transhuman future, no space colonization, no uploading, no immortality,
> no nothing. The instant your probability estimate of superintelligent
> AIs wiping out humanity goes from zero to anything above that, say one
> in a million, it immediately becomes something worth worrying about.

I only disagree with you because the existential threats that seem to me more likely than the particular threat of a hard-takeoff totalizing superintelligent AI singularity -- bioengineered pathogens, proliferating nukes and other WMD, weaponized nanoscale devices, catastrophic climate change, etc. -- are themselves too numerous to justify spending much time on singularity-talk, however compelling it might be in principle. YMMV, of course!

> Robots that take our order at McDonalds are basically worth ignoring.
> Existential risk is the number one issue, and I think that a wise
> strategy for countering such risk involves covering two main bases -
> bits and atoms - AI and nano - which is what we have SIAI and CRN for.

My guess is that dealing with robots at McDonalds now is more likely than thinking about superlative-state tech to provide the practical, strategic, cultural, problem-solving, conceptual, and networking resources from which will eventually come the very tools we will later need to deal with the actually superlative-state tech that will evolve out of our contemporary quandaries to become our shared future. I agree with you about CRN, and hope you do not take offense at my hesitancy to extend comparable support to the efforts of SIAI. For now, my focus will remain on the bioethics, neuroethics, and roboethics of proximate technology development, rather than on superlative states.