Friday, January 30, 2015

In the City all the livelong day. This morning I begin with discussion of Laurie Anderson on technology as the belly of the uncanny and Paul Miller/DJ Spooky on the racist montage/mix tape in which cops killing black men calls back Breton's surrealist reverie. After a break we dive into instrumental rationality via Arendt, Heidegger, and Morozov. That's nine to noon. From one to four, it's Fontenelle's weird racist pseudo-science on the ancients and moderns followed by Kant's existential freakout in History with a Cosmopolitan Purpose and WEB Du Bois stage-setting black skin white masks with Double Consciousness -- history as a problem for you versus you as a problem for history. After break, Oscar Wilde and paradox, property, and circling back to a subversive take on the ancients and moderns again. I'm sleepy, and tho' thankfully past my cold the effects linger on, eagerly awaiting incubation in the stuffed swaying BART train. Blogging today will be low to no.

Thursday, January 29, 2015

I've been teaching Paulina Borsook for all fifteen of the years she has been "absent" (and am teaching her again this term in my Digital Democracy, Digital Anti-Democracy seminar in the City) so these pieces are amplifying an abiding presence for me. Still, Geert Lovink (who introduces the pieces, and is somebody else I am teaching this term as usual) has provided a service and a pleasure with these. Her attitude toward libertechbrotarians and Google buses strongly suggests they should put Borsook in charge of ValleyWag to see if it can jump back the shark it jumped when they booted Biddle.

Google, a long-time supporter of Singularity University (SU), has agreed to a two-year, $3 million contribution to SU's flagship Graduate Studies Program (GSP). Google will become the program's title sponsor and ensure all successful direct applicants get the chance to attend free of charge. Held every summer, the GSP's driving goal is to positively impact the lives of a billion people in the next decade using exponential technologies. Participants spend a fast-paced ten weeks learning all they need to know for the final exam—a chance to develop and then pitch a world-changing business plan to a packed house.

"Exponential technologies" is a short hand for the false and facile narrative superlative futurologists spun from Moore's Law -- the observation in 1965 (the year I was born) that the number of transistors on an integrated circuit had been roughly doubling every two years, and the paraphrase of that observation into a law-like generalization that chip performance more or less doubles every two years -- into the faith-based proclamation that this processing power will inevitably eventuate in artificial intelligence, and soon thereafter a history shattering super-intelligence that will control self-replicating programmable nanoscale robots that will provide a magical superabundance on the cheap and deliver near immortality through prosthetic medical enhancement and the digital uploading of "informational soul-selves" into imperishable online paradises.

The arrival of superintelligent artificial intelligence is denominated "the Singularity" by these futurologists, a term drawn from the science fiction of Vernor Vinge, as are the general contours of this techno-transcendental narrative, taken up most famously by one-time inventor and now futurological "Thought Leader" Ray Kurzweil and a coterie of so-called tech multimillionaires like Peter Thiel, Elon Musk, and Jaan Tallinn, all looking to rationalize their good fortune in the irrational exuberance of the tech boom and secure their self-declared destinies as protagonists of post-human history by proselytizing and investing in transhumanist/singularitarian eugenic/digitopian ideology across the neoliberal institutional landscape at MIT, Stanford, Oxford, Google, and so on.

That most of these figures are skim-and-scam artists with little sense and too much money on their hands goes without saying, as does the obvious legibility of their "technoscientific" triumphalism as a conventional marketing strategy for commercial crap (get rich quick! anti-aging! sexy-sexy!) but amplified into a scarcely stealthed fulminating faith re-enacting the theological terms of an omni-predicated godhead delivering True Believers eternal life in absolute bliss with perfect knowledge. Not to put too fine a point on it, the serially-failed program of AI doesn't become more plausible by slapping "super" in front of the AI, especially when the same sociopathic body-loathing digi-spiritualizing assumptions remain in force among its adherents; exponential processing power checked by comparably ballooning cruft is on a road to nowhere like transcendence; and since a picture of you isn't you and cyberspace is buggy and noisy and brittle hoping to live there forever as an information spirit is pretty damned stupid even if you call yourself a soopergenius.

Since the super-intelligent and nanotechnological magicks on which techno-transcendentalists pin their real hopes are not remotely in evidence, these futurologists tend to hype the media and computational devices of the day, celebrating algorithmic mediation and Big Data framing and kludgy gaming virtualities like Oculus Rift and surveillance media like the failed Google Glass and venture capitalist "disruption" like airbnb and uber. That this is the world of hyping toxic wage-slave manufactured landfill-destined consumer crap and reactionary plutocratic wealth concentration via the looting and deregulation of public and common goods coupled with ever-amplifying targeted marketing harassment and corporate-military surveillance should give the reader some pause when contemplating the significance of declarations like "GSP's driving goal is to positively impact the lives of a billion people in the next decade using exponential technologies."

The press release suavely reassures us that "Google is, of course, no stranger to moon shot thinking and the value of world-shaking projects." I think it is enormously important to pause and think a bit about what that "of course" is drawing on and standing for. It should be noted what "moon shot thinking" amounts to in a world that hasn't witnessed a moonshot in generations. There are questions to ask, after all, about Google's "world-shaking projects" advertorially curating all available knowledge in the service of parochial profit-taking, all the while handwaving about vaporware like immortality meds and driverless car-culture and geo-engineering greenwash. There are questions to ask about the techno-utopian future brought about by a "grad school" at a "university" for which "the final exam" is "a chance to develop and then pitch a world-changing business plan to a packed house." I will leave delineating the dreary dystopian details to the reader.

Tuesday, January 27, 2015

Whenever the profits of plutocrats are purchased by the precarity of the people, money is always speech and speech is always moneyed. Securing a basic income guarantee would create conditions under which for the first time it would be possible for money NOT to be speech.

Monday, January 26, 2015

Robot Cultist John G. Messerly hilariously declares religion doomed in a palpably religious techno-transcendental screed published over at hplus magazine (h+ stands for "humanity-plus" for all you merely human mehum humanity-minuses out there, in case you were wondering).

I am an atheist myself, but I tend to be rather cheerfully nonjudgmental about the issue. Both my sense of the force of aesthetic sublimity and my awareness of the demands of a faith in democracy connect with at least some of what religiosity can be for at least some of the variously religious, and so I do not find it so easy to dismiss all faith as such as many other atheists seem to do -- at least many of the atheists who seem to be getting a lot of media attention lately.

That said, I do find it easy to condemn efforts to politicize organized religiosity and militarize moralizing. Since this kind of militant evangelism threatens the differently religious quite as much as it threatens the non-religious, the majority of my allies on this issue are religious as I am not, exactly as you would expect if you are thinking sensibly about the question. As a champion of scientific discovery and practical problem-solving, I also condemn efforts to substitute fundamentalist articles of faith for warranted scientific beliefs where matters of prediction and control are concerned. And again, plenty of theologians are as concerned about the distortion of faith when it is misapplied to instrumental concerns as they, and I, are about the distortion of science by these misapplications.

What is extraordinary about the transhumanoids, singularitarians, techno-immortalists, digi-utopians, nano-cornucopiasts of techno-transcendental futurology is how readily they peddle what are palpable wish-fulfillment fantasies, eschatological narratives, end-time narratives (often quite obviously citing theological and mythological archives for conceits, images, frames as they proceed) as if they were actually thought-experiments, scientific hypotheses, or responsible public policy proposals. I have to say that the prevalence of deceptive and hyperbolic advertising discourse and parochially extrapolative blue-skying across the neoliberal public and institutional terrain has unquestionably enabled futurologists to get away with this con artistry, indeed futurology is something like a reductio ad absurdum of neoliberal assumptions and aspirations, norms and forms, exposing the dangerously delusive, uncritically symptomatic, unsustainably infantile faith-based initiative of extractive-industrial complacent-conformist-consumerist corporate-militarism.

To say the least, of course, there simply is nothing to commend digi-topian cyberangel uploads in Holodeck Heaven over more conventionally religious conceptions of an afterlife. There is nothing to commend superintelligent post-biological AI over more conventionally religious omniscient sky-daddies. There is nothing to commend eugenic talk of enhancement and optimality over religious zealots declaring non-normative morphologies signs of Satan demanding consignment to the flames. There is nothing to commend the body-loathing of futurists pining to be digitized out of mortality or virtualized out of gravity or bio-enhanced out of vulnerability or artificially super-intellectualized out of error over the religious death-cults that inspire forced pregnancy zealotry and private arsenals and the inevitably mistaken execution of some of their fellow citizens and belligerent war posturing and the neglect of poor, cold, ill, hungry, homeless fellow-citizens in the name of a "culture of life." There is nothing to commend nanobotic superabundance over genies-in-a-bottle or prosperity theology.

And there is nothing to commend robocultic wish-fulfillment pseudo-science over religious creationist anti-science. Declaring this pseudo-science a championing of science merely adds insult to injury.

Since 2012, many concerns have been raised by faculty, students, staff, alumni, and supporters about a lack of transparency and accountability in the key decision making processes undertaken by the administration of the San Francisco Art Institute. The quite sudden recent suspension of the Urban Studies program seems to be yet another stunning confirmation of these worries.

To say that this decision has been more opaque than transparent is an understatement: Few of the actual stakeholders to such a momentous change at SFAI were aware that it was even under consideration until a public e-mail announced that the decision and the process justifying it had already concluded.

A review of the program is said to have taken place in the Spring of 2014, but few faculty or students involved in the actual program seem to have participated in either the review or the decision-making process. The timing of this review coincided with the absence of key figures in and champions of the program, and the results of the process are at odds with the results of other recent and ongoing review processes that involved wider participation of the relevant stakeholders.

While program changes ultimately depend on the Dean, they are supposed to be addressed in the Faculty Senate. The Urban Studies program was not suspended after a discussion or with a consenting vote of the Faculty Senate.

Dean Schreiber recently commented that Urban Studies does not offer a “robust experience” for students. In the absence of a definition of terms this judgment is hard to gauge, but it is difficult to understand how the accompanying proposal of a BFA with an urban focus more centered on studio classes than interdisciplinary courses could possibly be more robust on any definition. Re-assigning our institutional engagement with urban questions to studio classes will inevitably introduce fissures in the theoretical formulation of the urbanity in question. This envisioned change also counters the current trend of expanding Urban Studies Master’s programs across the country’s academic institutions.

Indeed, a host of courses directly taking up historical and theoretical questions of post-colonial pan-urbanity, environmental sustainability, urban poverty, street protests, and immigrant communities―many of them focusing on urban movements in San Francisco in particular and taught for years by activists and participants in the movements themselves, such as Laura Fantone and the renowned local historian Chris Carlsson―are vanishing from the 2015 curriculum. The loss of these engagements and the silence of these voices wound our Institute. Far from a minor shift in focus, these changes can only be understood as a radical dis-engagement with the urban as a real priority at SFAI.

As recently as SFAI’s Strategic Plan for 2013, the President and the Board of Trustees declared that “SFAI will strive to further improve its operations and heighten its ambitions in the service of art, artists, and the Bay Area community.” The decision to suspend Urban Studies contradicts SFAI’s long-held commitment to connect our students to the City and the greater Bay Area artistic community of which we are a part.

The San Francisco Art Institute is a school sited at the thriving heart of a world-historical city. We live and teach and create and connect in the midst of the urgent distress of artist and gallery evictions, in the scrum of venture capitalism’s “disruptions” of public goods and public services, in the face of the Silicon Valley steamroller of reductive tech-talk, in the creative ferment of street protest, all right here, all right now. In such a place and at such a time, at the very moment when other art schools and art programs are taking up the urban with renewed energy and vigor as an indispensable motor of convivial creativity and transformative imagination, it is difficult indeed to understand what considerations have driven this rash decision to suspend an already established and accomplished program here.

Friday, January 23, 2015

Teaching term has begun again, and I have two lectures on Fridays, first my undergraduate seminar on digital democracy and anti-democracy, from roughly nine to noon, then my graduate survey of critical theory from roughly one to four. I'm lecturing on my feet the full three hours for each, without much in the way of discussion actually, so today is more or less back to back openings for what will always be a long double bill. Flanked by bus and train commutes right around rush hour it's looking like a long slog. Once upon a time I would begin my courses with a gentle overview of the syllabus and policies and an early leave-taking, but since I just have one bite of the apple each week these openings look to be leaps right into the deep end of the pool, maps of conceptual and argumentative terrain that will take up the whole time. This morning, technology and democracy as sites of contestation, and the traffic between material and immaterial in tech-talk, this afternoon, critical theory as post-philosophy, social, cultural, exegetical traditions, fact, figure, fetish, etymological fantasias, much more. So, you should expect blogging to be low to no today and maybe tomorrow if I'm still in recovery/hungover.

This course will try to make sense of the impacts of technological change on public life. We will focus our attention on the ongoing transformation of the public sphere from mass-mediated into peer-to-peer networked. Cyberspace isn't a spirit realm. It belches coal smoke. It is accessed on landfill-destined toxic devices made by wretched wage slaves. It has abetted financial fraud and theft around the world. All too often, its purported "openness" and "freedom" have turned out to be personalized marketing harassment, panoptic surveillance, zero comments, and heat signatures for drone targeting software. We will study the history of modern media formations and transformations, considering the role of media critique from the perspective of several different social struggles in the last era of broadcast media, before fixing our attention on the claims being made by media theorists, digital humanities scholars, and activists in our own technoscientific moment.

Provisional Schedule of Meetings

Week One, January 23: What Are We Talking About When We Talk About "Technology" and "Democracy"?

"The philosophers hitherto have only interpreted the world, but the point is to change it."--Karl Marx.

This course is a chronological and thematic survey of key texts in critical and cultural theory. A skirmish in the long rivalry of philosophy and rhetoric yielded a turn in Marx, Nietzsche, and Freud into the post-philosophical discourse of critical theory. In the aftermath of world war, critical theory took a biopolitical turn in Arendt, Fanon, and Foucault -- a turn still reverberating in work on socially legible bodies by writers like Haraway, Spivak, Butler, and Gilroy. And with the rise of the neoliberal precariat and climate catastrophe, critical theory is now turning again in STS (science and technology studies) and EJC (environmental justice critique) to articulate the problems and promises of an emerging planetarity. Theories of the fetish define the turn of the three threshold figures of critical theory -- Marx, Nietzsche, and Freud (commodity, sexuality, and ressentimentality) -- and fetishisms ramify thereafter in critical accounts from Benjamin (aura), Adorno (culture industry), Barthes (myth), Debord (spectacle), Klein (logo), and Harvey (tech) to Mulvey and Hall (the sexed and raced gaze).

Contextualizing Contemporary Critical Theory: The inaugural Platonic repudiation of rhetoric and poetry, Vita Activa/Vita Contemplativa, Marx's last Thesis on Feuerbach, Kantian Critique, the Frankfurt School, Exegetical and Hermeneutic Traditions, Literary and Cultural Theory from the Restoration period through New Criticism, from Philosophy to Post-Philosophy: Marx, Nietzsche, Freud; the postwar biopolitical turn in Arendt, Fanon, and Foucault; and the emerging post-colonial, post-international, post-global planetarity of theory in an epoch of digital networked media formations and anthropogenic climate catastrophe.

Wednesday, January 21, 2015

Horrifying Kickstarter Staff Pick. The mad monotonous piano scales in the background of the pitch perfectly capture the derangement produced by incessant insipid vapid algorithmic chatter. Extra points are due to the marketing jiujitsu of turning "hands-off" help into a feature. Of course it's "hands off": It's a goddamn lobotomized Disney Princess cartoon interrupting you with autocorrect suggestions wherever you go.

Some of our bedrock sectors, like our auto industry, are booming. But there are also millions of Americans who work in jobs that didn't even exist ten or twenty years ago -- jobs at companies like Google, and eBay, and Tesla. So no one knows for certain which industries will generate the jobs of the future.

I'm no fan of America's ruinous and idiotic car culture -- which arose out of the postwar futurological cheerleading of "The Greatest Generation" -- but comparing the titans of Fordist manufacturing with SillyCon Valley's celebrity-CEOs and techbro VCs is patently ridiculous. It is notoriously the case that firms in the IT sector with market capitalization comparable to large retailers or manufacturing companies employ a fraction as many people as these traditional sectors do.

About those tech giants name-checked as exemplars on whom the President means to pin our jobs future? Well, Google employs between 37,000 and 52,000 people; eBay employs about 32,000 people; and Tesla Motors employs about 6,000 people. That's far from the kind of stunning employment contribution these enterprises were made to symbolize in tonight's State of the Union.

According to the Bureau of Labor Statistics, the Construction and Manufacturing sectors account for over 12% of US jobs while the Information sector accounts for under 2%. And this is despite the recent decline in manufacturing, which has resulted from race-to-the-bottom trade policies rather than some irresistible digital destiny in any case, and hence could be reversed should our policies come to reflect fairness and sustainability priorities as they should on Obama's own terms.

It seems a bit odd, I must say, the way the speech corralled Tesla with Google and eBay, since elsewhere Obama's speech (in the snippet quoted above, for example) takes pains to distinguish "new" IT from "old" manufacturing. I guess it makes a difference when the auto manufacturer is making marginal publicity-hogging boutique-green electro-Edsels. All that hype just has the zing of new now next! Indeed, what all these companies actually share most of all is the techno-transcendental coloration imbued by our own generation's futurological flim-flam operators, peddling digitality and AI and cartoon-tech like Musk's LEO amusement park rides and Hyperloop stunt.

Even Obama's much-anticipated and discussed proposal to make two years of community college much more widely available was freighted with futurological framing. While I am heartened by any commitment to a real public investment in our capacity for collective problem-solving, I was disheartened again to find this proposal unexpectedly framed in the speech as a way to "train workers to fill high-paying jobs like coding... and robotics." As if coders and roboticists can overcome jobs lost to downsizing and outsourcing and financialization -- downsizing, outsourcing, and financialization indispensably enabled and abetted by, that's right, coders and automation!

And although I strongly favor the President's call for public investment in a faster and more open internet -- I must say that for one thing I am far from assured that the President's panoptic sorts comport with a sense of openness worthy of the name; and for another thing I am well aware that the reason Europe has an incomparably faster and cheaper and more reliable internet than Americans do right now has everything to do with regulations and nothing to do with "the digital innovators and entrepreneurs [who] keep reshaping our world" to whom Obama genuflected in his speech. I have a song in my heart for fact-gathering social workers and labor economists with clipboards like good Democrats are supposed to do, but the upward-failing skim-and-scam operators of the "new economy" Obama praised over and over again in the big speech tonight -- so many of whom slurp up government cash while crowing about their libertechbrotarian cyborg-individualism and hostility to Big Government -- just make me want to ralph.

Like the Clinton and Gore embrace of the irrational exuberance of the fin de siecle dot.bomb, Obama's embrace of digi- nano- AI- nonsense reveals the profound susceptibility of the partisan Democratic left to assimilating reactionary politics through uncritical "technology" discourses that rationalize corporate-military budgetary priorities and conduce to mass consumer-complacency and circumventions of democratic deliberation by self-appointed technocratic and designer elites. It is enormously important that the Democratic Party has embraced macroeconomic literacy, climate science, Darwinian evolution, public healthcare, safer sex education, medical research, renewable infrastructure spending, fact-based harm-reduction policy-making, and so on against the outrageous anti-intellectualism and science-denialism of today's GOP. But these Democratic commitments must be informed and not simply fetishistic.

I am a champion of real public space programs for discovery and research toward the public good -- which is why I refuse to celebrate the displacement of this vision by the Vegas dreams of for-profit space hucksters foisting low-earth orbit planes and orbital love motels on us while promising to colonize the solar system and mine the asteroids in an imperial gold-rush get-rich-quick future re-run of manifest destiny. I am a champion of real public investment in renewable, resilient energy, communication, and transportation infrastructure and of real investment in medical research and access -- which is why I refuse to celebrate the displacement of this vision by greenwashing geo-engineers or hucksters of enhancement and longevity moonshine for superannuated boy-band Boomers.

Democrats have to take care not to fall for pseudo-science or for reactionary policies with a "tech" patina: like MOOCifying education "reformers," like budget hawks who pretend miracle medicine justifies raising the retirement age, like suave Big Data miners and masseurs treated more and more like wizards in electoral and marketing campaigns (which are becoming less and less distinguishable) at the expense of substance, like drone cheerleaders who want to make illegal war and assassinations on the cheap while we sleep, like venture-capitalist "disruptors" peddling the usual right-wing de-regulation, looting of common goods, and valentines to makers-vs-takers wealth-concentration.

Look, I enjoyed the President's attitude and ad libs as much as the next guy. There were edifying passages on fairness and sustainability and diplomacy (most of them contradicted at other points in the speech not to mention by reality). It wasn't a terrible speech, and it had the benefit of being pretty forgettable. As an opening move in the long campaign to put Hillary Clinton with an Elizabeth Warren inflection into the White House the speech wasn't half bad. But as somebody who takes progressive technoscience seriously, I must say the whole speech was stained by a futurology that has no future if we are to have any. Hell, by the end I felt it was a mercy we weren't subjected to a paragraph on 3D-printing delivering post-scarcity and the Internet of Things!

Tuesday, January 20, 2015

"Startup CEO" David Levine has the sadz for the recent fall from grace of the awkward gawky panoptic Google Glass. He declares himself "perturbed and puzzled" by the glee with which glassholery's fail was greeted across the internet and "puzzled" again by the claim in industry rag CNET that the withdrawal of the product reflects the recognition by Google that few people seem to want the thing.

I'm puzzled that anybody is puzzled, but then I was never tempted to believe, as Levine did and apparently still does, that "Google Glass was literally the beginning of a revolution not just in the wearables sector but mobile as a whole. The concept was big, bold and brash and captured the imagination of the entire industry."

Of course, no crappy mobile device is "literally... a revolution." Gosh, how I love the crackerjack madness of that "literally."

The futurological derangement of reasonable assessment refuses to consider the actual costs, risks, and benefits of an artifact to the diversity of its stakeholders and the diversity of their wants in the diversity of their situations. The futurological at once re-frames technodevelopmental change from a site of stakeholder struggle into a series of stepping stones aspiring toward The Future, and re-directs technodevelopmental reflection from stakeholder deliberation to a consumer fandom providing escapism and promising transcendence.

Nobody ever wore the distracting, straining, uncomfortable, alienating Google Glass because it was useful but only because it enabled a kind of futuristic cosplay in denial about what it was doing while invested in a vision of The Future as drab as it is dystopian.

As a True Believer, Levine has learned from the failure of Glass nothing but that it will eventually triumph, naturellement. Zombie eyes and zombie lies forevs! The inevitable next iteration of the Revolution will, he assures us, be fashionable and respectful of privacy -- or, wink wink nudge nudge, much more "unobtrusive" about what it is and what it is doing.

Monday, January 19, 2015

Stop feeling vindicated, hopeful, or smug about recent Fox News apologies for their surreally idiotic and bigoted reports of "No Go Zones" in Europe. The apologies testify to their recognition that they have opened themselves up to powerful attacks and may divide Republican presidential hopefuls threading the bigoted Base needle in costly ways early on that could stick in the general election -- and by accepting their apology (or letting the matter drop now that they have made it, which amounts to the same thing) we collaborate in ensuring not only that the damage they have done remains in force but also that they don't have to pay for it. Their bigot meme is out there now, and everybody's racist uncle will keep the zombie No Go Zones alive and eating brains from here to eternity. The only way to kill the meme is to subvert it with another meme that redirects the politics elsewhere. "Fox News is the Ultimate No Go Zone" is the meme-disruptor that occurred to me after one second's thought. Since "No Go Zones" seem to amount to ghettoized underserved communities, calling them "Now Go Help Zones" or something like that might be another approach to the rhetoric playing out here. Other proposals welcome.

I agree with those who argue faithful convictions that are not scientifically warranted should not be taught in science classrooms or form the basis of public policy seeking accountable harm-reduction outcomes. I daresay a majority of the people who share my conviction on this score are actually people of some sort of faith or other, even if it also seems obviously true that the vast majority of people who disagree with me are religious fundamentalists.

It actually matters that while science education and public policy should be warranted by scientific criteria, faithful beliefs that aren't about facilitating prediction or control but about finding one's way to personal legibility, sublimity, or hope, say, need not be warranted by scientific criteria -- and that their failure to be so warranted is not grounds for refuting them once and for all.

Consistency neither recommends faith -- or any particular faith among the many competing faiths on offer -- nor provides a ground for rejecting faith out of hand. There are other grounds -- taste, tradition, the vicissitudes of history or personal experience -- that may do so for some (full disclosure: me included), but they seem to me mostly rather personal.

Certainly I disapprove of faith communities that re-write politics in the image of imperial moralizing or science in the image of subcultural signaling -- but I disapprove of science advocacy that would reduce aesthetic judgment, moral community, or political reconciliation to its terms, for mostly the same reasons.

I'm an atheist -- that is to say I've been a-theist, without god(s), and cheerfully so, for more than thirty years by now -- but the force of my experiences of aesthetic sublimity and of my faith in democratic progress toward equity-in-diversity readily connects me with many who are religious. Again, I'm an atheist, but when atheist advocacy demands scientism or denigrates multiculture or provides a vehicle for racist, sexist or plutocratic reaction I honestly can't say that I feel the remotest connection to those who claim to champion an atheism I share.

Saturday, January 17, 2015

Mehumans and soon-to-be-uplifted Great Apes and Cetacea, rejoice! Randian archetype and scam artist Elon Musk has managed to keep the non-story about keeping the world safe from non-existing Robot Gods alive for yet one more day by promising to devote ten million dollars to "run a global research program aimed at keeping AI beneficial to humanity." Geez, how much money does it take to rent a room for a libertechbrotarian circle jerk, anyway? I do hope these men of Science! and also Ethics! didn't let Elon get away with paying in digital muskcoin. "AI leaders" (what on earth could that possibly mean?) declared the prospect of getting millions of dollars "wonderful" but added that they "deserve much more funding than even this donation will provide." So, keep that collection plate nice and full if you hope to get uploaded as cyberangels in Holodeck Heaven one day, Robot Cultists! "I love technology, because it's what's made 2015 better than the stone age", said MIT professor and Future of Life Institute president Max Tegmark, I guess because maybe he got drunk when he heard about the donation? (That's really a quote, and from his own press release, it's right there if you click the link.) With thought leadership like that at the helm, who can doubt acceleration will keep accelerating, but, you know, ethically and stuff, into The Future of Robots we all want because we're not robots?

The first post arrived last Friday: "Stephen Hawking sees the danger of artificial intelligence. So does Elon Musk. Oxford professor Nick Bostrom, head of the Future of Humanity Institute, has written a whole book about it. Even the scientists at Google DeepMind, who are developing artificial intelligence, seem a little spooked about it... Since it's a new year, and since it's the weekend, why don't we ponder the possibility that sometime in the (relatively speaking) not-too-distant future, our miserable species will vanish."

"The danger." Is there one? "The Institute." Is it Very Serious? "A whole book." Is it worth reading? "The scientists." Scientists, are they? "Are developing." ARE they now?

A fine critique of the arguments and of the cultists making them appeared soon after (click the link for all of it, I've merely excerpted it), written by a reader who also contributes excellent comments in the Moot here, "Esebian":

I'm sorry, but you made this blog jump the fucking shark. Valleywag was always about exposing the fraud, the self-aggrandizement and the damages done by SillyCon Valley, but right now you sing along to their marketing department's tune. Because that's exactly what sooper-intelligence and "godlike AIs" really are, publicity stunts. It comes in two flavors: A) rehashed religious paradise fantasies about the New Computer God creating utopia with nanomachines fulfilling all material needs and building sooper-great robot bodies for us to "upload" into and become immortal or B) doom 'n' gloom stories about Skynet nuking the puny hoo-mans. Both of them are supremely masturbatory and completely divorced from reality... The integrity of this blog is at stake! You're playing right into the hands of techbro scam artists and crackpot cultists. Don't let this site be dragged into the transhumanist/Singularitarian mire.

This comment and a couple more exposed the reductive, hyperbolic pseudo-scientific nonsense on which this super-AI fearmongering is premised and seemed to receive fairly supportive responses from Lyons. Fine, then, thought I.

And then three days ago Lyons posted another bite at the super-AI poisoned apple, and although it snarked about some Hollywood types we would be foolish to take seriously on this issue (as apparently against Robot Cultists who are taken quite seriously on this issue instead) the substance of the case for the Very Serious worry about the "existential risk" of super-AI was presented at great length and with little substantial qualification.

Incredibly, a third post on the topic appeared today. The full text: "The battle: Elon Musk versus killer robot with AI brain. On one side, enormous intelligence combined with a completely ruthless, amoral worldview and limitless resources. On the other side, a robot. Who will win?" Of course, one cannot help but provide the action-flick trailer voiceover -- "In a world of killer robots, Iron Man Elon Musk girds his loins for battle!" -- but it intrigues me that behind the implied snark this is essentially nothing but a recapitulation of the straight robocultic line.

As "Esebian" noted, ValleyWag offers quite a lot of critique of tech hyperbole and vacuity as well as documenting the atrocities of celebrity-CEO sociopathy and tech-guru weirdness and vapidity, all of which are only too readily applicable to the topic at hand. This would not appear to be the spirit in which these pieces are being offered up, however, and the example of sister-site io9's regular capture by transhumanoid proselytizing (Dumb Dvorsky, I'm looking at you!) at the expense of the substantive sfnal literary/cultural critique it does pretty well otherwise gives me reason to worry.

Techno-transcendentalism is the reductio ad absurdum of reactionary corporate-military digi-utopian fraud and plutocratic skim-and-scam -- it amplifies the status quo while pretending to be a radical critique. One hopes the critics and contrarians of ValleyWag retain the sense and standards to grasp the difference, else they will become a promotional rather than satirical response to SillyCon VC sub(cult)ure. If it helps, wags, do read this.

People who flutter their hands over the "existential risk" of the theoretically impoverished, serially failed project of good old-fashioned artificial intelligence (GOFAI) or its techno-transcendental amplification into a post-biological super-intelligent Robot God (GOD-AI) think they are worried about a thing. They think they are experts who know stuff about a thing that they are calling "AI." They can get in quite a lather arguing over the technical properties and sociopolitical entailments of this thing with just about anybody who will let them.

But their "AI" does not exist. Their "AI" does not have properties. Their "AI" is not on the way.

Their "AI" is a bunch of fancies bounded by stipulations. Their "AI" stands in the loosest relation to the substance of real code and real networks and their real problems and real people doing real work on them here and now.

"AI" is a discourse, and it serves a primarily ideological function: It creates a frame -- populated with typical conceits, mobilizing customary narratives -- through which real problems and complex phenomena are being miscomprehended by technoscientific illiterates, acquiescent consumers, and wish-fulfillment fantasists. Ultimately, the assumptions and aspirations investing this frame have to do with the promotion and advertizing of commodities, software packages, media devices and the resumes of tech-talkers. At their extremity, these assumptions and aspirations mobilize and substantiate the True Belief of techno-transcendentalists given over to symptomatic fears of mortality, vulnerability, contingency, error, lack of control, but it is worth noting that the appeal to these irrational fears and passions merely amplifies (in a kind of living reductio ad absurdum) the drives consumer advertizing and venture-capitalist self-promotion always cater to anyway.

Actually-existing biologically-incarnated consciousness, intelligence, and personhood look little like the feedback mechanisms of early cyberneticists and less still like the computational conceits of later neurocomputationalists. Bruce Sterling said nothing but the obvious when he pointed out that the brain is more like a gland than a computer. Living people don't look any more like the Bayesian calculators of alienated robocultic sociopaths than they look like the monomaniacal maximizers of political economy's no less sociopathic homo economicus.

So, of course, "The Forbin Project" and "War Games" and "The Terminator" and "The Lawnmower Man" and "The Matrix" are movies -- everybody knows that! Of course, our computers are not going to reach critical mass and "wake up" one day, any more than our complex and dynamic biosphere will do. Moore's Law is not spontaneously going to spit out a Robot God any more than an accumulating pile of abacuses would -- not least due to Jaron Lanier's corollary to Moore's Law: "As processors become faster and memory becomes cheaper, software becomes correspondingly slower and more bloated, using up all available resources."

Again, everybody knows all that. But can everybody be expected to talk or act like people who know these things? Sometimes, the exposure of the motives and hyperbole and deception of AI ideology will lead its advocates and enthusiasts to concessions but not to the relinquishment of the ideology itself. Even if we do not need to worry about making Hal our pal, even if AI will not assume the guise of a history-shattering super-parental Robot God... what if, they wonder, somebody codes some mindless mechanism that is satanic by accident or in the aggregate, like a vast robo-runaway bulldozer scraping the earth of its biological infestation, a software glitch that releases an ubergoo waveform transforming the solar system into computronium for crunching out pi for all eternity?

The arrant silliness of such concerns is exposed the moment one grasps that security breaches, brittle code, unfriendly interfaces, mindless algorithms resulting in catastrophic (and probably criminal) public decisions are all happening already, right now. There are people working on these problems, right now. The pet figures and formulations, the personifications, moralisms, reductions and triumphalisms of AI discourse introduce nothing illuminating or new into these efforts. If anything, AI discourse encourages its adherents to assess these developments not in terms of their actual costs, risks, and benefits to the diversity of their actual stakeholders, but to misread them as stepping stones along the road to The Future AI, signs and portents in which is glimpsed the imminence of The Future AI, thus distracting from the present reality of problems to the imagined future into which symptomatic fears and fancies are projected.

So, too, sometimes the exposure of the irrational True Belief of adherents of AI-ideology and the crass self-promotion and parochial profit-taking of its prevalent application in consumer advertizing and the pop-tech journalism will lead its advocates and enthusiasts to different concessions. Sure, it turns out that Peter Thiel and Elon Musk are hucksters who pulled insanely lucrative skim-and-scam operations over on technoscientific illiterates and now want to consolidate and justify their positions by promoting themselves as epochal protagonists of history. And, sure, Ray Kurzweil and Eliezer Yudkowsky are guru-wannabes spouting a lot of pseudo-scientific pseudo-philosophical pseudo-theological nonsense while looking for the next flock to fleece. But what if there are real scientists and entrepreneurs and experts somewhere doing real coding and risking real dangers in their corporate-military labs, quietly lost in their equations, unaware that they are coding the lightning that will convulse the internet corpse into avid Frankensteinian life?

Of course, the very robocultic nonsense disdained in such recognitions has found its way to the respectability and moneybags of Google, DARPA, Oxford, Stanford, MIT. And so, to imagine some deeper institutional strata where the really serious techno-transcendental engines are stoked actually takes us into conspiratorial territory rather quickly. Indeed, this fancy is a mirror image of the very pining one hears from frustrated Robot Cultists who know all too well in their heart of hearts that nobody is out there materializing their daydreams/nightmares for them, and so one hears time and time again the siren call for separatist enclaves, from taking over tropical islands or building offshore pirate utopias on oil rigs to huddling bubbled under the sea or taking a buckytube space elevator to their private L5 torus or high-tailing it out to their nanobotically furnished treasure cave -slash- mad scientist lab in the asteroid belt to do some serious cosmological engineering.

Again, it is utterly wrong-headed to think there are serious technical types working on "AI" -- because there is nothing for them to be working on. Again, "AI" is just a metaphorization and narrative device that enables some folks to organize all sorts of complex technical and political developments into something that feels like sense but is much more about wishes than working. The people solving real problems with code and technique and policy aren't doing "AI" and to read what they are doing through AI discourse is fatally to misread them. It is only a prior investment in the assumptions and aspirations, figures and frames of AI discourse that would lead anybody to think one should relinquish the scrum of real-world problem solving and ascend instead to some abstract ideality the better to formulate a "roadmap" with which to retroactively imbue technoscientific vicissitudes with Manifest Destiny or to treat as "the real problem" the non-problem of crafting humanist Asimovian injunctions to constrain imaginary robots from imaginary conflicts they cause in speculative fictions.

You don't have to worry about things nobody is working on. You shouldn't pin your hopes or your fears on pseudo-philosophical fancies or pseudo-scientific follies. You don't have to ban things that don't and won't exist anyway, at any rate not in the forms techno-transcendentalists are invested in. There are real things to worry about, among them real problems of security, resilience, user-friendliness, interoperability, surveillance. "AI" talk won't help you there. That should tell you right away it works instead to help you lose your way.

Wednesday, January 14, 2015

The idea of a ban on "existentially-risky" artificial intelligence -- a term which is concerned with quite a lot of stuff that isn't or wouldn't be intelligent -- is very much in the news right now (or what passes for news in the illiterate advertorial pop-tech press) due to a recent Open Letter from the Future of Life Institute -- an "institute" which is concerned with quite a lot of stuff that isn't or wouldn't be alive. This Letter happens to be getting a lot of signatures from celebrities and celebrity CEOs, but also some computer scientists who are no more expert than you or me or Alan Alda (who has signed the Letter) when it comes to wading into the philosophy of consciousness or personhood at issue.

Actually, many of the signatories to the Letter are outright boosters, one might even say dead-enders, for the serially failed project of good old-fashioned artificial intelligence (GOFAI), and while much of the public discussion of AI/superAI in these circles is framed in terms of bans, the Letter itself indulges in loose talk of "responsible oversight" of AI. Mostly, this seems to me to involve giving more money and more attention to the people who still take GOFAI seriously. The key folks behind the Letter are techno-transcendentalists explicitly associated with transhumanist and singularitarian and techno-immortalist movements and sub(cult)ures, and it is interesting how rarely even those ridiculing the Letter point out this fact (you will find Nick Bostrom, George Dvorsky, Ben Goertzel, Elon Musk, Jaan Tallinn, Eliezer Yudkowsky all over my Superlative Summary). Would commenters be so reticent to notice were all these figures Raelians or Scientologists?

It is a bit demoralizing to find that the public debate on this topic seems to be settling into one between those who say something on the order of "well, some of these extreme arguments seem a bit crazy, but this problem needs to be taken seriously" versus those who ridicule the debate by joking "I for one welcome our robot overlords" and then declaring that when the Robot God arrives we don't stand a chance. In other words, every position concedes the validity of the topic and its essential terms while at once pretending to step back from it. These gestures essentially concede the field to the futurologists and invigorate the legibility of their AI discourse and hence the profitability of the marketing agenda of the tech companies that deploy it, which is the only victory they want or need in any case.

Now, I for one think that there is no need to ban AI/super-AI because our present ignorance and ineptitude form barriers to its construction incomparably more effective than any ban could do. We lack even the most basic understanding of so many of the phenomena associated with actually-existing biologically-incarnated consciousness, affect, and identity, while our glib attributions of intelligence and personhood to inert objects and energetic mechanisms all attest to the radical poverty of our grasp however marvelous our reach. We don't need to get the problem of the Robot God off the table, because there is no Robot God at the table nor will there be any time soon.

I daresay all this need not be the case forever, after all. Perhaps human civilization will one day confront the danger of AI/super-AI, but that day is not soon -- and those who say otherwise seem to me mostly to be laymen in the field of computer science making claims about the state of the art for which they are unqualified, or computer scientists making philosophical arguments in ways that reveal little philosophical rigor or historical awareness.

There is no reason to think that a sensible assessment of the state of the art in computer programming here and now would undermine reassessment in the future should our models and techniques improve. Indeed, there is every reason to think, to the contrary, that premature concern from our limited perspective will introduce false formulations and figures the legacy of which might interfere with sensible deliberation later when it is actually relevant.

To repeat: I think it is extremely premature to deliberate here and now over banning or regulating AI/superAI that neither exists nor is soon to exist; and, if anything, to do so is more likely to undermine the terms of such deliberation should it eventually become necessary. My critique does not end there, however, since this utterly unnecessary, premature, and possibly eventually damaging AI/superAI deliberation is happening nonetheless, seems to be attracting greater attention, and so has real effects in the world even without any justification on its own terms or real objects of concern.

This takes me to a critical proposal at a different level: namely, that the time and money and the conferral of authority on "experts" devoted to the "existential risk" of unregulated/unfriendly AI/superAI functions to divert resources and attention from actual problems and actually relevant experts, and indeed is sometimes mobilized precisely to trivialize urgently real problems (as the increasingly influential Nick Bostrom's worries about AI are directly connected to a rejection of the scope of anthropogenic climate change as a public problem, for example).

Returning to the Letter's recommendation of "responsible oversight," consider this paradoxical result: nobody can deny that there are incredible problems and enormous risks associated with the insecurity of networked computers, with the user-unfriendliness of programs, and with the dangerous political consequences of substituting algorithms for judgments about human lives. Such questions are usually not the focus of the futurological discourse of AI/superAI, and usually serve at best as dispensable pretexts or springboards for heated "technical" discussion debating the Robot God odds of robocalypse or roborapture. Indeed, it is one of the more flabbergasting consequences of AI/superAI discourse that it not only distracts from actually real problems of computation, but becomes a distortive lens of false and facile personifying figures and moralizing frames that confuse the relevant terms and stand in the way of deliberation over the problems at hand.

Indeed, if AI/superAI eventually does become a matter of real concern in anything remotely like the terms that preoccupy futurologists, I would say we will be better prepared to cope with it through ongoing and gathering practical experience with actual coding problems as they actually exist than by ignoring reality and instead imagining idealized future machines from our present, parochial, symptomatic perspective.

The primary impact of AI/superAI discourse as it ramifies in the public imaginary has been instead to denigrate human intelligence as it actually exists: calling cars "smart" to sell stupid unsustainable car-culture to consumers, calling credit cards "smart" to seduce people into ubiquitous surveillance the better to harass them with targeted ads, and to rationalize crappy "AI" software like autocorrect and crappy computer-mediated "smart" analyses like word-clouds, and crappy "decision" algorithms to determine who gets to start a business or who gets to be extrajudicially murdered by a drone as a potential "terrorist." As always, talk of artificial intelligence yields artificial imbecillence above all.

AI discourse in its prevalent forms solves no real problems, is not equipped to deal with eventual problems, and functions in the present to cause catastrophic problems. It seems to be of use primarily as a way to promote crappy computation for short-term plutocratic profit.

It is no surprise that this shortsightedness is what futurologists and tech-talkers would peddle as "foresight."

I find it impossible to believe that Mitt Romney is really making a third bid for the presidency, and assume this is some sort of gross rich white dude pissing contest with Bush over who is the still-is and who is the has-been in the Greedy Olds Party or whatever, but I honestly found it pretty impossible to believe he was making the second bid he was actually making when he was doing that either, so who knows? Certainly I never thought there would be any excuse at all to remind anybody of my fable, The Artificial Man the Killer Clowns Made and the Mouse Child Who Said What She Saw, but here we are and here it is.

I'm a lacto-ovo vegetarian now, but obviously in The Future will be a digi-nano vegetarian...

Salon has alerted me to the existence of a new SillyCon Valley startup, Project Nourished, which hopes to use synesthetic cues from a virtual reality helmet, vibrating spork, and whiffs from a perfume atomizer to fool America's obese malnourished gluttons that they are feasting on two-pound steaks and baskets of onion rings and death by chocolate sundaes when in fact they are eating gelatinous cubes of zero-calorie vitamin-fortified goo.

According to the breathless website, this proposal will "solve" the following problems: "anorexia, bulimia, cancer, diabetes, heart disease, obesity, allergies and co2 omissions."

The real problem solved by the project is that it definitively answers a question I have long pondered: Is futurology so utterly idiotic and smarmy that it is actually impossible to distinguish its most earnest expressions from even the most ridiculous parodies of them?

I mean, to literally name your project "nourished" while actually avowing you seek to peddle a product that nourishes no one is pretty breathtaking. It's like the scam of peddling sugary cereals as part of "this complete nutritious breakfast," when all the nourishment derives from the juice and eggs and toast accompanying the bowl in the glossy photo but almost never in the event of an actual breakfast involving the cereal in question. Except now, even the cereal isn't really there, but a bowl of packing cardboard over which is superimposed an image of Froot Loops with a spritz of grapefruit air-freshener shot in your nostril every time you take a bite.

Why ponder structural factors like the stress of neoliberal precarity or the siting of toxic industries near residences or the lack of grocery stores selling whole foods within walking distances or the punitive mass mediated racist/sexist body norms that yield unhealthy practices, eating disorders, the proliferation of allergies and respiratory diseases and so on? Why concern yourself with public investment in medical research, healthcare access, vegetarian awareness, zoning for walkability, sustainable energy and transportation infrastructure and so on?

The Very Serious futurologists have a much better technofix for all that -- it's kinda sorta like the food pills futurologists have been promising since Gernsback, but now you would eat large empty candy colored polyhedra (you know, like the multisided dice nerds used to use to play D&D in the early 80s) while sticking your head in a virtual reality helmet (you know, like the virching rigs techbros have been masturbating over since the late 80s). Also, too, the stuff would be 3D-printed, because if you are a futurologist you've gotta get 3D-printing in there somewhere. As I said, Very Serious!

Returning to the website, we are told, "the project was inspired by the film Hook, where Peter Pan learns to use his imagination to see food on a table that seemed completely empty at first." Setting aside the aptness of drawing inspiration from a crappy movie rather than the actual book on which it is based -- only Luddites think books have a future, shuh! -- I propose that Project Nourished has a different filmic inspiration:

Saturday, January 10, 2015

A reader in the Moot describes some typical transhumanoid versions of "doing radical social criticism... saying something along the lines of, say, gender won't matter anymore when we upload our minds to the noosphere." For transhumanoid radical race critique, fill in the blank (and try not to think too much about the history of eugenics, or about how transhumanists seem to be a whole lot of white guys); for transhumanoid radical class critique, here comes NanoSanta Claus.

Of course, not only is this not "doing radical social criticism" but it seems to me pretty explicitly, straightforwardly reactionary, even when accompanied by citations of actual feminist, queer, or anti-racist criticism. Complacent consumers who want to enjoy a little liberal guilt to spice their entertainments will always rationalize the violence and inequity of the present by declaring the debased now better than before or on the road to better still and then grabbing a beer from the fridge, or clicking the buy button, or getting out on the dancefloor.

Plutocrats always naturalize their hierarchies as meritocracies. In much the same way, the whole robocultic uploading schtick is obviously a denigration of the materiality of the body, and it is always of course the white body, male body, straight body, cis body, healthy body, capacious body that can best disavow its materiality, because its materiality isn't in question or under threat, right?

It can be a mark more of privilege than perceptiveness to call into question that which won't ever be in question for you in any case. The bodily is always constituted as such through technique (from language to body language to posture to wearability), and the social legibility of every body is of course performatively substantiated. To grasp that point is to trouble or question the prediscursivity of the body, or to recognize that prediscursivity is always a discursive effect. But this recognition is at best a point of departure and never the end-point for the interrogation of prevailing normative bodies and their abjection of bodily lifeways otherwise.

The denial or disavowal of differences that make a difference is much more likely effectively to endorse than efface them. Imaginary digi-utopian and medi-utopian circumventions of raced, gendered, abled bodily differences function in the present to deny or disavow rather than critically or imaginatively interrogate their terms. These omissions are all the more egregious when we actually turn our minds even cursorily to the perniciously raced and sexed histories of the medical and the digital as actually-existing practical, normative, professional sites.

Setting aside questions of the utter implausibility and incoherence of the techno-transcendental wish-fulfillment fantasies playing out in all this, why even pretend that recourse to digital dematerialization or to medical enhancement would circumvent rather than express the fraught, inequitable legibility and livability of wanted lifeway diversity? It will surely be the more urgent task to attend closely to the ways in which these very differences, race, sex, ability, shape the distribution of costs, risks, benefits, access and information to actually-available prosthetic possibilities.

I must say it has always cracked me up that since all information is instantiated on a material carrier, then even on their own terms the spiritualization of digi-info souls is hard to square with the reductionist scientism these folks tend to congratulate themselves over -- not that it would be anything to be proud of even if they managed to be more consistently dumb in that particular way.

What can you really expect from techno-transcendentalists apparently so desperate not to grow old or die that they will pretend a scan of them would be them when no picture ever has been, and that computer networks could reliably host their "info-souls" forever when most people long outlive their crufty, unreliable computer networks in reality, and all just so they can daydream they will be immortal cyberangels in Holodeck Heaven? Science!

Treating certain acts of public violence as terrorism rather than criminality always seems to derange discussions both of what happened and what should be done.

What does it mean when people are more afraid of, or at any rate more exercised by, comparatively rare incidences of terrorist violence than by far more commonplace incidences of criminal violence? What does it mean when responses to terror undermine definitive civil liberties and utterly scramble budgetary priorities on the spur of a moment of public panic, while responses to generations of inequitable policing and punishment move at a snail's pace despite long-plummeting crime rates and longstanding community protests?

I realize, of course, that terrorism seeks to provoke political responses in a way that criminality mostly does not, but it matters that distinguishing terrorism from criminality on the basis of this recognition tends to facilitate precisely those sorts of responses that the terrorists are seeking.

Branding violence as terror is itself terrorizing: terrorism is substantiated as such through the collaboration of the majority in the terms of a marginal minority. It amplifies a marginal threat of violence into an existential threat to civilization; it amplifies a brainwashed tool into a protagonist of history.