Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Thursday, August 21, 2014

Scattered Speculations on a Twitterscrum with Robot Cultists Andre@ and Rachel Haywire

I have always found it curious the way, in the give and take of a realtime argument, techno-transcendentalists cheerleading about mind-uploading info-soul immortalization schemes and about coding history-ending superintelligent Robot Gods and the like, cannot help but crow whenever we arrive at the inevitable point in the argument when I point out that I am not a scientist. For a fairly representative example of the phenomenon, from a twitter spat I found myself in with a Robot Cultist last night, observe:

@dalecarrico Perhaps the person who just conceded not being a scientist should refrain from making such strong claims.

I am, of course, a philosopher and rhetorician by training, and never ever pretend otherwise in the least. Since so much of the force of futurological discourses depends on their recourse to metaphors, hyperbolizations, reframing commonplaces as novelties, naturalizations of contested terms, distractions from rather than solutions to conspicuous problems, consoling subcultural signaling and appeals to identity, and so on, it has always seemed to me that my training provides a useful critical perspective rather than a disqualification. The cackling delight with which my status as a non-scientist gets trotted out by the futurological faithful who declare me incapable of engaging in the relevant "technical" specifications endorsing their triumphalism is all the more bizarre given how many of them are no more practicing scientists than I am. Time and time again, a query for their science degree, current lab, or published papers goes unanswered or reveals I am in the presence of yet another coder fancying themselves an honorary biologist, plasma physicist, civic engineer, and political economist as a result. Last night's interlocutor was just such a coder. From my Futurological Brickbats: LXX. I know enough to know I don't know enough to be a scientific authority, while futurologists know enough to know that most people don't know enough to know the difference when they pretend to be scientific authorities.

Beyond this deceptive and also probably self-deceptive gambit, I also have to say that there is something that feels to me not only pseudo-scientific but actively anti-scientific in the Wall of Words partisans of cryonics and uploading and drextechian nano-abundance and the rest like to fling up in the name of "the technical discussion" to silence criticisms of conceptual and otherwise rhetorical sleights of hand on which their rationalizations tend ultimately to depend. Confronted with a critic who exposes the fairly conspicuous religiosity of their fervent assertions about the techno-transcendental arrival at immortality as info-spirit-selves in Holodeck Heaven under the ministrations of a post-biological post-parental superintelligent Robot God and with omni-competent nano or femto matter-mulching Anything-for-Nothing machines at their every whim's disposal, these faith-based futurologists like to retreat as quickly as possible to the prosaic. Cryonicists start lecturing you about the harmless revival of the drowned and of organs cryopreserved for transatlantic treks to surgery, nano-cornucopiasts handwave about the productive factory floor of the molecule, SENS longevists blather on about the new car smell of a century old roadster repaired and maintained by a loving hobbyist, AI-deadenders keep winning Chess and Jeopardy with glorified abacuses with database access, and on and on and on.

Of course, quite a lot of the science and technique these futurologists are drawing on argumentatively is perfectly well warranted as far as that goes. As a matter of fact, my impression is that most of the science the priestly experts of the Robot Cult archipelago lean on amounts to fairly undergraduate tech talk, sound as far as it goes but never particularly advanced. And their preferences in the matter of the "advanced" tend to incline more in the direction of the Aquarian, I find, their cutting-edge looks to be rather, er, cosmic.

Let us delve deeper into an aria offered up by my interlocutor last night. First, read through the twitter scroll, and then my reading and response will follow. (I am fairly confident, by the way, that "Andre@" would regard this very sequence as their strongest, most triumphant portion of the debate. This selection is not offered up in an effort to ridicule through expurgated editorial shenanigans on my part, and I do hope none of the directly interested parties would perceive otherwise. The tweets are clickable, and fuller reconstructions of what was a much longer and ramifying twitterscrum should be possible for the diligent):

@dalecarrico Thought processes supervene on a physical substrate - if you reject that, you're no materialist.

Got that? You will notice that my "strong claim" is the suggestion that, given all the questions we have about the relations of brain processes to the phenomena we describe as "intelligence" and "mind," modesty may be more warranted than declarations of certainty that software minds indistinguishable from human minds are obviously possible and immortalizing uploads of info-selves no less obviously on the horizon. I am someone who celebrates science as much as the next geek, but I do think our discoveries raise more and more questions rather than providing rationalizations for faith in wish-fulfillment fantasies. Notice, I am explicitly materialist in these exchanges in a way that leads me to think it probably actually matters that the minds we mean in the real world have always been specifically materialized in biological brains and social formations, and to think we should qualify, to say the least, expectations that non-biological non-social materializations will be "indistinguishable" from human minds or even intelligibly described as "minds" at all. I am not the one blathering on about superintelligent AIs, info-souls, cyberangel avatars and so on. But presumably I am the one indulging in "bullshit argument by assertion"? Presumably I am the one "desperately grasping at magic pixie dust"?

I am far from denying the warranted assertions my interlocutor breathlessly exhaled in the Wall of Words made to loom before me last night, tweet by tweet, block by block. Indeed, most of the science scribbled on the Wall is well-worn enough that for all I know it was being read off the promotional descriptions on the back of a set of Cosmos blu-rays (which I own myself, by the way, despiser of science that I am). As I have said, futurologists tend to retreat in such moments to fairly undergraduate science in performing their technical preening acts. The rhetorician in me cannot help but notice that the argumentative force of the tirade does indeed derive in important part from the illustrative scenery painting of figures -- "supervene" in the first one, "fix[ation]" in the next, "computab[ility]" in the next, "extrem[ity]" in the next, and so on. The definition of materialist in the first post is idiosyncratic in the extreme, and hardly dispositive. Brazening it out nonetheless is something a rhetorician can appreciate as commonplace, needless to say. However warranted the string of observations following, there is nothing in what we are well warranted to believe we know in them to warrant the further declarations that "behavior... *is fixed by known physics* -- there is *nothing* [emphasis in the original, but I would add it if it weren't there --d] mysterious or unknown about the behavior" or that our knowledge as it is renders assertions about mind-uploads "perfectly [emphasis added --d] justified" or that "[t]he unknowns in physics are all [emphasis added --d] under extreme conditions" (famous last words) or that "[t]he only [emphasis added --d] thing that matters under the conditions that occur in the brain is ordinary" as we conceive it, and so on. 
The criteria on the basis of which we select as warranted the beliefs that would yield prediction and control are always defeasible and never provide grounds for the unqualified superlatives of "only" "all" "nothing" "perfect" that freight the discourse of the faithful far more than the scientific.

One of the reasons that vanishingly few actually qualified, actually practicing scientists in the actually relevant fields associated with the confident super-predicated assertions of futurologists will have anything whatsoever to do with these superlative futurologists is that their robocultic tech talk is too rudimentary to be of much interest to scientists, while the spirited projections where all the robocultic action is are far too wild and woolly and unwarranted for them to take seriously. Contrary to the insistence of cryonicists and mind-uploaders who decry the corpse-coddling "deathism" and "sheeple" timidity of those who dare not Challenge! Death! (those who, you know, recognize the fact that all humans are mortal and that death denialism may yield an irrational death in life but will not render the spellcaster immortal in fact), the reason biologists and gerontologists and lab techs administering diagnostic brain scans aren't in the futurological megachurch pews is that there simply is a whole hell of a lot of distance between where we are and where we would have to be to begin even to contemplate modest variations on superlative futurological aspirations.

Again, of course it is true that there are enormously interesting problems and possibilities for better sensors and materials in biochemistry; and of course it is true that there are ferocious hopes and fraught hurdles for better therapies in brain diagnostic media and organ cryopreservation and gene therapies; and of course it is true that planetary digitally networked data framing, surveillance, marketing, and finance introduce extraordinary dangers of error and attack and crucial demands for accountability and user-friendliness for software designers, and so on. Although Robot Cultists retreat to this register to ground their wish-fulfillment fantasies in something like an everyday "reality effect," it is crucial to recognize that no futurologist qua futurologist has ever made a problem-solving contribution at this level of technicality (it could happen accidentally or incidentally, I suppose).

The substance of futurology consists in its reframing of such problems and accomplishments as stepping stones along a path to super-predicated capacities providing personal transcendence. This, in turn, is simply a reductio ad absurdum or amplification into the cadences of outright religiosity of the already prevalent deceptions and hyperbole of advertising norms and forms as well as the ideology promulgated by self-esteem pop psychology for the consumer masses and management seminars for the actual and aspirational venture capital/"creative" class minority. Age Defying Skin Kreme! Find Your Inner Winner! Grasping the nature and consequences of these formations depends far less than you might expect on technical debates over the scientific claims on which Robot Cultists pin their hopes (especially since futurologists will tend to retreat to the warranted in such debates, disavowing the hyperbolizations which really substantiate their distinctive claims, making these discussions exactly as relevant and decisive as technical debates among monks over angels cavorting on pinheads) and benefits far more than you might expect from the expertise of literary and cultural critics and ethnographers who are more familiar with the actual dynamisms playing out in futurological discourses and sub(cult)ures.

It is not a scientific but an altogether rhetorical production to try to create the efficacious impression that it is not the one who affirms the warranted in a qualified and contextual way who supports the scientific but instead the one who leaps from the warranted into the superlative who so supports it. To declare modesty assertive, and the refusal of wish-fulfillment a belief in magic, requires something of a bravura rhetorical operation, reminding one not least of the dynamics of the Big Lie. Needless to say, it is the one who makes the extraordinary claim who is required to provide extraordinary evidence in support of it. But beyond this, it is not the one who indulges in the superlative rather than the warranted who gets to determine which claims actually are the extraordinary ones and what evidence is extraordinary enough to support them. It is not for Robot Cultists to tell me that their marginal and unqualified assertions are the ordinary ones and that the burden of proof for the support of qualified, contextualized, modest warranted assertibility falls on me because mine is the extraordinary position, that my skepticism of their magic is the magical thinking. Cultists ALWAYS seem to think their articles of faith are commonsensical and undeniable. This sort of facile abuse is hardly unprecedented.

Quite a few Robot Cultists are crowing (although some are doing so in an ironic way meant to cover all their bases in case the verdict changes) about how I "lost" the battle with my robocultic interlocutor last night. I cannot say I know exactly what "winning" or "losing" such an exchange would actually mean. Certainly nothing particularly unexpected happened for someone who has engaged in too many exchanges of this sort over the years to count them. The debate such as it was seemed to me interestingly representative, and worthy, as you see, of a closer reading. In such matters I suppose that winning and whining can become rather hard to distinguish sometimes.

13 comments:

> I also have to say that there is something that feels to me not only pseudo-scientific but actively anti-scientific in the Wall of Words with which partisans of cryonics and uploading and drextechian nano-abundance and the rest like to fling up in the name of "the technical discussion". . .

Sadly, a lot of the underpinnings of transhumanism are based on a sort of blind-men-at-the-elephant thinking—people assuming that because it can be imagined, it must be possible. Transhumanism is particularly associated with figures in computer science, which is a field that is in some ways more math and art than a true experimental science; as a result, a great many transhumanists tend to conflate technological advancement with scientific advancement; though these two things are intimately related, they are separate things. In fact, though transhumanists strenuously deny it, a great number of their arguments are strongly faith-based — they assume because there are no known barriers to their pet development, that it's inevitably going to happen. Seldom is the issue of unknowns — known or otherwise — factored into the predictions. . .

The example of the singularity is instructive. . . [S]ingularitarians hit the wall when confronted with the realities of brain development research — though a true AI may in fact be possible, there simply is not enough known about the brain to understand its functions to the degree necessary to create a workable emulation, meaning a prediction of such a creation is meaningless at best, dishonest at worst. . .

No science necessary

Worst of all, some transhumanists outright ignore what people in the fields they're interested in tell them; a few AI boosters, for example, believe that neurobiology is an outdated science because AI researchers can do it themselves anyway. They seem to have taken the analogy used to introduce the computational theory of mind, "the mind (or brain) is like a computer." Of course, the mind/brain is not a computer in the usual sense. Debates with such people can take on the wearying feel of a debate with a creationist or climate change denialist, as such people will stick to their positions no matter what. Indeed, many critics are simply dismissed as Luddites or woolly-headed romantics who oppose scientific and technological progress.
=====

> The only thing that matters under the conditions that occur in the brain is ordinary electromagnetic interactions. The next leading contribution would be the weak force. People actually do [model] that [on computers] - google electroweak quantum chemistry. It leads to corrections *twenty decimal places out* in binding energies for L vs. D enantiomers. That's perfectly computable too, but if the working of the brain depended on anything that small, the noise at 298 K would be incompatible with functioning.

http://news.nationalgeographic.com/news/energy/2012/04/120430-titan-supercomputing-for-energy-efficiency/
---------------
[As of 2012, t]he problem of improving upon the 150-year-old internal combustion engine is so complex that the scientists who work on it are eager for a major development in the supercomputing world to occur later this year. The U.S. Department of Energy's Oak Ridge National Laboratory (ORNL) in Tennessee is set to deploy a massive upgrade to Jaguar, the nation's fastest supercomputer and Number 3 in the world. The new system, called Titan, is expected to work at twice the speed of the machine that is currently the fastest supercomputer in the world, Japan's K computer. . .
====

And note that that's a computational model specifically designed to provide information about how to design **actual** engines to be more fuel-efficient (or whatever) -- nobody expects to be able to put a Titan into a Subaru and drive it around town.

So "exascale" computing will be able to tackle the problem of simulating a human brain, you may say (even though nobody quite knows what a complete simulation would have to entail). Well, hold onto your hats, global warming fans:

http://www.theregister.co.uk/2012/07/11/doe_fastforward_amd_whamcloud/
---------------
The U[ltra]H[igh]P[erformance]C[omputing] program was announced in March 2010 with the goal of creating an HPC system that by 2018 can do 50 gigaflops per watt (BlueGene/Q, the current top performer and most efficient super in the world, can do a little more than 2 gigaflops per watt) and pack 10 petabytes of storage and do around 3 petaflops of number crunching. . . within a 57 kilowatt power budget.

Building an exascale system would seem easier, by comparison, since there is, in theory, no limit on the size of the machine or its power budget. But in reality, there are big-time power limits on exascale supers because no one is going to build a 20 megawatt nuclear or coal power station to keep one fed and cooled. . .

On a current petaflops-class system today, it costs somewhere between $5m and $10m to power and cool the machine, and extrapolating to an exascale machine using current technology, even with efficiency improvements, you would be in for $2.5bn a year just to power an exascale beast and you would need something on the order of 1,000 megawatts to power it up. That's 50 nuclear reactors, more or less. The DOE has set a target of a top juice consumption at 20 megawatts for an exascale system. . .
====
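For what it's worth, the article's $2.5bn/year figure checks out on the back of an envelope, though only under an assumed all-in power-and-cooling rate of roughly $0.29 per kWh (a rate I am supplying for illustration; it is well above bare utility pricing, presumably folding in cooling and infrastructure overhead):

```python
# Back-of-envelope check of the quoted exascale power-cost figures.
# The $/kWh rate below is an ASSUMPTION, not from the article.

POWER_MW = 1000           # quoted draw for an exascale machine, current tech
HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.285      # assumed all-in power + cooling rate, $/kWh

kwh_per_year = POWER_MW * 1000 * HOURS_PER_YEAR   # MW -> kW, then kWh/year
annual_cost = kwh_per_year * RATE_PER_KWH

print(f"{kwh_per_year:.2e} kWh/year")        # ~8.76e+09 kWh
print(f"${annual_cost / 1e9:.1f}bn per year")  # ~$2.5bn, matching the article
```

At a more typical ~$0.10/kWh utility rate the same draw would still run close to $0.9bn a year, so the order of magnitude of the article's point stands either way.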

Twitter is not the place to have real scientific conversations. However, from these micro-blatherings I conclude that neither Vulnerata nor Haywire are acquainted with real biology. Furthermore, I suspect that both resort to the gambit of "If you don't believe the brain is a computer, u r a dualist!"

> Yes, I know, I know. I shouldn't have bothered to comment on Carrico's crap. He's consigned me to his imaginary "Robot Cult" (a phrase he repeats over and over and over again like... a robot) and won't actually ever engage in productive discussion. Anyone who has read even a few pieces by him knows what a nasty piece of work he is. As a professional rhetorician, he's much more interested in looking clever and putting down his opponents than seeking truth.

It's useless to wrestle with a pig. You both get dirty, and the pig enjoys it.
====

> Athena Andreadis said...
>
> Twitter is not the place to have real scientific conversations. However, from these micro-blatherings I conclude that neither Vulnerata nor Haywire are acquainted with real biology. Furthermore, I suspect that both resort to the gambit of "If you don't believe the brain is a computer, u r a dualist!"

Who, BTW, is this Andre@ (@puellavulnerata)?

http://charon.persephoneslair.org/~andrea/
------------------
I'm a software developer for the Tor Project.

I've written mpkg, a minimalist package manager for *nix systems.

I have a version of the Cyclades PC-300 T1 card driver patched to run on sparc64 kernels.

Where does this belief that uploading is about isolating some sort of abstract, ill-defined 'mind' from the meat arise? It's about emulating the brain and whichever other parts of the meat prove essential to its function in software. I don't intend on being disembodied or unemotional or whatever other reason vs. passion (or is that an ill-fitting disguise for masculine vs. feminine?) false dichotomy someone projects onto the idea; I intend to experience an improved, engineered body, whether as a physical object or an entity in a virtual world, not subject to decay or disease or any of the flaws that go along with meat, and I most certainly intend on retaining all the emotions I have now.

This essay seems to be regarding uploading as a sort of continuation of the rather hoary trope of the emotionless being (Spock, Data, and so on...). . .
====

Indeed, there are a lot of assumptions lurking behind various SFnal and transhumanist fantasies about AI that can usefully be explored. It's also worth noting that the earliest speculations about AI (and the field of "cognitive psychology" that flourished in the wake of behaviorism's demise) hinged very much on hopes of being able to "isolate... some sort of abstract... 'mind' from the [brain]".

Loc. cit.
------------------
Consider an atom by atom simulation of a whole human body and brain - either it produces the same sort of behavior as a physical human, or you're postulating that atoms in a human body follow different physics than atoms outside it, which amounts to vitalism or interaction dualism. . .

In the end, if you accept that such an atom by atom simulation would actually be conscious, then we are no longer arguing about whether uploading is possible, merely about how difficult it is.
====

That atom-by-atom simulation of a whole human body is presumably interacting with an atom-by-atom simulation of a whole external universe (including a few billion other atom-by-atom simulations of people and other living things). Wow.

http://www.infowars.com/tor-developer-suspects-nsa-interception-of-amazon-purchase/
--------------
Andrea Shepard, a Seattle-based core developer for the Tor Project, suspects her recently ordered keyboard may have been intercepted by the NSA. . .

Instead of shipping straight towards Seattle from the Amazon storage warehouse in Santa Ana, California, Shepard’s package made its way clear across the country to Dulles, Virginia. Jumping around an area deep inside what some privacy experts refer to as America’s “military and intelligence belt,” the package was finally delivered to its new endpoint in Alexandria. . .

Given the NSA’s deep interest in Tor, a popular online anonymity tool, some speculate Shepard’s keyboard could likely have been implanted with a TAO bug known as “SURLYSPAWN,” a small keylogging chip often implanted in a keyboard’s cable. According to NSA slides, a bugged keyboard can be monitored even when a computer is offline. . .
====

I still think she's lost her way, captured by deranging computational figurations of ontology and consciousness, but on the NSA business I daresay she's probably on to something. Our "intelligence" services may have changed the name but they never gave up the dream of Total(itarian) Information Awareness.