Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Monday, October 18, 2010

Must I Really Weigh In On "The Cult Debate"?

For me personally, it is hard to imagine a more surreally irrelevant distraction from the substance of my critique of superlative futurology than debating whether or not my derisive use of the phrase "Robot Cultists" to describe superlative futurologists is strictly correct according to somebody's dictionary definition of what a "cult" is. I have pointed out that I could always use, after all, instead of "Robot Cult" the less concise but to me roughly synonymous phrase "defensive-evangelizing-sub(cult)ural-membership-formation-organized-around-highly-marginal-but-strongly-held-ideological-beliefs-involving-personal-and-historical-techno-transcendence-which-are-expected-to-sweep-the-world-led-by-would-be-gurus-few-of-whom-are-known-outside-the-sub(cult)ure-itself-but-is-not-a-cult-according-to-the-letter-of-your-dictionary-definition-so-stop-saying-that!" But, I cannot help wondering, would the Robot Cultists really like that any better?

Sometimes I find it difficult to determine whether an interlocutor's turn to this (to me) rather trivial non-question is a result of an unserious person literally incapable of taking my serious questions seriously, or an effort at distraction on the part of an organizational opportunist trying to divert attention away from a threat in a fairly obvious PR move, or simply the sort of thing that happens when perfectly likeable but earnestly dull people don't know exactly how to deal with substantive critiques that happen to be sprinkled with little bits of irony and facetiousness and wit and are therefore a little harder to read than is, say, People Magazine.

Were the Extropians a cult? Is cryonics a scam? Are singularitarians a Kurzweil fandom or engaged in some geek headgame variation of a kind of silly eXtreme sport for boys? What about people who call themselves transhumanists, who declare themselves to be part of a "movement," to have a transhumanist "identity," some of whom are literally members in "transhumanist"-identified membership organizations and so on? Are they more like a science fiction fandom for folks who prefer the quasi-nonfiction futurist subgenre of science fiction? or more like members of a marginal not-particularly-coherent fledgling school of philosophy? or a noisy flashy sub(cult)ure that has attracted attention from mainstream media outlets out of proportion to its size? or an ideology trying to make a political movement or a political party but just unusually inept in these efforts? or a marketing scheme for a handful of wannabe gurus slash public intellectuals?

Are the ferocious fans of Ayn Rand's screeds and romance novels strictly speaking a cult, given their ongoing organized existence and annoying inability to talk sense? Are Scientologists still a cult once they have arrived at a certain number of adherents and garnered a certain amount of real estate and legal resources? If yes, is Mormonism a cult? If no, is Mormonism a cult? What about rabid pop fandoms and online conspiracist sub(cult)ures? What are they and is transhumanism whatever it is that they are?

These questions are all interesting questions, I suppose, but I can't say that these are the questions about superlative futurology as a discursive phenomenon to which I have devoted the lion's share of my own critical attention. As far as I can tell, a debate about any one of them would fail to provide the grounds for a substantive response to my critiques of superlative futurology.

I do think there are things about especially organized transhumanist discursive formations which get a little bit culty, certainly enough so to upset (in a good way, to my mind) especially the real True Believer types or defensive organizational figures who tend to gravitate into conversation with me here on this blog. To be honest, it's hard for me to see how a sub(cult)ural ideological futurological formation freighted with explicit promises of personal and historical transcendence (even if "techno-transcendence") is not going to have some culty paraphernalia about it, after all, especially to the extent that it remains marginal and defensive, as the transhumanists-singularitarians-technoimmortalists-etalia certainly all are. If pointing out that obvious sort of thing freaks the Robot Cultists out, so much the better.

But setting all that aside, for the moment, it should be plain to the meanest intelligence devoting any time or attention at all to my many critiques of superlative and sub(cult)ural futurological formations (both organized and discursive), many of which are both topically and chronologically archived at the sidebar for anybody who actually wants to know what it is they are talking about if they are excoriating me for my so-called distortions and dishonesties, that I tend to say a few basic things, over and over again:

First of all, I describe futurological marketing and promotional discourse as the prevailing, definitive discourse of contemporary capitalism in what is otherwise described as its current neoliberal/neoconservative corporate-military developmental-networked mode, and I declare that superlative futurology is most usefully understood as an especially illustrative and structurally clarifying extreme set of variations on -- or symptoms of -- that prevailing or mainstream futurology.

In the introduction to the Superlative Summary (the most sprawling -- also, admittedly, daunting and, after all, sometimes repetitive -- chronological archive of my critiques of superlative futurology over the years) I write, for example, that "[t]here is considerable overlap between… mainstream and superlative futurological modes, [since] both share a tendency to reductionism conjoined to a (compensatory?) hyperbole bordering on arrant fraud, not to mention an eerie hostility to the materiality of the furniture of the world (whether this takes the form of a preference for financialization over production, or for the digital over the real), [as well as] the materiality of the mortal vulnerable aging body, the materiality of the brains, vantages, and socialities in which intelligence is incarnated, among many other logical, topical, and tropological continuities."

In a piece I posted just yesterday, I made (yet again) the second, substantial claim that recurs in my actual critique:

[W]hatever its insistent but superficial scientificity, the substance and primary work of superlative futurology remains, as it always has been, primarily:

one -- either ideological, consisting in prophetic utterances in the form of hyperbolic threat/profit assessments and marketing/promotional discourse wrapped in superficially technoscientific terminology providing incumbent-elite corporate-industrial interests rationales to justify continued profit-taking at the expense of majorities

two -- or theological, consisting in priestly utterances in the form of apocalyptic warnings of looming total catastrophes but also promises to the faithful of a techno-transcendence of mortality via super-longevity, error and humiliation via super-intelligence, and stress and worldly defeat via super-abundance providing both reassurance and consolation especially in the midst of the economic and ecologic distress of neoliberal-neoconservative technodevelopmental planetary precarization.

To return yet again to my Introduction to the Superlative Summary, I elaborate this second substantive point there as well, saying:

The characteristic gesture of superlative, as against mainstream, futurological discourses will be the appropriation of worldly concerns -- such as the administration of basic healthcare, education, or security, say -- redirected (in a radically amplified variation on conventional marketing and promotional hyperbole) into a faith-based discourse promising not just the usual quick profits or youthful skin but the promise of a techno-transcendence of human finitude, a personal transcendence modeled in its basic contours and relying for much of its intuitive plausibility on the disavowed theological omnipredicates of a godhood (omniscience, omnipotence, omnibenevolence) translated instead into pseudo-scientific terms (superintelligence, superlongevity, superabundance).

Again, I can see how a discussion of the relative cult-likeness or not of the various sects or flavors or genres of transhumanist-singularitarian-technoimmortalist-nanocornucopiast-geoengineering discourses, organizations, subcultures, whatever might lead us to nibble around the edges of some of my actually stated concerns about superlative futurology, but, frankly, it is hard to see how an exclusive or sustained focus on the cult debate is anything but a failure of intelligence, honesty, or nerve. As I said yesterday, I continue to welcome any serious engagement with my actual critique and especially welcome evidence of the dishonesty and distortion I regularly get accused of by some of the most foolish and most culty of the Robot Cultists (insert longer, unwieldy but just as damning phrase provided above here if so inclined, it makes no difference to me) in the Moot.

39 comments:

I wonder if even the best superlative futurologists really are too stupid and/or too dishonest to engage with this post on its terms. I hope not; I expect so. Each day they refuse to engage, I will post crickets chirping here. If they do make the attempt and their efforts are ridiculous, they should anticipate exposure to ridicule. I am truly and earnestly eager to be shown wrong that these are the only two non-responses I am likely to receive.

It's a difference of style. In science, words have precise meanings and you don't use them loosely. When a New Age type talks about "energy" or "vibrations," they can't really define what they mean by that. But in science, you must precisely define (and even be able to quantify) the many forms of energy.

You have a background in rhetoric, so you use words in different ways, and for different purposes, than scientists do. Your long-time readers understand that. Someone like Ben doesn't.

So, when Giulio Prisco declares that I make the equivalent of the statement that 4+4=2 and not only deny that 2+2=4 but insult those who say so, this is a difference created by the "imprecision" of my playful usage of terms as opposed to Prisco's precise use as a "serious scientist"? This imprecision of mine and not theirs accounts for why Ben Goertzel quoted that assertion by Prisco and affirmed it as the reason he has decided I distort arguments with which I disagree and so there is no reason to engage with my criticisms? Can I assume that the precision of scientific terminology is in evidence in the documents collected at the Order of Cosmic Engineers, which both Goertzel and Prisco helped found together? It seems to me that it is precisely New Age rhetoric that one is reminded of when reading those materials by these oh so strict-brained scientists. Now, you obviously haven't asserted that, I am just wondering if such a position is entailed by what you do indeed seem to be claiming on their behalf -- in asking this question am I outrageously distorting your views, am I insulting you somehow? Do you think I say these things to mistreat you or to understand the substance and implications of your claims? If Prisco and Goertzel are indeed capable of loose talk in their advocacy as "Cosmic Engineers," when and where else can that be true without beginning to threaten the firewall you are erecting between their strictness and my own playfulness as the supposed source of our differences? Is it only in the argument about whether or not I can reasonably notice that sub(cult)ural formations of futurology can get a bit, er, "culty" that my own "loose talk" becomes a problem? Was strict scientific thinking similarly in evidence in Goertzel's arguments about transtopian Nauru, the ones that actually occasioned these exchanges as well as the charges that I am distorting their views in drawing from them entailments or satirizing them in ways they dislike?
All that aside, I still do not agree that Ben Goertzel and Giulio Prisco deserve to be regarded as scientists more than rhetoricians in making their futurological claims. Although transhumanists and singularitarians are eager to paint such refusals of mine as signs of my menacing humanities relativism or woozy illiteracy it is in fact precisely because I respect the role of consensus science in the administration of a world equal to our shared planetary problems that I refuse to grant superlative futurology the status of scientificity it craves for its religious and ideological promises. I think you may underestimate a bit the precision that drives no small amount of my rhetorical formulations, as it happens. But, come what may, I certainly see the sense of the sort of style difference you are proposing here as the source of certain mis-communications -- it's a venerable problem, after all, Snow's Two Cultures again, or even more venerably Huxley versus Arnold again -- but I honestly don't think that it is really in play so much in the case at hand.

Who can resist a challenge like that? But let me respond by just summing up where I agree and disagree with your critique, Dale.

I agree that transhumanists can say and do foolish and crass things. I agree that some of their characteristic notions about reality and the future will prove to be naive or just wrong. But I have to endorse their belief in the possibility of "superintelligence, superlongevity, superabundance", as you put it, as a consequence of scientific understanding of the brain, the gene, and the atom, respectively.

Since your focus is on Actually Existing Superlative Futurology, you engage more with the cranks and visionaries who right now think they can see a clear path to transhumanity, and not so much with the broader question of whether such things are in principle possible or impossible. Still, the implicit judgment I get from you is: impossible. And I disagree, obviously.

Whatever hype and exaggeration may surround the unfinished scientific models of today, in the long run we are going to have an understanding of life and mind that stretches from the basic molecules all the way up to the recursive intricacies of intersubjectivity, without cheating, with nothing hidden, and with all the depth and awesomeness of the truth out in the open. And since human life and human mind are not divine archetypes fashioned complete and perfect from their first instantiation, but rather highly contingent structures produced by a blind process of ruthless competition, and since the material processes which produce them will not only be conceptually accessible to us but also materially accessible, capable of being modified and redesigned - it just seems incredibly unlikely that we can't do better, once we know what we're doing.

The real problem with today's recipes for transhumanity is just that we don't yet know what we're doing; we increasingly have the capacity to conduct experiments which meddle with or imitate those basic processes, but we don't have a lucid understanding of the consequences. The role of science with respect to transhumanism is not to debunk the concept in its totality and for all time, but rather to help us make a better, reality-based transhumanism, mostly by clearing away wishful thinking about how transhumanity might be attained by some simple formula.

I was referring to the use of the word "cult", which you by your own admission don't use according to a strict dictionary definition. But I agree that futurology makes claims that are not well defined, can't be tested empirically, and don't constitute science. Which is why so many scientists are skeptical of transhumanist claims (re the Technology Review challenge).

BTW, it's noteworthy that the main objection of the reviewers in the TR challenge was that SENS is so speculative that it can't be evaluated scientifically. In other words, it doesn't constitute science.

Transhumanists aren't scientists any more than are boner pill hucksters or financial fraudsters peddling -- in highly technical terminologies -- bundled debts transubstantiated into sound investments. Neither are the prophetic and priestly utterances about superlongevity, superintelligence, and superabundance scientific hypotheses any more than is the promise of a priest that faith in the blood of Christ is the key to eternal paradise or a used car salesman's promise that a blood-red sports car will make a tired pudgy boring stock broker youthful and sexy.

The problem with transhumanists isn't that we don't know what the future will be, but that transhumanists are conducting themselves in the present, they are responding symptomatically in the present, their effects are present-effects, and it isn't hard to know at all what is afoot with them, it is in fact clear as day.

If superlative futurology were fumigated of all its exaggeration, hyperbole, excess, techno-transcendentalizing mumbo-jumbo it would just turn into conventional progressive scientifically-literate advocacy for a harm-reduction policy model for healthcare, drug policy, policing, advocacy for increased education spending and science research, the eschewal of panoptic models for network and software security, more stimulus for renewable energy, mass transit, reforestation, and polyculture, and so on.

That's not the draw of the "transhuman," and quite clearly not -- it isn't about science, it's about the derangement of science and development policy in the service of infantile reassurance provoked by the distress of ineradicable human finitude (mortality, dis-ease, error, humiliation, loss, precarity). At any rate, that's how I see it.

Hi, Martin -- many good points. I did understand that you referred to my derisive use of the term "cult" in particular. What I didn't understand was why a reaction to just that word would be adjudicated in terms of scientific strictness when otherwise transhumanists, singularitarians, techno-immortalists, nano-cornucopiasts, et al -- although forever loudly handwaving about their superior scientificity -- are clearly engaging in effusively rhetorical, ideological, transcendentalizing discourses that seem very much more my neck of the woods as targets of analysis than proper science.

Robin Zebrowski weighs in on her blog: http://www.firepile.com/robin/?p=556

This isn’t really how criticism works (October 18, 2010 at 2:23 pm)

I. . . noticed that Humanity+’s undercover branch the IEET is sponsoring a 1-day workshop. . . called “The Problems of Transhumanism.” I assumed that meant there would be a critical eye cast upon the (sadly, many) real problems in both theory and practice with “transhumanism,” but then I saw the speaker schedule. At least 4 of the 9 speakers are either on the board of H+ or IEET, known cheerleaders for the cause, and for all I know as many as 8 of them are (I recognized one name as a known “bioconservative,” so he’s unlikely to have an affiliation.)

This is not how criticism of a movement or theory works. You don’t get the board members to market their position while masquerading it as an academic conference. You also don’t temper the self-promotion with people (a person?) whose views are so strongly ideologically opposite as to almost be a straw man parody of the view that’s supposed to be under scrutiny. I’m profoundly disappointed to see that very little actual criticism is likely to occur at this “workshop”. . . I’ve seen enough discussion of some of these topics in academic circles to know they *could* have found unaffiliated people to do this work, so what it really means to me is that they didn’t want to. . .

"Humans are inclined to evaluate the world around them with some bias. . . [O]rganized religion, particularly Mormonism, tends to reinforce and exploit them. An understanding of human biases was tremendously helpful to me in my recovery period from the Mormon groupthink. . .

_Standing for Something More: The Excommunication of Lyndon Lamborn_ http://www.amazon.com/Standing-Something-More-Excommunication-Lamborn/dp/1438947437

3. Unquestioned belief in the morality of the group, causing members to ignore the consequences of their actions

4. Stereotyping those who are opposed to the group as weak, evil, disfigured, impotent, or stupid

5. Direct pressure to conform placed on any member who questions the group, couched in terms of "disloyalty"

6. Self censorship of ideas that deviate from the apparent group consensus

7. Illusions of unanimity among group members; silence is viewed as agreement

8. Mindguards -- self-appointed members who shield the group from dissenting information"

Chapter 8, "Mind Control, Part 2", p. 80

"The reader is. . . left to decide if Mormonism qualifies as a destructive cult based on the evidence presented. . . Steven Hassan [_Releasing the Bonds_] clarifies the cult judgment criteria:

'It is not necessary for every single item on [the] list to be present. Mind-controlled cult members can live in their own apartments, have nine-to-five jobs, be married with children, and still be unable to think for themselves and act independently.'"

"I agree that some of their characteristic notions about reality and the future will prove to be naive or just wrong."

And yet so many of them are organizing their lives around ideas that are incredibly speculative. We know that rationality isn't just about being right, but about having confidence that scales with the evidence.

Your greatest proponent of Rationality suffers from this irrationality more than anyone. You wave it off as passion and idealism.

Whatever hype and exaggeration may surround the unfinished scientific models of today

You're calling transhumanist ideas "unfinished scientific models"? Real scientists have concluded that they are so incomplete that they aren't scientific.

Think of it this way: it is as unscientific to claim that a Singularity or radical longevity will happen as it is to claim that aliens exist in the Andromeda galaxy. It's certainly physically possible, but has no scientific basis whatsoever.

The real problem with today's recipes for transhumanity is just that we don't yet know what we're doing

it is as unscientific to claim that a Singularity or radical longevity will happen as it is to claim that aliens exist in the Andromeda galaxy.

Also, btw, this means that it is irrational to organize your life around the belief that the aliens will come in 2029 or 2045, or to engage in a "research program" to build a better radio telescope to communicate with the Andromedans. But this is what transhumanists are doing.

> [A] great example of something that's poorly defined in
> transhumanism is the Singularity itself. . .
>
> [I]t is irrational to organize your life around the belief
> that the [Singularity] will come in 2029 or 2045. . .

And Michael Anissimov replied:

> The people really pursuing the Singularity don't predict a specific date.
> I know dozens of them. Only one book gives those dates, no one takes them
> that seriously.

Toward the end of the book, Lamborn mentions the late Gordon B. Hinckley, the 15th prophet of the LDS (Mormon) church (http://en.wikipedia.org/wiki/Gordon_B._Hinckley), and how he managed to finesse and fuzz out the more controversial doctrines of Mormonism when he was asked about them in public.

"His life's work and legacy may be summed up by recounting some revealing moments in his life. The reader is left to draw his/her own conclusion. . .

Hinckley comments on a key doctrinal question [in an interview with the religion writer of the _San Francisco Chronicle_]:

Q: There are some significant differences in your beliefs. For instance, don't Mormons believe that God was once a man?

A: I wouldn't say that. There was a couplet coined, 'As man is, God once was. As God is, man may become.' Now that's more of a couplet than anything else. That gets into some pretty deep theology that we don't know very much about. . .

Q: Is this the teaching of the church today, that God the Father was once a man like we are?

A: I don't know that we teach it. I don't know that we emphasize it. . .

Compare these responses to the actual teachings of Joseph Smith. . .

'God himself was once as we are now, and is an exalted man, and sits enthroned in yonder heavens! That is the great secret. If the veil were rent today. . . you would see him like a man in form -- like yourselves in all the person, image and very form as a man. . .'

It is also clear that this doctrine is still taught today. The first chapter of the 1992 edition of the Latter-day Saint teaching manual. . . quotes directly from the above passage."


I am skeptical that there are really so many people who have "overconfident faith in magical self-modifying AI", as Robin Hanson argues. If they exist, can we find a few quotes? The only person that comes to mind edging in that direction is Hugo de Garis.

Now see, I could've sworn that "magical self-modifying AI"was going to be the **engine** of the "S^".

But maybe that was, uh, more of a couplet than anythingelse, and gets into some pretty deep theology that we don'tknow very much about. . . Or something. I'll have toconsult Minitrue Recdep and get back to you.

Some transhumanists *are* scientists, of course. But transhumanism isn't science, it's an anticipation of technologies made possible by science.

The idea of landing on the moon wasn't exactly a scientific hypothesis, either. It was an engineering hypothesis, but it became thinkable because of science, and it was empirically verified by the act itself. The same may be said of the various superlatives, except that they haven't been "verified" yet.

"If superlative futurology were fumigated of all its exaggeration [etc]... it would just turn into... boring sensible indispensable mainstream-legible social democracy focused on technoscience issues...

"[T]he draw of the "transhuman" ... [is] about the derangement of science and development policy in the service of infantile reassurance provoked by the distress of ineradicable human finitude (mortality, dis-ease, error, humiliation, loss, precarity)."

I can almost agree with this last statement - and this is one reason why I think the critique of superlativity has something to teach transhumanists - but there's no way that the limits of the possible fit within the confines of "boring mainstream social democracy". The political culture of a social democracy would have to become radically futurist by current standards (or explicitly luddite and anti-futurist) if it were to meet the triple challenge of artificial intelligence, nanotechnology, and outer space, while still hanging on to its political forms. As I said, we are increasingly in a position to conduct utterly unprecedented existential experiments, creating new forms of life and mind which may be our evolutionary successors, and that is a situation and a responsibility for which almost no-one is prepared.

"it is as unscientific to claim that a Singularity or radical longevity will happen as it is to claim that aliens exist in the Andromeda galaxy. It's certainly *physically possible*, but has no *scientific basis* whatsoever."

The basis for such claims is somewhat different. In the absence of direct evidence, someone who says there must be life in Andromeda is making a guess about how often life develops in the universe, e.g., at least once per galaxy. Such estimates are highly speculative, but we do at least know that life is physically possible, because it exists right here on Earth.

On the other hand, we have no such existence proof for the physical possibility of superintelligence and superlongevity, but if they *are* physically possible, then isn't it extremely likely that humanity will aim to achieve them? For the superlative conditions, whether or not they are possible really is the crux of the debate. As I said to Dale just now, they are engineering hypotheses, not scientific hypotheses, but the plausibility of an engineering hypothesis usually depends on its consistency with science.

In my book, the case for superlongevity rests primarily on our demonstrated capability to gradually understand how living matter works, and our demonstrated capability to intervene in its processes even at the most elementary level. The long-range implication is that we will be able to recreate in an old body the conditions which originally produced a young body.

As for superintelligence, I believe it's possible because of the theory of algorithms in computer science, the properties of computer hardware (speed) and software (exactness, duplicability, analysability), and the evidence from cognitive and computational neuroscience that *human* intelligence also has an algorithmic basis (e.g. that we recognize objects because our brains perform highly specific transformations and categorizations). I believe consciousness plays a role in our cognition but has not been a feature of any artificial computer so far, so there are a few conceptual breakthroughs still to be made in this area, but what can be achieved just with unconscious computation is already enough for me to expect an intelligence "singularity".

In a surprise move, AI dead-ender mis-identifies organismic brain as a digital computer then offers this confusion as evidence that digital computer can become intelligent. I do wish futurologists would preface their remarks with -- stop me if you've heard this one before. Because we always have. So, you know, stop.

Mitchell: That's nice, but I think you missed the thrust of my argument, re currently practicing transhumanists. The question isn't whether AGI or radical longevity are possible someday, far in the future, but whether there is any rational justification for organizing your life around such expectations today (ie, being a self-professing and practicing transhumanist).

There is a nonzero probability that a technologically advanced civilization lives in the Andromeda galaxy and that they will visit us in my lifetime, therefore constituting a different kind of Singularity. But I have no reason to believe that claim, and I certainly have no rational justification for organizing my life around that expectation.

If you are an activist for transhumanism, if you write books or blogs advocating transhumanism, if you are a member of a transhumanist organization, certainly if you hold an office in such an organization, or if you do "research" ostensibly in pursuit of specific transhumanist goals, then you are to one extent or another organizing your life around certain expectations, and you must believe those outcomes are imminent -- ie, they will happen in your lifetime, and probably sooner rather than later. It is this lifeway which I claim to be irrational and unjustified (not just idle speculation about these things), because you have no idea when (or if) that stuff will happen.

So you have two options. Either transhumanists are merely engaging in idle speculation -- some fun brainstorming about what the future could be like -- in which case transhumanism is a futurist fan club, or they are seriously pursuing "transhumanist goals", in which case they are blinded by an irrational hope and faith in the imminent technological transformation of society. Which camp do you belong in?

"AI dead-ender mis-identifies organismic brain as a digital computer then offers this confusion as evidence that digital computer can become intelligent."

I chose my words carefully. A brain doesn't learn, retrieve memories, plan an action, or produce a sentence just by being organismic. The performance of complex abstract tasks requires complex abstract methods, and the brain accomplishes its basic cognitive tasks by specific computational methods. Those methods are implemented organismically, but the *reason* why the neurons get the job done is because they are performing some appropriate algorithm, such as temporal difference learning or Gaussian derivative filtering.
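For readers unfamiliar with the first algorithm named there: temporal difference learning is a precise update rule, not a metaphor. Here is a minimal illustrative sketch of TD(0) value learning on a toy chain of states; the chain, reward scheme, and parameter values are my own illustrative assumptions, not anything from this thread:

```python
# Minimal TD(0) value learning on a toy chain: the agent walks right
# from state 0 to a terminal state, receiving reward 1 only at the end.
# Each step nudges the current state's value estimate toward the
# "bootstrapped" target r + gamma * V[next].

def td0_chain(n_states=5, episodes=2000, alpha=0.1, gamma=0.9):
    V = [0.0] * (n_states + 1)  # V[n_states] is the terminal state, value 0
    for _ in range(episodes):
        s = 0
        while s < n_states:
            s_next = s + 1
            r = 1.0 if s_next == n_states else 0.0
            V[s] += alpha * (r + gamma * V[s_next] - V[s])  # TD(0) update
            s = s_next
    return V

values = td0_chain()
# States closer to the rewarded end converge to higher values
# (roughly gamma ** distance-to-reward).
```

The point is only that "temporal difference learning" names a specific, mathematically characterizable transformation, the kind of thing some neuroscientists have proposed dopamine signaling implements.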

Consciousness and intentionality pose a profound challenge to science, but there is an unconscious mechanistic level of intelligence for which an analysis in terms of algorithms is entirely appropriate; and even the role of consciousness, on a formal and causal level, must admit a similar characterization. Whatever functional role consciousness plays in cognition, it is able to play that role because it has the right cause-and-effect relationship to the unconscious processes. Even conscious cognition must admit an algorithmic or meta-algorithmic description - that is, it must achieve what it does in a particular way.

"The question isn't whether AGI or radical longevity are possible someday, far in the future, but whether there is any rational justification for organizing your life around such expectations today"

Surely it's possible to be irrational about it. But hello, we already share the world with giant distributed AIs which perform pattern matching tasks (search engines) and with organisms that were grown from a blueprint assembled artificially (Venter's microbe). How extreme do things have to get before it finally becomes rational to take it personally?

> So you have two options. Either transhumanists are merely engaging in idle speculation -- some fun brainstorming about what the future could be like -- in which case transhumanism is a futurist fan club, or they are seriously pursuing "transhumanist goals", in which case they are blinded by an irrational hope and faith in the imminent technological transformation of society.

It gets worse than that. Not only is "seriously pursuing 'transhumanist goals'" a case of being "blinded by an irrational hope and faith in the imminent technological transformation of society", it:

1. Creates a fertile field for "gurus" who claim to have the knowledge to lead the way to this technological transformation to arrogate power and publicity for themselves, independently of whether a sober observer would consider their qualifications and achievements deserving of that kind of influence.

2. Tempts the naive and starry-eyed followers of these gurus to abdicate independent thought and judgment, in the name of saving the world (and also thereby saving themselves -- from death, from uncertainty and fear, from meaninglessness, from loneliness, from sheer boredom, whatever).

3. To the extent that the orthodoxies that the guru(s) pump out are taken seriously by followers and promulgated as an ideology among the general public, the **actual science** that may be taking place in the fields "commandeered" by the gurus and their enthusiastic followers, and the understanding of that science among the general public, together with the political processes that fund and allocate resources to scientific research, stand to be distorted and skewed by the ideologically-toned certainties prematurely touted as "rationality" by the gurus. To that extent, the activities of the transhumanists may actually be **counterproductive** to their ostensible goals. Anybody, for example, beginning serious study of a field involving the workings of the human brain and mind has to pass through (and overcome the actively misleading distractions of) a gauntlet of noise churned out nowadays by the transhumanists (among others). Ask Robin Zebrowski!

4. To the extent that the breathless discourse surrounding these things (securing eternal life, facilitating the spread of "superintelligence" throughout the universe, saving the world, etc.) whoops up both irrational hopes and fears, it's an invitation to fanaticism. In the extreme, who cares about harassing, or even knocking off, a few people if the stakes in the success or failure of the movement are transcendental? Worked for the Mormons, works for the Scientologists (both of whose origins had science-fictional overtones, though the former happened in the 1820s before the literary genre had been invented, whereas the latter in the 1950s was explicitly involved with the SF community, much like transhumanism today), works for countless other more penny-ante cults.

5. Guru-led cults **always** turn authoritarian and anti-progressive. And they are ripe for appropriation and manipulation by incumbent interests, as Dale is forever pointing out. Transhumanism is no different in this regard.

Martin: If your argument is about how stimulating ideas (gods, aliens, superintelligent robots, etc.) can skew probability estimates, I agree. Just being fun/scary to think about doesn't make the aliens more likely to land. However the weirdness of the subject doesn't make it *less* likely either.

Mitchell, who "chooses his words carefully": hello, we already share the world with giant distributed AIs.

It has always seemed to me that the primary impact of the pointless and over-eager over-application of the term "intelligence" to that which is not is to render us all ever more insensitive to the richness of experience and actual concomitant demands of the precious beings who are.

AI discourse produces especially in its advocates, but also in the cultures in which its frames and figures become prevalent, nothing short of a kind of widespread artificial imbecillence.

From a related Futurological Brickbat: XXXI. Computer science in its theological guise aims less at the ultimate creation of artificial intelligence than in the ubiquitous imposition of artificial imbecillence.

Superlative Futurological discourses are not just "fun" "scary" idle speculation. Else, transhumanists, singularitarians, techno-immortalists, nano-cornucopiasts, and the rest would admit they are simply a kind of science fiction fandom (perhaps a fandom fixated on that lamest and least demanding genre of science fiction, pop technoscience/futurology) rather than peddle themselves as engaged in techno-transcendentalizing variations of serious science or serious developmental policy discourse.

Superlative Futurology is, of course, an ideological formation with an undeniably theological coloration, an extreme form of the prevailing, blandly fraudulent futurological marketing/promotional discourse that suffuses neoliberal-neoconservative global developmentalism.

It is in its sub(cult)ural organization as a defensive marginal "identity-movement" with tendencies to underqualified pseudo-scientific enthusiasms, pseudo-science peddled to True Believers by guru wannabes that the superlative futurologists are vulnerable around the edges (to be generous) to derisive charges of cultishness.

My "Condensed Critique of Transhumanism" is here if you want reminding of it.

SF: You do have a character in _Rainbows End_ called the Rabbit, or Mysterious Stranger, and it’s hypothesized by some of the characters in the book that it could be an AI, but you never state explicitly whether or not it is.

VV: That’s true.

------------------------

Compared to what readers assume to be meant by "AI" in the context of a Vernor Vinge (or any other SF author's) novel (or for that matter in the context of Dr. Vinge's 1993 intendedly non-fiction "The Coming Technological Singularity: How to Survive in the Post-Human Era" http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html ) or indeed in Mitchell Porter's TRANSHUMANISM AND THE SINGULARITY http://transtopia.tripod.com/semper.html (and I'm willing to bet that the author of that essay is the same "Mitchell" as the one contributing to this comment thread), who remarks in that essay,

"The ability to put atoms where you want to has major consequences. . .These include. . . 'Santa Claus machines' which will make anythingpossible upon request (growing an android or a starship in your backyard. . .)[which could] in turn could lead to abundance and long life for all. . .superhuman artificial intelligence, and dangers worse than nuclear warfare. . .",

a more accurate description of the present might be (echoing "Mitchell"), "Hello, we already share the world with giant distributed adding machines."

This sort of equivocation surrounding the term "artificial intelligence", the alternation ad libitum between (a) the dubious application of the phrase to current technologies -- telephone networks, or digital computers, or even networked digital computers, and (b) the putative non-fictionalizing of science-fictional tropes exemplified by the Vinge and Porter essays, is (in the most generous construal) symptomatic of genuine confusion on the part of those who switch(eroo) the usage in this way, or at worst a deliberate attempt at flim-flam.

It has its roots in the 40s and 50s, when journalists described the early (and monstrous in both size and expense) digital computers (simulations of which, ironically, will fit in the tiniest corner of a modern consumer PC, an appliance which no sane person would consider "intelligent" in the usual sense of the word) as "thinking machines".

The tendency was further lambasted in the 60s when Joseph Weizenbaum wrote "Eliza" to demonstrate how easy it is to bamboozle naive people into attributing "intelligence" to a fairly unsophisticated automaton.
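Weizenbaum's point is easy to reproduce: a handful of pattern-matching rules with canned "reflections" suffices to produce the illusion of a listener. The rules below are my own toy illustration in the Eliza style, not Weizenbaum's actual script:

```python
import re

# A toy Eliza-style responder: match a pattern, echo part of the user's
# own words back inside a canned template, else give a stock deflection.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def eliza(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return "Please go on."

print(eliza("I am worried about thinking machines."))
# → How long have you been worried about thinking machines?
```

There is no model of the conversation, or of anything else, anywhere in this code -- which is exactly what Weizenbaum was demonstrating.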

I notice that no-one has chosen to dispute or otherwise comment on my observation that the human brain gets things done, not just by virtue of being "organismic" (or embodied or fleshy or corporeal), but because its constituent neurons are arranged so as to perform elaborate and highly specific transformations of input to output, which correspond to specific cognitive functions like learning and memory, and which, at the mathematical level of description, fall squarely within the scope of the subfield of theoretical computer science which studies algorithms.

Under other circumstances, I'd be happy to have a freewheeling discussion about the subjective constitution of imputed intentionality in the practice of programming, or the right way to talk about the brain's "computational" properties without losing sight of its physicality, or exactly why it is that consciousness presents a challenge to the usual objectifying approach of natural-scientific ontology.

But however all that works out, and whatever subtle spin on the difference between natural and artificial intelligence best conveys the truth... at a crude and down-to-earth level, it is indisputable that the human brain is full of specialized algorithms, that these do the heavy lifting of cognition, and that such algorithms can execute on digital computers and on networks of digital computers.

That is why you can't handwave away "artificial intelligence" as a conceptual confusion. If you want to insist that the real thing has to involve consciousness and the operation of consciousness, and that this can't occur in digital computers, fine, I might even agree with you. But all that means is that the "artificiality" of AI refers to something a little deeper than the difference between being manufactured and being born. It does not imply any limit on the capacity of machines to emulate and surpass human worldly functionality.

The conceptual confusion mentioned above has to do with the equivocation between the phrase "artificial intelligence" as used in science fiction stories and among transhumanists "entre eux", and the phrase **very** loosely (and misleadingly) applied to, say, what Google does.

> If you want to insist that the real thing has to involve consciousness and the operation of consciousness. . .

Nobody said this, but it does seem likely to me (a nonexpert) that any entity exhibiting anything like "intelligence" in the (admittedly imprecise) usual meaning of the word would also be likely to have "consciousness" (an equally imprecise term, despite the efforts of generations of philosophers) imputed to it, even by sophisticated observers.

But that point is certainly not central to **my** "beef" with the >Hists.

> . . .and that this can't occur in digital computers. . .

It almost certainly can't occur in anything with "Intel Inside", or in IBM's Blue Gene, or in anything else currently on the drawing boards or even in anything remotely on the horizon. It's a difference (from the run-of-the-mill >Hist breathlessness on the topic) that makes a difference.

> . . .fine, I might even agree with you. But all that means is that the "artificiality" of AI refers to something a little deeper than the difference between being manufactured and being born. It does not imply any limit on the capacity of machines to emulate and surpass human worldly functionality.

Ah, now we're talking about "machines" (meaning, presumably,some future technology of an unspecified nature, rather than **digitalcomputers** as we currently know and love them).

A quote from my archive:

"[Are] artifacts designed to have primary consciousness...**necessarily** confined to carbon chemistry and, more specifically,to biochemistry (the organic chemical or chauvinist position)[?]The provisional answer is that, while we cannot completelydismiss a particular material basis for consciousness in theliberal fashion of functionalism, it is probable that there willbe severe (but not unique) constraints on the design of anyartifact that is supposed to acquire conscious behavior. Suchconstraints are likely to exist because there is every indicationthat an intricate, stochastically variant anatomy and synapticchemistry underlie brain function and because consciousness isdefinitely a process based on an immensely intricate and unusualmorphology"

"At every stage of technique since Daedalus orHero of Alexandria, the ability of the artificerto produce a working simulacrum of a livingorganism has always intrigued people. This desireto produce and to study automata has always beenexpressed in terms of the living technique ofthe age. In the days of magic, we have the bizarreand sinister concept of the Golem, that figure ofclay into which the Rabbi of Prague breathed inlife with the blasphemy of the Ineffable Name ofGod. In the time of Newton, the automaton becomesthe clockwork music box, with the little effigiespirouetting stiffly on top. In the nineteenthcentury, the automaton is a glorified heat engine,burning some combustible fuel instead of the glycogenof the human muscles. Finally, the present automatonopens doors by means of photocells, or points gunsto the place at which a radar beam picks up anairplane, or computes the solution of a differentialequation."

-- Norbert Wiener, _Cybernetics_ (1948)

--------------------------------------

> But computers are not just another technology; they are a new paradigm, a new way of thinking about the relationship between humans and nature.

No, actually, they're just another technology.

Comparing minds to computers is a metaphor. In the 18th century, they used to compare human beings to clockwork. That was a metaphor, too.

SF often 'literalizes metaphors'; however, one should avoid doing this in real life as it leads to misunderstanding.

The idea that we are hepped-up computers is strictly fashion. The scientific community has a long and somewhat vain history of picking whatever technological marvels are current to be the model of human consciousness; from clocks to heat engines to cybernetic feedback loops to powerful CPUs. The more historical overview you can achieve of the Western scientific enterprise, the more silly this tendency looks.

> I read enough mind-brain books, that I'd like to hear other people's guidelines for telling the wheat from the chaff.

My guideline is very simple: if you see someone offer a reductive argument purporting to explain the properties of mind, such as consciousness, cognition, and intentionality, in terms of the alleged computational properties of the brain, you may conclude that he is a charlatan or an ignoramus. This conclusion might be justified historically, by observing the earlier attempts to explain the functioning of human mind by reference to the capabilities of the dominant contemporary technology (e.g. clockwork mechanisms, chemistry, steam engines, etc.). . .

Hey, Jim -- you may want to re-post these observations and arguments under the new post I created for Mitchell's comment, since I think this deep down the blog-scroll and Moot few are still reading, but more will likely benefit from your points when they are prominent on a fresh posting.