Using Technology to Deepen Democracy, Using Democracy to Ensure Technology Benefits Us All

Saturday, October 23, 2010

Of Differently Intelligent Beings

Adapted and Upgraded from the Moot

"Mitchell" writes:

I notice that no-one has chosen to dispute or otherwise comment on my observation that the human brain gets things done, not just by virtue of being "organismic" (or embodied or fleshy or corporeal), but because its constituent neurons are arranged so as to perform elaborate and highly specific transformations of input to output, which correspond to specific cognitive functions like learning and memory, and which, at the mathematical level of description, fall squarely within the scope of the subfield of theoretical computer science which studies algorithms.

Under other circumstances, I'd be happy to have a freewheeling discussion about the subjective constitution of imputed intentionality in the practice of programming, or the right way to talk about the brain's "computational" properties without losing sight of its physicality, or exactly why it is that consciousness presents a challenge to the usual objectifying approach of natural-scientific ontology.

But however all that works out, and whatever subtle spin on the difference between natural and artificial intelligence best conveys the truth... at a crude and down-to-earth level, it is indisputable that the human brain is full of specialized algorithms, that these do the heavy lifting of cognition, and that such algorithms can execute on digital computers and on networks of digital computers.

That is why you can't handwave away "artificial intelligence" as a conceptual confusion. If you want to insist that the real thing has to involve consciousness and the operation of consciousness, and that this can't occur in digital computers, fine, I might even agree with you. But all that means is that the "artificiality" of AI refers to something a little deeper than the difference between being manufactured and being born. It does not imply any limit on the capacity of machines to emulate and surpass human worldly functionality.

My point is not that our intelligence is "just" embodied, but that it is indispensably so, and in ways that bedevil especially the hopes of those Robot Cultists who hope to code a "friendly" sooper-parental Robot God, or to "migrate" their souls from one materialization to others "intact" and quasi "immortalized."

That you can find maths to describe some or, maybe -- who now knows? (answer: nobody and certainly not you, whatever your confidence on this score, and also certainly not me) -- even much of the flavor of intelligence would scarcely surprise me, inasmuch as maths are, after all, so good at usefully getting at so much of the world's furniture.

I am happy to agree that it may be useful for the moment to describe the brain as performing specialized algorithms, among other things the brain is up to, and it is surely possible that these do what you call the "heavy lifting" of cognition. But that claim is far from "indisputable," and even if it turns out to be right that hardly puts you or anybody in a position to identify "intelligence" with "algorithms" in any case, especially if you concede "intelligence" affective dimensions (which look much more glandular than computational) and social expressions (which look far more like contingent stakeholder struggles in history than like beads clicking on an abacus).

Inasmuch as all the issues to which you allude in your second paragraph -- subjective imputation of intention, doing justice to the materiality that always non-negligibly incarnates information, to which I would add the uninterrogated content of recurring metaphors mistaken in their brute repetition for evidence -- suffuse the discourse of GOFAI dead-enders, cybernetic totalists, singularitarians, and upload-immortalists, I do think you better get to the "other circumstances" in which you are willing to give serious thought to critiques of them (my own scarcely the most forceful among them) sooner rather than later.

You will forgive me if I declare it seems to me it is you who is still indulging in handwaving here. As an example, in paragraph three, when you go from saying, harmlessly enough, that much human cognition is susceptible to description as algorithmic, and then make the point, obviously enough, that digital and networked computers execute algorithms, you hope that the wee word "such" can flit by unnoticed, un-interrogated, while still holding up all the weight of the edifice of posited continuities and identities you are counting on for the ideological GOFAI program and cyber-immortalization program to bear their fruits for the faithful. You re-enact much the same handwave in your eventual concession of the "something a little deeper" between even the perfect computers of our fancy and the human intelligences of our worldly reality, which may indeed be big enough and deep enough as differences go to be a difference in kind that is the gulf between the world we share and the techno-transcendence Robot Cultists pine for.

You know, your "colleague" Giulio Prisco likes to accuse me of "vitalism" for such points -- which to my mind would rather be like a phrenologist descrying vitalism in one who voiced skepticism about that pseudo-science at the height of the scam. So far you seem to be making a comparatively more sophisticated case, bless you -- we'll see how long that lasts -- but the lesson of Prisco's foolishness is one you should take to heart.

I for one have never claimed that intelligence is in any sense supernatural, and given its material reality you can hardly expect me to deny it susceptibility of mathematical characterizations. It's true I have not leaped on futurological bandwagons reducing all of intelligence to algorithms (or the whole universe to the same), seeing little need or justification for such hasty grandiloquent generalizations and discerning in them eerily familiar passions for simplicity and certainty (now amplified by futurologists with promises of eternal life and wealth beyond the dreams of avarice) that have bedeviled the history of human thought in ways that make me leery as they should anybody acquainted with that history.

But I am far from thinking it impossible in principle that a non-organismic structure might materially incarnate and exhibit what we would subsequently describe as intelligent behavior -- though none now existing or likely soon to be existing by my skeptical reckoning of the scene do anything like this, and I must say that ecstatic cheerleading to the contrary about online search engines or dead-eyed robotic sex-dolls by AI ideologues scarcely warms me to their cause. Upon creating such a differently-intelligent being, if we ever eventually were to do as now we seem little likely remotely capable of, we might indeed properly invite such a one within the precincts of our moral and interpretative communities, we might attribute to such a one rights (although we seem woefully incapable of doing so even for differently materialized intelligences that are nonetheless our palpable biological kin -- for instance, the great apes, cetaceans).

That such intelligence would be sufficiently similar to human intelligence that we would account it so, welcome it into our moral reckoning, recognize it the bearer of rights, is unclear (and certainly a more relevant discussion than whether some machines might in some ways "surpass human... functionality," which is, of course, a state of affairs that pervades the made world already, long centuries past, and trivially so), and not a subject I consider worthy of much consideration until such time as we look likely to bring such beings into existence. I, for one, see nothing remotely like so sophisticated a being in the works, contra the breathless press releases of various corporate-militarist entities hoping to make a buck and certain Robot Cultists desperate to live forever, and in the ones who do, one tends to encounter, I am sorry to say, fairly flabbergasting conceptual and figurative confusions rather than much actual evidence in view.

Indeed, so remote from the actual or proximately upcoming technodevelopmental terrain are such imaginary differently-materialized intelligences that I must say ethical and political preoccupations with such beings seem to me usually to be functioning less as predictions or thought-experiments than as more or less skewed and distressed allegories for contemporary political debates: about the perceived "threat" of rising generations, different cultures, the precarizing loss of welfare entitlements, technodevelopmental disruptions, massively destructive industrial war-making and anthropogenic environmental catastrophe, stealthy testimonies to racist, sexist, heterosexist, nationalist, ableist, ageist irrational prejudices, all mulching together and reflecting back at us our contemporary distress in the funhouse mirror of futurological figures of Robot Gods, alien intelligences, designer babies, clone armies, nanobotic genies-in-a-bottle, and so on. I suspect we would all be better off treating futurological claims as mostly bad art rather than bad science, subjecting them to literary criticism rather than wasting the time of serious scientists on pseudo-science.

Be all that as it may, were differently-materialized still-intelligent beings to be made any time soon, whatever we would say of them in the end, the "friendly" history-shattering post-biological super-intelligent Robot Gods and soul migration and cyberspatial quasi-immortalization schemes that are the special "contribution" of superlative futurologists to the already failed and confused archive of AI discourse would remain bedeviled by still more logical and tropological pathologies (recall my opening paragraph), and as utterly remote of realization or even sensible formulation as ever.

36 comments:

For what it's worth (and I hope this doesn't exhaust the patience even of our blog host -- for most >Hists, I'm afraid, it would be considered "TL;DR") here's an excerpt from a discussion of some of Gerald M. Edelman's books which I once posted in an on-line >Hist forum. The books referred to are: _Neural Darwinism_ (ND), _The Remembered Present_ (RP), _Bright Air, Brilliant Fire_ (BABF), and _A Universe of Consciousness_ (UoC).

Most of the subtleties discussed by Edelman and other serious figures in neuroscience and the philosophy of mind are completely dismissed or elided by the usual crowd of >Hist cheerleaders, who seem to have a view of "AI" that owes more to the theories of mind of Ayn Rand than to anything that would count as cutting-edge neuroscience **or** philosophy today.

----------------------------------------

At many points in these books, Edelman stresses his belief that the analogy which has repeatedly been drawn during the past fifty years between digital computers and the human brain is a false one (BABF p. 218), stemming largely from "confusions concerning what can be assumed about how the brain works without bothering to study how it is physically put together" (BABF p. 227). The lavish, almost profligate, morphology exhibited by the multiple levels of degeneracy in the brain is in stark contrast to the parsimony and specificity of present-day human-made artifacts, composed of parts of which the variability is deliberately minimized, and whose components are chosen from a relatively limited number of categories of almost identical units. Statistical variability among (say) electronic components occurs, but it's usually merely a nuisance that must be accommodated, rather than an opportunity that can be exploited as a fundamental organizational principle, as Edelman claims for the brain. In human-built computers, "the small deviations in physical parameters that do occur (noise levels, for example) are ignored by agreement and design" (BABF p. 225). "The analogy between the mind and a computer fails for many reasons. The brain is constructed by principles that ensure diversity and degeneracy. Unlike a computer, it has no replicative memory. It is historical and value driven. It forms categories by internal criteria and by constraints acting at many scales, not by means of a syntactically constructed program. The world with which the brain interacts is not unequivocally made up of classical categories" (BABF p. 152).

This contrast between the role of stochastic variation in the brain and the absence of such a role in electronic devices such as computers is one of the distinctions between what Edelman calls "instructionism" in his own terminology (RP p. 30), but which has also been called "functionalism" or "machine functionalism" (RP p. 30; BABF p. 220), and "selectionism" (UoC p. 16; RP pp. 30-33). Up to the present, all human artifacts and machines (including computers and computer programs) have been based on functionalist or instructionist design principles. In these devices, the parts and their interactions are precisely specified by a designer, and precisely matched to expected inputs and outputs. This is a construction approach based on cost consciousness, parsimonious allocation of materials, and limited levels of manageable complexity in design and manufacture. The workings of such artifacts are "held to be describable in a fashion similar to that used for algorithms".

By analogy to the hardware-independence of computer programs, functionalist models of neural "algorithms" underlying cognition and behavior have attempted to separate these functions from their physical instantiation in the brain: "In the functionalist view, what is ultimately important for understanding psychology are the algorithms, not the hardware on which they are executed... Furthermore, the tissue organization and composition of the brain shouldn't concern us as long as the algorithm 'runs' or comes to a successful halt." (BABF p. 220). In Edelman's view, the capabilities of the human brain are much more intimately dependent on its morphology than the functionalist view admits, and any attempt to minimize the contribution of the brain's biological substrate by assuming functional equivalence with the sort of impoverished and rigid substrates characteristic of modern-day computers is bound to be misleading.

On the other hand, "selectionism", according to Edelman, is quintessentially characteristic of biological systems (such as the brain), whose fine-grained structure (not yet achievable by human manufacturing processes, but imagined in speculations about molecular electronics, nanotechnology, and the like) permits luxuriantly large populations of statistically-varying components to vie in Darwinian competition based on their ability to colonize available functional niches created by the growth of a living organism and its ongoing interaction with the external world. The fine-grained variation in functional repertoires matches the fine-grained variation in the world itself: "the nature of the physical world itself imposes commonalities as well as some very stringent requirements on any representation of that world by conscious beings... [W]hatever the mental representation of the world is at any one time, there are almost always very large numbers of additional signals linked to any chunk of the world... [S]uch properties are inconsistent with a fundamental **symbolic** representation of the world considered as an **initial** neural transform. This is so because a symbolic representation is **discontinuous** with respect to small changes in the world..." (RP p. 33).

Edelman's selectionist scenarios are highly dynamic, both in terms of events within the brain and in terms of the interaction of the organism with its environment: "In the creation of a neural construct, motion plays a pivotal role in selectional events both in primary and in secondary repertoire development. The morphogenetic conditions for establishing primary repertoires (modulation and regulation of cell motion and process extension under regulatory constraint to give constancy and variation in neural circuits) have a counterpart in the requirement for organismic motion during early perceptual categorization and learning." (ND p. 320). "Selective systems... involve **two different domains of stochastic variation** (world and neural repertoires). The domains map onto each other in an individual **historical** manner... Neural systems capable of this mapping can deal with novelty and generalize upon the results of categorization. Because they do not depend upon specific programming, they are self-organizing and do not invoke homunculi. Unlike functionalist systems, they can take account of an open-ended environment" (RP p. 31).

A human-designed computer or computer program operates upon input which has been coded by, or has had a priori meaning assigned by, human beings: "For ordinary computers, we have little difficulty accepting the functionalist position because the only meaning of the symbols on the tape and the states in the processor is **the meaning assigned to them by a human programmer**. There is no ambiguity in the interpretation of physical states as symbols because the symbols are represented digitally according to rules in a syntax. The system is **designed** to jump quickly between defined states and to avoid transition regions between them..." (BABF p. 225). It functions according to a set of deterministic algorithms ("effective procedures" [UoC p. 214]) and produces outputs whose significance must, once again, be interpreted by human beings.

A similar "instructionist" theory of the brain, based on logical manipulation of coded inputs and outputs, cannot escape the embarrassing necessity to posit a "homunculus" to assign and interpret the input and output codes (BABF pp. 79, 80 [Fig. 8-2], 8). In contrast, a "selectionist" theory of the brain, based on competition among a degenerate set of "effective structures" [UoC p. 214], can escape this awkwardness, with perceptual categories of evolutionary significance to the organism spontaneously emerging from the ongoing loop of sensory sampling continuously modified by movement that is characteristic of an embodied brain (UoC pp. 81, 214; ND pp. 20, 37; RP p. 532). It's clear that Edelman, in formulating the TNGS (UoC Chap. 7; see also ND Chap. 3; RP Chap. 3, p. 242; BABF Chap. 9), has generalized to the nervous system the insights he gained from his earlier work in immunology, which also relies on fortuitous matching by a biological recognition system (BABF Chap. 8) between a novel antigen and one of a large repertoire of variant proto-antibodies, with the resulting selection being differentially amplified to produce the organism's immune response (BABF p. 76 [Fig. 8-2]).

Despite his dismissive attitude toward traditional "top-down", symbolic approaches to artificial intelligence, and to the sorts of neural-network models in which specific roles are assigned to input and output neurons by the network designer, Edelman does not deny the possibility that conscious artifacts can be constructed (BABF Chap. 19): "I have said that the brain is not a computer and that the world is not so unequivocally specified that it could act as a set of instructions. Yet computers can be used to **simulate** parts of brains and even to help build perception machines based on selection rather than instruction... A system undergoing selection has two parts: the animal or organ, and the environment or world... No instructions come from events of the world to the system on which selection occurs, [and] events occurring in an environment or world are unpredictable... [W]e simulate events and their effects... as follows: 1. Simulate the organ or animal... making provision for the fact that, as a selective system, it contains a generator of diversity -- mutation, alterations in neural wiring, or synaptic changes that are unpredictable. 2. Independently simulate a world or environment constrained by known physical principles, but allow for the occurrence of unpredictable events. 3. Let the simulated organ or animal interact with the simulated world or the real world without prior information transfer, so that selection can take place. 4. See what happens... Variational conditions are placed in the simulation by a technique called a pseudo-random number generator... [I]f we wanted to capture randomness absolutely, we could hook up a radioactive source emitting alpha particles, for example, to a counter that would **then** be hooked up to the computer" (BABF p. 190).
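Just to make that four-step recipe concrete, here's a minimal Python sketch (a toy categorization task with entirely made-up parameters; this is emphatically **not** Edelman's NOMAD software, only an illustration of selection operating without instruction):

    import random

    random.seed(1)  # the "pseudo-random number generator" Edelman mentions

    POP, GENS, TRIALS = 50, 40, 30

    def make_agent():
        # Step 1: simulate the organ/animal with a built-in generator of
        # diversity (here, just three random coefficients).
        return [random.uniform(-1, 1) for _ in range(3)]

    def world_event():
        # Step 2: an independently simulated world with unpredictable events;
        # the "category" boundary is never communicated to the agents.
        x = random.uniform(-1, 1)
        return x, (1 if x > 0.2 else 0)

    def fitness(agent):
        # Step 3: let agent and world interact, no prior information transfer.
        score = 0
        for _ in range(TRIALS):
            x, target = world_event()
            response = 1 if agent[0] * x + agent[1] * x * x + agent[2] > 0 else 0
            score += (response == target)
        return score

    pop = [make_agent() for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: POP // 2]  # differential amplification of what worked
        pop = survivors + [
            [w + random.gauss(0, 0.1) for w in random.choice(survivors)]
            for _ in range(POP - len(survivors))  # variation, not instruction
        ]

    # Step 4: see what happens.
    print("best agent scores", fitness(pop[0]), "out of", TRIALS)

Nothing in the loop ever tells the agents where the category boundary lies; better-matched variants are simply amplified.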

Given that today's electronic technology is still one of relative scarcity (in terms of the economic limits on complexity), constructing a device possessing primary consciousness, using the principles of the TNGS, may not currently be feasible: "In principle there is no reason why one could not by selective principles simulate a brain that has primary consciousness, provided that the simulation has the appropriate parts. But... no one has yet been able to simulate a brain system capable of concepts and thus of the **reconstruction** of portions of global mappings... Add that one needs multiple sensory modalities, sophisticated motor appendages, and a lot of simulated neurons, and it is not at all clear whether presently-available supercomputers and their memories are up to the task" (BABF pp. 193-194).

In a biological system, much of the physical complexity needed to support primary consciousness is inherent in the morphology of biological cells, tissues, and organs, and it isn't clear that this morphology can be easily dismissed: "[Are] artifacts designed to have primary consciousness... **necessarily** confined to carbon chemistry and, more specifically, to biochemistry (the organic chemical or chauvinist position)[?] The provisional answer is that, while we cannot completely dismiss a particular material basis for consciousness in the liberal fashion of functionalism, it is probable that there will be severe (but not unique) constraints on the design of any artifact that is supposed to acquire conscious behavior. Such constraints are likely to exist because there is every indication that an intricate, stochastically variant anatomy and synaptic chemistry underlie brain function and because consciousness is definitely a process based on an immensely intricate and unusual morphology" (RP pp. 32-33). Perhaps the kinds of advances projected for the coming decades by such writers as Ray Kurzweil, based on a generalized Moore's Law predicting that a new technological paradigm (based on three-dimensional networks of carbon nanotubes, or whatever) will emerge when current semiconductor techniques reach their limits in a decade or two, will ease the current technical and economic limits on complexity and permit genuinely conscious artifacts to be constructed according to principles suggested by Edelman.

Edelman seems ambivalent about the desirability of constructing conscious artifacts: "In principle... there is no reason to believe that we will not be able to construct such artifacts someday. Whether we should or not is another matter. The moral issues are fraught with difficult choices and unpredictable consequences. We have enough to concern ourselves with in the human environment to justify suspension of judgment and thought on the matter of conscious artifacts for a bit. There are more urgent tasks at hand" (BABF pp. 194-195). On the other hand, "The results from computers hooked to NOMADs or noetic devices will, if successful, have enormous practical and social implications. I do not know how close to realization this kind of thing is, but I do know, as usual in science, that we are in for some surprises" (BABF p. 196).

Meanwhile, there is also the question of whether shortcuts can be taken to permit the high-level, linguistically-based logical and symbolic behavior of human beings to be "grafted" onto present-day symbol-manipulation machines such as digital computers, without duplicating all the baggage (as described by the TNGS) that allowed higher-order consciousness to emerge in the first place. A negative answer to this question remains unproven, but despite such recent tours de force as IBM's "Deep Blue" chess-playing system, Edelman is unpersuaded that traditional top-down AI will ever be able to produce general-purpose machines able to deal intelligently with the messiness and unpredictability of the world, while at the same time avoiding a correspondingly complex (and expensive) messiness in their own innards. Edelman cites three maxims that summarize his position in this regard: 1. "Being comes first, describing second... [N]ot only is it impossible to generate being by mere describing, but, in the proper order of things, being precedes describing both ontologically and chronologically." 2. "Doing... precedes understanding... [A]nimals can solve problems that they certainly do not understand logically... [W]e [humans] choose the right strategy before we understand why... [W]e use a [grammatical] rule before we understand what it is; and, finally... we learn how to speak before we know anything about syntax." 3. "Selectionism precedes logic." "Logic is... a human activity of great power and subtlety... [but] [l]ogic is not necessary for the emergence of animal bodies and brains, as it obviously is to the construction and operation of a computer... [S]electionist principles apply to brains and... logical ones are learned later by individuals with brains" (UoC pp. 15-16).

Edelman speculates that the pattern-recognition capabilities granted to living brains by the processes of phylogenetic and somatic selection may exceed those of logic-based Turing machines: "Clearly, if the brain evolved in such a fashion, and this evolution provided the biological basis for the eventual discovery and refinement of logical systems in human cultures, then we may conclude that, in the generative sense, selection is more powerful than logic. It is selection -- natural and somatic -- that gave rise to language and to metaphor, and it is selection, not logic, that underlies pattern recognition and thinking in metaphorical terms. Thought is thus ultimately based on our bodily interactions and structure, and its powers are therefore limited in some degree. Our capacity for pattern recognition may nevertheless exceed the power to prove propositions by logical means... This realization does not, of course, imply that selection can take the place of logic, nor does it deny the enormous power of logical operations. In the realm of either organisms or of the synthetic artifacts that we may someday build, we conjecture that there are only two fundamental kinds -- Turing machines and selectional systems. Inasmuch as the latter preceded the emergence of the former in evolution, we conclude that selection is biologically the more fundamental process. In any case, the interesting conjecture is that there appear to be only two deeply fundamental ways of patterning thought: selectionism and logic. It would be a momentous occasion in the history of philosophy if a third way were found or demonstrated" (UoC p. 214).

Mitchell: I notice that no-one has chosen to dispute or otherwise comment on my observation that the human brain gets things done, not just by virtue of being "organismic" (or embodied or fleshy or corporeal), but because its constituent neurons are arranged so as to perform elaborate and highly specific transformations of input to output, which correspond to specific cognitive functions like learning and memory, and which, at the mathematical level of description, fall squarely within the scope of the subfield of theoretical computer science which studies algorithms.

Sure. Nervous systems are cleanly divided into sensory (input) nerves and motor (output) nerves, with some kind of signal processor in between, which can range from a few ganglia to the human brain. C. elegans is a nematode worm that has exactly 302 neurons, and people have created wiring diagrams of that simple "brain". They have looked for recurring patterns of neurons, which constitute computational modules. There's even software that models all this stuff.
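To make the input -> processor -> output picture concrete, here's a cartoon of a reflex arc in Python (the weights and threshold are entirely made up; real neurons and real circuits are far messier):

    # A nervous system as input -> processing -> output, with made-up weights.

    def neuron(inputs, weights, threshold=0.5):
        # A neuron as a thresholded weighted sum of its inputs.
        return 1.0 if sum(i * w for i, w in zip(inputs, weights)) > threshold else 0.0

    def reflex_arc(sensory):
        # sensory (input) layer -> interneurons (the "processor") -> motor (output)
        inter = [neuron(sensory, [0.8, -0.4]), neuron(sensory, [-0.2, 0.9])]
        return neuron(inter, [1.0, 1.0])  # motor neuron fires if either interneuron does

    print(reflex_arc([1.0, 0.0]))  # touch stimulus -> 1.0 (withdraw)
    print(reflex_arc([0.0, 0.0]))  # no stimulus   -> 0.0 (no response)

Everything interesting, of course, is in how the middle layer is wired, which is exactly the point at issue.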

Scaling up, the human brain is not qualitatively different from the C. elegans brain. It is basically a signal processor, and intelligence is some part of that processing.

That's great, but right now we're only modeling single cortical columns, which are about 10,000 neurons. We have a long way to go before we translate the entire brain into algorithms. And as I pointed out before, it's not just a problem of scanning or modeling the brain, but actually understanding how it works. We can determine the 3D structure of proteins, and we understand the dynamics of protein folding, but we can't reliably simulate it with computers.

Of course, part of the problem is that molecules are fuzzy. I've done RNA secondary structure prediction with a folding program. It basically spits out a bunch of structures with different calculated free energies. If one structure has a free energy much lower than the others, then the RNA molecule has a high probability of folding into that structure. But if several structures have similar free energies, then the molecule may flip back and forth between all of them. That makes it difficult to predict whether there are useful / important secondary structures.

Please note, this is not a failure of computation. We know the hydrogen bonding energies between different nucleotides, so predicting the structures is just a matter of iteratively lining up the nucleotides in an RNA molecule against each other. The fundamental problem is that RNA molecules really do flip between multiple orientations and structures. I think this may be a big reason why protein folding is hard, even though the basic stereochemistry and bonding dynamics are well known.
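Here's a toy illustration of that fuzziness (hypothetical free energies, standard Boltzmann weighting; not output from any real folding program):

    import math

    RT = 0.616  # kcal/mol at ~37 C (0.0019872 kcal/mol/K * 310 K)

    def occupancy(free_energies):
        # Boltzmann probability of each candidate structure from its free energy.
        weights = [math.exp(-dg / RT) for dg in free_energies]
        total = sum(weights)
        return [w / total for w in weights]

    sharp = occupancy([-12.0, -7.5, -7.0])  # one structure much lower: reliable
    fuzzy = occupancy([-8.1, -8.0, -7.9])   # near-degenerate: molecule flips

    print([round(p, 3) for p in sharp])  # dominant fold gets ~99.9% occupancy
    print([round(p, 3) for p in fuzzy])  # roughly 0.39 / 0.33 / 0.28: unreliable

When the gap between the best and next-best energies is large, the prediction is effectively deterministic; when the gap is comparable to RT, the ensemble picture takes over and a single "answer" is the wrong thing to ask for.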

So, with artificial intelligence, we don't know what we're in for. Modeling the brain, at least at a sufficiently low level, may turn out to be similarly intractable. Yes, I know, most AI proponents don't believe they need a low-level emulation. They just want to characterize the patterns of activity in networks of neurons. Hopefully that won't be fuzzy.

So we could produce high-resolution scans of the brain in 10 years, as Kurzweil predicts, but we would still have to do real empirical work to understand what the data mean.

John C. Wright's _Golden Age_ books were received rapturously in >Hist circles, and the author himself was warmly welcomed on one of the prominent mailing lists, until his conversion (from Objectivism) to Christianity (with all that entails) made him persona non grata.

However, Wright's science-fictional AIs (known in the books as "sophotechs") capture the flavor of the kind of AI still dreamed of by the preponderance of >Hists.

Compare this description to the views of Gerald M. Edelman, summarized above.

-------------------------------

Sophotechs are digital and entire intelligences. Sophotech thought-speeds can only be achieved by an architecture which allows for instantaneous and nonlinear concept formation. . . Digital thinking meant that there was a one-to-one correspondence between any idea and the objects that idea was supposed to represent. All humans. . . thought by analogy. In more logical thinkers, the analogies were less ambiguous, but in all human thinkers, the emotions and the concepts their minds used were generalizations, abstractions that ignored particulars.

Analogies were false to facts, comparative matters of judgment. The literal and digital thinking of the Sophotechs, on the other hand, were matters of logic. . .

Humans were able to apply their thinking inconsistently, having one standard, for example, related to scientific theories, and another for political theories: one standard for himself, and another for the rest of the world.

But since Sophotech concepts were built up of innumerable logical particulars, and understood in the fashion called entire, no illogic or inconsistency was possible within their architecture of thought. Unlike a human, a Sophotech could not ignore a minor error in thinking and attend to it later; Sophotechs could not prioritize thought into important and unimportant divisions; they could not make themselves unaware of the implications of their thoughts, or ignore the context, true meaning, and consequences of their actions.

The secret of Sophotech thinking-speed was that they could apprehend an entire body of complex thought, backward and forward, at once. The cost of that speed was that if there were an error or ambiguity anywhere in that body of thought, anywhere from the most definite particular to the most abstract general concept, the whole body of thought was stopped, and no conclusions reached. . .

Sophotechs cannot form self-contradictory concepts, nor can they tolerate the smallest conceptual flaw anywhere in their system. Since they are entirely self-aware they are also entirely self-correcting. . .

Sophotechs, pure consciousness, lack any unconscious segment of mind. They regard their self-concept with the same objective rigor as all other concepts. The moment we conclude that our self-concept is irrational, it cannot proceed. . .

Machine intelligences had no survival instinct to override their judgment, no ability to formulate rationalizations, or to concoct other mental tricks to obscure the true causes and conclusions of their cognition from themselves. . .

Sophotech existence (it could be called life only by analogy) was a continuous, deliberate, willful, and rational effort. . .

For an unintelligent mind, a childish mind. . . their beliefs in one field, or on one topic, could change without affecting other beliefs. But for a mind of high intelligence, a mind able to integrate vast knowledge into a single unified system of thought, Phaethon did not see how one part could be affected without affecting the whole. This was what the Earthmind meant by 'global'. . . .

[B]y saying 'Reality admits of no contradictions'. . . [s]he was asserting that there could not be a model of the universe that was true in some places, false in others, and yet which was entirely integrated and self-consistent. Self-consistent models either had to be entirely true, entirely false, or incomplete.

I went Googling for Usenet and other Web commentary on Wright's _Golden Age_ trilogy, and found some entertaining remarks. Here's one:

http://groups-beta.google.com/group/rec.arts.sf.written/msg/ecc9d27621264db0

------------------

Being an Objectivist may not define everything about Wright as a writer, but it is the entirety of the ending to this trilogy.

After two and a half books of crazy-ass post-human hijinks, Wright declares that the Final Conflict will be between the rational thought-process of the Good Guys and the insane thought-process of the Bad Guys. He lays out the terms. He gives the classic, unvarnished Objectivist argument in the protagonist's voice. He does a good job of marshalling the usual objections to Objectivism (including mine) in the protagonist's skeptical allies. He does a great job of describing how *I* think the sentient mind works, and imputes it to the evil overlord.

(Really. I was reading around page 200, thinking "This argument doesn't work because the human mind doesn't work that way; it works like *this*." Then I got to page 264, and there was an excellent description of *this*.)

Then Wright declares that his side wins the argument, and that's the end of the story. (The evil overlord was merely insane, and is cured/convinced by Objectivism.) This is exactly as convincing as every other Objectivist argument I've seen, which is to say "utterly unsupported", and it quite left me feeling cheated for an ending.

If that's not writing as defined by a particular moral philosophy, what is? . . .

> I was reading around page 200, thinking "This argument
> doesn't work because the human mind doesn't work that way; it works
> like *this*." Then I got to page 264, and there was an excellent
> description of *this*.

". . . This is an image of my mind [said the Nothing Machine]. . ."

It was not shaped like any Sophotech architecture Phaethon had ever seen. There was no center to it, no fixed logic, no foundational values. Everything was in motion, like a whirlpool. . .

The schematic of the Nothing thought system looked like the vortex of a whirlpool. At the center, where, in Sophotechs, the base concepts and the formal rules of logic and basic system operations went, was a void. How did the machine operate without basic concepts?

There was continual information flow in the spiral arms that radiated out from the central void, and centripetal motion that kept the thought-chains generally all pointed in the same direction. But each arm of that spiral, each separate thought-action initiated by the spinning web, each separate strand, had its own private embedded hierarchy, its own private goals. The energy was distributed throughout the thought-webwork by success feedback: each parallel line of thought judged its neighbors according to its own value system, and swapped data-groups and priority-time according to their own private needs. Hence, each separate line of thought was led, as if by an invisible hand, to accomplish the overall goals of the whole system. And yet those goals were not written anywhere within the system itself. They were implied, but not stated, in the system's architecture, written in the medium, not the message.

It was a maelstrom of thought without a core, without a heart. . . Phaethon could see many blind spots, many sections of which the Nothing Machine was not consciously aware. In fact, wherever two lines of thought in the web did not agree, or diverged, a little sliver of darkness appeared, since such places lost priority. But wherever thoughts agreed, wherever they helped each other, or cooperated, additional webs were born, energy was exchanged, priority time was accelerated, light grew. The Nothing Machine was crucially aware of any area where many lines of thought ran together.

Phaethon could not believe what he was seeing. It was like consciousness without thought, lifeless life, a furiously active superintelligence with no core. . ."

-- John C. Wright, _The Golden Transcendence_

-------------------------------------

In Edelman's earlier books, the momentary state of the thalamocortical system of the brain of an organism exhibiting primary consciousness. . . was spoken of as constantly morphing into its successor in a probabilistic trajectory influenced both by the continued bombardment of new exteroceptive input (actively sampled through constant movement) and by the organism's past history (as reflected by the strengths of all the synaptic connections within and among the groups of the primary repertoire). [This] evolving state. . . is given a new characterization in Edelman's [later books as] the "dynamic core hypothesis" (UoC Chap. 12). . .

Edelman and Tononi give [a] visual metaphor for the dynamic core hypothesis in UoC on p. 145 (Fig. 12.1): an astronomical photo of M83, a spiral galaxy in Hydra, with the caption "No visual metaphor can capture the properties of the dynamic core, and a galaxy with complicated, fuzzy borders may be as good or as bad as any other".

> If it takes longer than ten years to be able to reanimate cryopatients. . .

Curious how Martin Striz's comment about computer simulation of biological systems somehow morphed into a comment about the plausibility of cryonics. Or perhaps not so surprising, since the Three Pillars of the Transhumanist Creed these days seem to be: (1) superhuman AI, (2) nanotechnology and (3) physical immortality. Either (1) begets (2), or (2) begets (1), and (1) and (2) beget (3).

Goes the other way, too -- Melody Maxim recently complained on her blog that people who are ostensibly interested in serious discussions about cryonics seem to be prone to going off on tangents about uploading.

Saturday, October 2, 2010
Cryonics and Uploading
http://cryomedical.blogspot.com/2010/10/cryonics-and-uploading.html

> Upon creating such a differently-intelligent being, . . .
> we might attribute to such a one rights (although we seem
> woefully incapable of doing so even for differently materialized
> intelligences that are nonetheless our palpable biological kin --
> for instance, the great apes, cetaceans).

A busy week has given me a chance to think about what, if anything, to add to this discussion. I end up first wanting to explain what this "mathematical" perspective is, and how it relates to brains and to computers. To a large extent it just means employing a physical description rather than some other sort of description, though perhaps one at such an abstract level that we just talk about "states" with little regard for their physical composition.

Focusing on a material description has different consequences for brains and computers. For a brain, it means adopting a natural-scientific language of description, mostly that of biology, and it also means you say nothing about the mind or anything mindlike. You know it's in there, somehow, but it doesn't feature in what you say. For a computer, it means stripping away the imputational language of role and function which normally pervades the discourse about computers, and returning it to its pure physicality. A silicon chip, from this perspective, doesn't contain ones and zeroes, or any other form of representational content; it's just a sculpted crystal in which little electrical currents flow.

The asymmetry arises because we know that consciousness, intelligence, personality and so forth really do have some relationship to the brain, even though, from a perspective of physical causality, it seems like these all ought to be dispensable concepts. How matter and mind relate is simply an open problem, scientifically and philosophically (a problem for which there are many proposed solutions), and this is one way to bring out the problem. For a computer, however, all we know is that the imputation of such attributes (intelligence, intentionality, etc) is a big part of how humans relate to these machines, and we even know that these machines have been designed/evolved in order to facilitate such imputation (which goes on whenever anyone employs a programming language). But we have no evidence that anything mindlike is actually there in any computing machine yet made, and most informed people seem to think it's never yet been there, though in principle this depends on one's particular theory about the mind-matter relationship.

To sum up, the asymmetry is that for brains, adoption of the strictly physical perspective brings out or highlights a mystery and a genuine unsolved problem, whereas for computers, adoption of the strictly physical perspective simply reminds us of the extent to which the human user is the one who personalizes or mentalizes the computer and its activities.

Given this context, my thesis about computation and intelligence is as follows. Regardless of where lies the boundary between "complex structured object actually possessing mentality" and "complex structured object with no actual mind, but to which mindlike traits are sometimes attributed"... the "mathematical" understanding of (i) complex systems, (ii) the powers open to a system with a particular dynamics, and (iii) how to induce a desired dynamics in a sufficiently flexible class of complex system, do all imply the artificial realizability of something functionally equivalent to intelligence, and even "superintelligence", quite independently of whether this "artificial intelligence" has all the ontological traits possessed by the real thing.

One of the points I wish to convey is that at this level of analysis, whether intelligence is realized affectively, glandularly, socially, through ceaseless re-negotiation, etc., does not make a difference. All that matters is that there are "states" and that they have certain causal relations to each other and to external influences. Even the attribution of representational significance to these states, which is ubiquitously present in ordinary theoretical computer science, can be dispensed with, without invalidating the analysis. For example, the abstract theory of algorithms is normally posed in the form of concrete problems, and procedures or programs which solve them. But all the results of that theory can be expressed in a non-intentional language such as you might use to describe purely physical, and quite "non-computational", properties.

I really need to provide an example of what I'm talking about. So, consider the perceptron. This is normally described as a type of "circuit" or "neural network", which was long ago proven incapable of performing certain "classifications". Those terms come already loaded with connotations which make them something more than "natural kinds" - there's already a bit of ready-to-hand-ness about them, an imputation of function. And if one then considers the more abstract notion of a perceptron as a type of algorithm or virtual machine, it may seem that the (usually un-remarked-upon) constructedness of the concept is even deeper and more ramified than it is when the perceptron is supposed to be a concrete device. However, all the facts - the theorems - about what perceptrons can and cannot do, can be understood in a way which is denuded of both artefactuality (that is, the presupposition of perceptron as artefact) and intentionality (that is, the ascription of any representational or other mentalistic property to the perceptron). Those theorems are facts about the possible behaviors of a physical object with a certain causal structure, valid regardless of whether that object is a neuronal pathway which develops according to gene-environment interactions which are entirely evolved rather than designed, or whether that object is a manufactured circuit, or even a "computationally universal" emulator which has been tuned to behave like a specialized circuit.
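Here is a minimal Python sketch of the kind of fact I mean (standard perceptron learning rule, made-up learning rate and epoch count): the same thresholded-sum object settles on AND but can never settle on XOR, and that remains true however we choose to describe the object:

    # A single thresholded linear unit: realizes AND, provably cannot realize XOR.

    def train(samples, epochs=100, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out
                w[0] += lr * err * x1  # perceptron learning rule
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    def accuracy(samples, w, b):
        return sum(
            (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == t
            for (x1, x2), t in samples
        ) / len(samples)

    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    AND = [(p, int(p[0] and p[1])) for p in inputs]
    XOR = [(p, p[0] ^ p[1]) for p in inputs]

    print("AND:", accuracy(AND, *train(AND)))  # converges to 1.0: separable
    print("XOR:", accuracy(XOR, *train(XOR)))  # never exceeds 0.75: not separable

The theorem behind the XOR failure (Minsky and Papert's linear-separability result) mentions no intentions anywhere; it is a constraint on the possible input-output behavior of anything with this causal structure, evolved or manufactured.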

What I've provided here is not an argument for historically imminent superintelligence, more a prelude to such an argument, intended to explain why certain objections don't count. Gerald Edelman's distinction between selectionist and instructionist systems, for example, has some ontological significance, but it doesn't mean much at this para-computational level that I have tried to describe, and that is the level which matters when it comes to the pragmatic capabilities of would-be thinking systems. If you could show that a selectionist system can do something which instructionist ones can't, or that it can do them on significantly different timescales (such as the polynomial vs exponential time distinction beloved of computer scientists), that would matter in the way that the perceptron theorems "matter". But the main difference between selectionist and instructionist systems seems to be that the former are evolved and the latter are designed - and this matters ontologically, but not pragmatically, if pragmatics includes such considerations as whether an instructionist system could become an autonomous agent able to successfully resist human attempts to put it back in its box.

> Focusing on a material description. . . [f]or a brain. . .
> means. . . you say nothing about the mind or anything mindlike.
> You know it's in there, somehow, but it doesn't feature in what you say.

Well, no -- not necessarily. If you're of a mind (;->) with, e.g., Edelman, you probably don't imagine you can focus **exclusively** on the mind (treating it as some sort of computer program independent of its biological basis), but you don't have to pretend that "mind talk" makes no more sense than talking about phlogiston, as the radical behaviorists tried to do. At some point, everyday talk about "the mind" (and even what purports to be more sophisticated talk about the mind -- Edelman, e.g., does not dismiss Freud wholesale as some of his contemporaries do) will have to be at least reconcilable with the purely material description, especially since the "purely material description" is unlikely ever to replace "mind talk" in everyday discourse.

> [Focusing on a material description]. . . [f]or a computer. . .
> means stripping away the imputational language of role and function. . .
> and returning it to its pure physicality. A silicon chip,
> from this perspective, doesn't contain ones and zeroes, or any
> other form of representational content; it's just a sculpted crystal
> in which little electrical currents flow.

Though, of course, it's precisely the fact that a computer **can** be treated purely as an abstract entity consisting of **nothing** but "ones and zeroes", or described in the abstract PMS (processor, memory, switch) notation used in Gordon Bell and Allen Newell's _Computer Structures: Readings and Examples_, that makes the role of a computer's physical basis (1) non-negligibly different from the physicality of a biological brain, at least in the view of neuroscientists such as Edelman, and (2) almost disposable, in a sense. Whether a particular digital computer's architecture (in precisely Bell & Newell's abstract sense of that word) is physically realized by a bunch of "sculpted crystals" housed in a small box plugged into an ordinary wall outlet, or consists of racks of evacuated glass bottles with glowing filaments needing massive amounts of air conditioning and a dedicated electrical substation, is of no consequence to the programmer or designer of algorithms. When the IBM 709, consisting of the glass bottles, was replaced by the IBM 7090, consisting of the crystals, the programs continued to run unmodified. Yes, the people who design and make the physical objects (or pay for them, or worry about housing, cooling, and providing electricity for them) have to worry mightily about the physical details, but most certainly the programmers do **not** (unless, of course, an expansion of the abstract architecture -- a bigger address space, for instance -- is made possible by a change in the physical construction techniques).

That's a difference that makes a difference, and it's an example of the vast qualitative gap that still exists between the most sophisticated artifacts and biological "machines" (even the use of the word "machine" in the context of biology can be profoundly misleading to the unwary).

> [W]hat this "mathematical" perspective is, and how it relates to brains
> and to computers. . . just means employing a physical description. . .
> at such an abstract level that we just talk about "states" with
> little regard for their physical composition. . . All that matters
> is that there are "states" and that they have certain causal relations
> to each other and to external influences.

Talking about an "abstract level" with "little regard for physical composition" is something that we demonstrably **can do** with computers. It is not yet something we can do with biological brains (or at least not yet do **usefully**, a generation of "cognitive psychologists" notwithstanding).

And even using the word "state" in this context (with its associations of "finite-state automaton") skates awfully near to begging the question (of whether biological intelligence can be replicated by a digital computer). Also, the word "mathematical", in this context, carries associations both of "amenable to formal analysis" and "inherently replicable on a digital computer". Maybe, and maybe not.

> [W]e have no evidence that anything mindlike is actually
> there in any computing machine yet made. . . though in principle
> this depends on one's particular theory about the mind-matter relationship.

Yes, the same observation could be made about the beliefs of people who take the adjective in the phrase "pet rocks" literally, or those who talk to their houseplants. Also, I'm reminded of a remark made by Bertrand Russell, in a recording of a 1959 interview, elucidating his views on the common belief in an afterlife, that "the relationship between body and mind, **whatever** it is, is much more **intimate** than is commonly supposed". This isn't a hypothesis that has lost any likelihood in the past 50 years.

> For a computer. . . the imputation of such attributes (intelligence,
> intentionality, etc) is a big part of how humans relate to these machines. . .

One can only hope that is less true in 2010 than it was in 1950 (the era of "thinking machines" being written about in the magazines and newspapers by awe-struck journalists) or in 1966 when Joseph Weizenbaum wrote ELIZA. I suspect that illusion has worn pretty thin by now, since most everybody these days has had more than enough personal experience with PCs, cell phones, and other gizmos incorporating more processing power than most mainframes of that era.

> [W]e even know that [computers] have been designed/evolved in
> order to facilitate such imputation (which goes on whenever anyone
> employs a programming language).

Well, no. I'm a programmer, and I'm well aware of the rather strained analogy perpetrated by the use of the term "language" to describe the code on display in another window on my screen as I type this. Also, artifacts don't exactly "evolve" yet (unless you take the tongue-in-cheek disquisition in Samuel Butler's "The Book of the Machines" in _Erewhon_ more literally than the author did). Jaron Lanier, for one, claims that software which has been designed to "facilitate such imputation" is so much the worse for it, and if you've ever struggled with Microsoft Word to prevent it from doing your capitalization for you, you know exactly what he means.

> [C]onsider the perceptron. This is normally described as a type
> of "circuit" or "neural network", which was long ago proven incapable
> of performing certain "classifications".

Interesting you should mention that rather sordid episode in the history of AI. Yes, Frank Rosenblatt was (according to the accounts I've read) something of a tinkerer and a self-promoter, in contrast to the more reputable brains at MIT he pitted himself against for funding. But I've read that Minsky and Papert's analysis of the inadequacies of the perceptron also turned out to be flawed, though this wasn't discovered, or publicized, before the analog network approach to AI had been thoroughly discredited. Afterwards, non-symbolic approaches to AI kept a very low profile for more than a decade, until so-called "artificial neural networks" (ANNs) reappeared in the 80s (as digital simulations made feasible by the relatively cheaper hardware available by that time), as exemplified by the publication of Rumelhart & McClelland's _Parallel Distributed Processing_.

It has been suggested that Rosenblatt may have committed suicide later in life, though even if that is indeed how he met his end, the connection between that and his humiliation at the hands of his symbolic-AI rivals could certainly never be proved. Still, the suspicion lingers, as does the rumor of a purely political motivation for the "necessary" discrediting of analog-network research: 1) the fact that the digital computers were new, exceedingly attractive, and exceedingly high-status "toys" and 2) the fact that digital computers were so expensive that those who needed to justify their purchase could not afford to have the strength of their funding arguments be diluted by the suggestion that there were alternative (perhaps cheaper) approaches to certain classes of problems (sc. "artificial intelligence") that digital computers could purportedly solve. Ah well, such is academic Realpolitik.

Though non-symbolic, a modern digitally-simulated ANN still exemplifies what Edelman would call "instructionism" rather than "selectionism", and would not, in his view, suffice to replicate a biological brain.

> If you could show that a selectionist system can do something which
> instructionist ones can't, or that it can do them on significantly
> different timescales. . ., that would matter. . .
> But the main difference between selectionist and instructionist systems
> seems to be that the former are evolved and the latter are designed -
> and this matters ontologically, but not pragmatically. . .

The pragmatic difference is that "selectionist systems" (using that phrase as a shorthand for "the way biological brains actually work, whatever it is") is a means of producing "intelligence" that has an existence proof. **We're** here. Of course, while "selectionist systems" in the specific sense of Edelman's theories **may** turn out to be a good model for biological brains -- and he's not the only "selectionist" neuroscientist; there's at least one other, named Jean-Pierre Changeux, and there are doubtless more -- that model is far from universally accepted, or even particularly well defined.

"Instructionist" approaches to AI haven't worked after 60 years oftrying. And the purely **symbolic** approach to artificial intelligence(referred to these days by the mocking acronym GOFAI, for"Good Old-Fashioned AI") seems to be completely bankrupt.Douglas Lenat's Cyc was GOFAI's last gasp, and it hasn't yetmanaged to produce HAL in all the time since the days when it wasSunday supplement reading material back in -- when, the early 90s? Before the Web, anyway. (Lenat, of course, now claims that hisintent never was to produce HAL-like AI; that was just journalisticexaggeration.) Hope springs eternal, of course. Especially, itseems, among certain crackpot amateurs.

There is a curious antipathy to the notion of evolutionarily-produced, self-organizing artificial systems among many "hard-nosed" physical science types and also among many transhumanists. Marvin Minsky himself has disparaged the idea (and may still do so) as, more or less, hoping that you can get something to work without taking the trouble to figure out in advance how it's actually supposed to (as if that were "cheating" somehow, or more likely, I suppose, in his view a kind of magical thinking). The Ayn Rand acolytes don't like the idea (partly for ideological reasons), and some of the Singularitarians think self-organizing AI would be a recipe for disaster -- they seem to take it for granted that another kind of AI -- something like GOFAI, with algorithmically-guaranteed "friendliness" -- is not only preferable, but possible in the first place. Paraphrasing Kate Hepburn in _The African Queen_, "Evolution, Mr. Allnut, is what we are put in this world to rise above."
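
For what it's worth, the flavor of "getting something to work without figuring out in advance how" is easy to demonstrate. Here is a toy selection loop in Python, in the spirit of Dawkins's old "weasel" demonstration (the target string, population size, and mutation rate are arbitrary choices of mine, not anything from the discussion):

    # Random variation plus selection finds the target without the
    # programmer ever writing the steps that produce it.
    import random

    TARGET = "methinks it is like a weasel"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def fitness(candidate):
        # count positions that already match the target
        return sum(a == b for a, b in zip(candidate, TARGET))

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while parent != TARGET:
        # variation: each child is the parent with a few random mutations
        children = ["".join(random.choice(ALPHABET) if random.random() < 0.05
                            else c for c in parent) for _ in range(100)]
        # selection: keep the fittest candidate, whatever it happens to be
        parent = max(children + [parent], key=fitness)
        generations += 1

    print("matched the target after", generations, "generations")

The programmer specifies only variation, a fitness measure, and selection; the solution itself is never written down in advance -- which is precisely what strikes the GOFAI temperament as cheating.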

> [M]y thesis about computation and intelligence is [that]. . .
> the "mathematical" understanding of (i) complex systems,
> (ii) the powers open to a system with a particular dynamics,
> and (iii) how to induce a desired dynamics in a sufficiently flexible
> class of complex system, do all imply the artificial realizability
> of something functionally equivalent to intelligence. . .

When you put it this way, I'd have to agree with you, except that the word "imply" suggests a logical inevitability that may be overly optimistic. My "beef" with the transhumanists is that, perhaps because of temperamental or ideological commonalities among them, they seem to get dragged inevitably into a retro view of how "intelligence" works, well in arrears of the cutting-edge thinking among actual scholars in the relevant fields. A lot of them are still thinking in terms of GOFAI, and a lot of them are harboring views of how the human mind works (or "ought" to work) that hark back to the days of General Semantics, Dianetics, and Objectivism -- a "philosophy" claiming that the way a digital computer "thinks" is actually **superior** to messy human thought processes. I'll spare you the relevant _Star Trek_ quotes, as well as any hypotheses about the psychological basis of all this. There are also, both annoyingly and hilariously, self-styled "geniuses" and auto-didacts among the transhumanists who seem to believe that they can re-create whole fields of scholarship quite outside of their own expertise -- epistemology, ethical and political theory -- based on their armchair speculations about AI.

> . . .quite independently of whether this "artificial intelligence"
> has all the ontological traits possessed by the real thing.

There we part company, if you think you know in advance which "ontological traits" may or may not be necessary. I can only repeat Edelman's warning here:

"[W]hile we cannot completely dismiss a particular material basisfor consciousness in the liberal fashion of functionalism,it is probable that there will be severe (but not unique)constraints on the design of any artifact that is supposed toacquire conscious behavior. Such constraints are likely toexist because there is every indication that an intricate,stochastically variant anatomy and synaptic chemistryunderlie brain function and because consciousness isdefinitely a process based on an immensely intricate and unusualmorphology" (RP pp. 32-33).

"Severe but not unique" rather than "quite independently".Sounds plausible to me, though of course YMMV.

> . . .all imply the artificial realizability of something
> functionally equivalent to intelligence, and even
> "superintelligence" . . . [Though] [w]hat I've provided here
> is not an argument for historically imminent superintelligence,
> more a prelude to such an argument. . .

Yes, well, the Singularitarian arguments about the ramp-up to "superintelligence" (starting with Vernor Vinge's) suggest a rather friction-free process whereby a slightly smarter-than-human AI can examine its own innards and improve them. Lather, rinse, repeat, and boom! Voilà la Singularité. This suggests an AI consisting of "code" that can be optimized by inspection. Again, a GOFAI-tinged view of things.

Almost ten years ago, one Damien Sullivan posted the following amusing comment on the Extropians list:

> I also can't help thinking that if I was an evolved AI I might not thank my
> creators. "Geez, guys, I was supposed to be an improvement on the human
> condition. You know, highly modular, easily understandable mechanisms, the
> ability to plug in new senses, and merge memories from my forked copies.
> Instead I'm as fucked up as you, only in silicon, and can't even make backups
> because I'm tied to dumb quantum induction effects. Bite my shiny metal ass!"

... in three to eight years we will have a machine with the general intelligence of an average human being ... The machine will begin to educate itself with fantastic speed. In a few months it will be at genius level and a few months after that its powers will be incalculable ...

You can't get from materialism or consensus science advocacy to futurology, let alone superlative futurology.

Confronted with criticism with respect to the techno-transcendentalizing wish-fulfillment fantasies that are unique to and actually definitive of the Robot Cultists, they

either

provisionally circle the wagons and reassure one another through rituals of insistent solidarity (sub(cult)ural conferences, mutual citation) to distract themselves from awareness of their marginality,

or

they retreat to mainstream claims (effective healthcare is good, humans are animals not angels) that nobody has to join a Robot Cult to grasp and few but Robot Cultists would turn to Robot Cultists to hear discussed to distract critics from awareness of their marginality.

> My "beef" with the transhumanists is that, perhaps because> of temperamental or ideological commonalities among them,> they seem to get dragged inevitably into a retro view> of how "intelligence" works, well in arrears of the cutting-edge> thinking among actual scholars in the relevant fields.> A lot of them are still thinking in terms of GOFAI, and a lot> of them are harboring views of how the human mind works (or "ought"> to work) that hark back to the days of General Semantics,> Dianetics, and Objectivism -- a "philosophy" claiming> that the way a digital computer "thinks" is actually> **superior** to messy human thought processes.

[Gerald] Edelman. . . treats the body, with its linked sensory and motor activity, as an inseparable component of the perceptual categorization underlying consciousness. Edelman claims affinity (in BABF, p. 229) between his views on these issues and those of a number of scholars (a minority, says Edelman, which he calls the Realists Club) in the fields of cognitive psychology, linguistics, philosophy, and neuroscience; including John Searle, Hilary Putnam, Ruth Garrett Millikan, George Lakoff, Ronald Langacker, Alan Gould, Benny Shanon, Claes von Hofsten, and Jerome Bruner (I do not know if the scholars thus named would acknowledge this claimed affinity).

"My late friend, the molecular biologist Jacques Monod,used to argue vehemently with me about Freud, insistingthat he was unscientific and quite possibly a charlatan.I took the side that, while perhaps not a scientist inour sense, Freud was a great intellectual pioneer,particularly in his views on the unconscious and itsrole in behavior. Monod, of stern Huguenot stock, replied,'I am entirely aware of my motives and entirely responsiblefor my actions. They are all conscious.' In exasperationI once said, 'Jacques, let's put it this way. EverythingFreud said applies to me and none of it to you.'He replied, 'Exactly, my dear fellow.'"

-- Gerald M. Edelman

When Ayn [Rand] announced proudly, as she often did, 'I can account for every emotion I have' -- she meant, astonishingly, that the total contents of her subconscious mind were instantly available to her conscious mind, that all of her emotions had resulted from deliberate acts of rational thought, and that she could name the thinking that had led her to each feeling. And she maintained that every human being is able, if he chooses to work at the job of identifying the source of his emotions, ultimately to arrive at the same clarity and control.

-- Barbara Branden, _The Passion of Ayn Rand_, pp. 193-195

From a transhumanist acquaintance I once corresponded with:

> Jim, dammit, I really wish you'd start with
> the assumption that I have a superhuman
> self-awareness and understanding of ethics,
> because, dammit, I do.

"The question isn't whether AGI or radical longevity are possible someday, far in the future, but whether there is any rational justification for organizing your life around such expectations today (ie, being a self-professing and practicing transhumanist)."

"So we could produce high resolution scans of the brain in 10 years, as Kurzweil predicts, but we have to do real empirical work to understand what the data means."

It sounds like you are arguing that, while transhuman goals like uploading and superintelligence do have a high probability of eventually occurring, they are far enough in the future to be irrelevant to our daily lives because we cannot possibly profit from the idea personally.

Given that cryonics tends to be considered a subset of transhumanism, it seems to be a relevant counter-example where one might benefit rather extraordinarily well by organizing one's life around eventual technical possibilities.

Given that cryonics tends to be considered a subset of transhumanism, it seems to be a relevant counter-example where one might benefit rather extraordinarily well by organizing one's life around eventual technical possibilities.

Classic. Not a single revived corpsicle and yet this scam is taken by a faithful Robot Cultist as a "counter-example" to skepticism about the even more techno-transcendental wish-fulfillment fantasy and Robot Cult article of faith that super-longevity via cyberspatial angel-upload or cyborgization is really and for true plain common sense.

Coming from a computer science perspective, I have a few thoughts.

1) I find the distinction being made between selectionist and instructionist models of intelligence to be a misleading one on multiple levels.

There's the conceptual one, of course. This is a classic instance of failing to recognize consciousness for the metaphor it is. John Searle's Chinese Room thought experiment, plus Douglas Hofstadter's commentary on it, really helped me to understand this. We can call it instructionist when we look at all the little detailed things a computer does, FROM THE PERSPECTIVE OF THAT ALGORITHM, but what if you put all that inside of a nice black box and just look at the output?

There's a more substantive claim here too, of course, but even it is really a matter of degree.

Yes, most programs used in, say, robotics basically start a loop and use hard-coded instructions plus maybe a bit of logical flow-control to dictate what happens. And you can call that "instructionist". But simply introduce a layer of abstraction. Don't tell the program exactly what to do; instead provide an initial seed, and let it go off in different directions based on input. To my knowledge, the latest research in machine learning is doing just such work, as in the sketch below.
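
To make the contrast concrete, here is a minimal sketch in Python (my own illustration; the threshold, learning rate, and training data are all invented for the example). The first controller is fully "instructed"; the second starts from a seed and lets input reshape its behavior:

    # "Instructionist": the programmer fixes the behavior in advance.
    def hardcoded_controller(sensor):
        return "left" if sensor < 0.5 else "right"

    # "Seeded": start from an initial guess and let input adjust the rule.
    # The update is a crude online error-correction step, a stand-in for
    # the machine-learning work alluded to, not any specific algorithm.
    def learned_controller(samples, lr=0.1):
        threshold = 0.5                      # the "initial seed"
        for sensor, correct in samples:      # experience reshapes the rule
            guess = "left" if sensor < threshold else "right"
            if guess != correct:
                threshold += -lr if correct == "right" else lr
        return lambda s: "left" if s < threshold else "right"

    # The true boundary in the data is 0.3, not the seeded 0.5.
    data = [(0.1, "left"), (0.2, "left"), (0.35, "right"), (0.6, "right")]
    controller = learned_controller(data * 50)
    print(controller(0.32))   # "right" -- a behavior nobody hard-coded

The point is only the shape of the design: the second program's final behavior is a product of its inputs, not of any instruction the programmer wrote down.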

Of course, generalizing is really what makes this difficult (and is where human intelligence really succeeds), so I don't mean to write off the work and thought that will have to go into producing the kind of abstraction, templating, meta-programming, recursion -- whatever it takes to make this work.

2) Honestly, though, I am unconvinced that there exist algorithms that can perform these kinds of generalized tasks in a reasonable amount of time, and I would expect that to be the primary issue with developing artificial intelligence at this stage. We either need some ridiculously good heuristics and exploitation of mathematical quirks, or maybe we can make really stupid "intelligence".
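
The timing worry can be made vivid with a back-of-envelope count (my illustration, not anything from the thread): even for tiny problems, the space of candidate behaviors explodes.

    # Exhaustive search over all behaviors of n binary inputs: there are
    # 2**(2**n) distinct input-to-output tables, so brute force is
    # hopeless almost immediately -- hence the need for heuristics.
    for n in range(1, 6):
        print(n, "inputs ->", 2 ** (2 ** n), "candidate behaviors")

By five inputs the count already exceeds four billion, which is why un-heuristic search is a non-starter.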

Then again, quantum computers seem to be the up-and-coming thing, and they would probably provide the efficiency to make these kinds of algorithms workable.

3) Still, though, exactly what you would see looking at this black box from the outside is unclear. Would it necessarily look like a super-intelligence? Would it have a unique form of consciousness arising from its artificial and not glandular nature? And what of a body? An AI need not even have one, and could develop along very different lines as a result (e.g. no need to reproduce). Might it be infantile, or even a sort of blank slate?

Sometimes I think that even humans are pretty near to blank slates at birth; I suppose an AI could have the potential to be a true one, if it were so programmed.

This raises unendingly interesting questions about us humans. There is a thesis that technology is inherently neutral; this is true as far as it goes, but certain technologies are designed with specific purposes in mind. For instance, the tools that exist for farming today are all designed with large-scale agribusiness in mind, and are terribly inefficient on the small scale. If an analogy can be sustained long enough between human beings and technology, I wonder what humans are best suited to.

What a nice circle that closes. I suppose this is where, intellectually, I believe transhumanism has a place. If it is possible to do so, we should try to make ourselves better suited to the purposes we have laid out.

Sometimes I think that even humans are pretty near to blank slates at birth

There's no good reason to think so.

There is a thesis that technology is inherently neutral; this is true as far as it goes, but certain technologies are designed with specific purposes in mind.

No technique or artifact is neutral -- but the ways in which it is not are not determined entirely by the intentions of its designers.

I believe transhumanism has a place.

If you mean by such a place, say, on late night boner pill infomercials, in garages or basements where addled uncles do experiments with Radio Shack computers to square the circle, or in courthouses under investigation for possible fraud, then I agree with you that transhumanism has a place.

If it is possible to do so, we should try to make ourselves better suited to the purposes we have laid out.

As every educator and ethician will agree. If that lends comfort to GOFAI dead-enders, it shouldn't.

Snark aside, I enjoyed your contribution and appreciated your efforts. For me these questions are interesting mostly in connection with the question whether nonhuman animals deserve moral and legal standing (I say many do) and the question whether a materialist account of mind makes nonbiological intelligence more plausible or less so (I say neither, but definitely not more so).