Comments on amor mundi: Singularitarian Agony

Dale Carrico (2013-10-12 15:21):

Many years have passed since this exchange occurred and I am forced to confess, here at the end of history in Techno-Heaven in my prosthetically barnacled comic book model-hott sooper-bod next to the sexbot orgy pit and my god-plated nano-poop pile under the loving beneficent ministration of the post-parental Robot God, I was so wrong to doubt the sooper-brained sooper-scientists of the Robot Cult.

Mark Thompson (2007-11-05 11:06):

Does this have any relevance to the discussion, re: conscious machines?

EU project for autonomous artificial systems
[Date: 2007-08-27]

Scientists in Spain have developed the first artificial cerebellum for robotics.

The project will demonstrate how a naïve system can bootstrap its cognitive development by constructing generalizations and discovering abstractions with which it can conceptualize its environment and its own self.

The overall goal is to incorporate the cerebellum into a robot designed by the German Aerospace Centre in two years' time.

The four-year project, dubbed SENSOPAC (SENSOrimotor structuring of perception and action for emerging cognition), is funded by the EU under its Sixth Framework Programme (FP6) and brings together physicists, neuroscientists and electronic engineers from leading universities in Europe.
The scientists at the University of Granada are focusing on the design of microchips that incorporate a full neuronal system, emulating the way the cerebellum interacts with the human nervous system.

The SENSOPAC project will combine machine learning techniques and modelling of biological systems to develop a machine capable of abstracting cognitive notions from sensorimotor relationships during interactions with its environment, and of generalising this knowledge to novel situations.

Through active sensing and exploratory actions the machine will discover the sensorimotor relationships, and consequently learn the intrinsic structure of its interactions with the world and unravel predictive and causal relationships. Together with action policy formulation and decision making, this will underlie the machine's abilities to create abstractions, to suggest and test hypotheses, and to develop self-awareness.

The continuous developmental approach will combine self-supervised and reinforcement learning with motivational drives to form a truly autonomous artificial system.

Throughout the project, continuous interactions between experimentalists, theoreticians, engineers and roboticists will take place in order to coordinate the most rigorous development and testing of a complete artificial cognitive system.

The overall aims of the SENSOPAC project are to:
• Develop real-time neuromorphic and computing platforms for cognitive robotics
• Develop methodologies to investigate cognition in the brain
• Build a physical system for haptic cognition
• Improve our understanding of the neurobiological substrate for action-perception systems
• Understand the sensorimotor foundation of perception and cognition

SENSOPAC is funded under the EU Framework 6 IST Cognitive Systems Initiative. It runs for four years from January 1st, 2006, and its 12 participants come from 9 different countries.
Project Team

Dr. Patrick van der Smagt
Bionics Group
Institute of Robotics and Mechatronics
German Aerospace Center
P.O. Box 1116
82230 Wessling, Germany

Tasks:
» Speaker of the scientific board
» Developing an artificial robotic skin in SENSOPAC
» Developing a robotic antagonistic hand-arm system in SENSOPAC
» Responsible for WP3 and WP6

Dr. Eduardo Ros
Department of Computer Architecture and Technology
ETSI Informatica, University of Granada
E-18071
Spain

Tasks:
» Bio-inspired circuit implementation, reconfigurable hardware (FPGA), neuromorphic engineering, spiking neuron computation, computer vision, neural networks, real-time processing and embedded systems

Prof. C.I. de Zeeuw
Department of Neuroscience
Erasmus MC
Dr. Molewaterplein 50
3015 GE Rotterdam
P.O. Box 2040
3000 CA Rotterdam
The Netherlands

Tasks:
» Consortium leader
» Expertise in cerebellar physiology, anatomy and molecular biology

Dr. Sethu Vijayakumar
Institute of Perception, Action & Behavior
University of Edinburgh
JCMB 2107F, The King's Buildings, Mayfield Road
Edinburgh EH9 3JZ

Tasks:
» Member of the scientific board
» Basic research in the areas of statistical machine learning, motor control, supervised learning in connectionist models and computational neuroscience

Dr. Angelo Arleo
Laboratory of Neurobiology of Adaptive Processes
Department of Life Science
CNRS - University Pierre & Marie Curie
Box 14, 9 quai St. Bernard, 75005 Paris, France

Tasks:
» Neural encoding/decoding of haptic data
» Neural information processing/transfer at the granular layer of the cerebellum

Dr. Michael Arnold
Altjira SA
Via Cattedrale 9
6900 Lugano
Switzerland

Tasks:
» Altjira provides solutions for the modelling of large and complex systems and the embedding of these models into real-world applications

Egidio D'Angelo
Dipartimento di Scienze Fisiologiche Cellulari e Molecolari
Sezione di Fisiologia Generale e Biofisica Cellulare
Università di Pavia
Via Forlanini 6
I-27100 Pavia, Italy

Tasks:
» Member of the scientific board
» The department is involved in teaching courses in Physiology, Biophysics and Neurobiology in the Faculty of Sciences and coordinates the Master's Degree in Neuroscience at the University of Pavia

Dana Cohen, PhD
The Gonda Interdisciplinary Brain Research Center, Room #410
Bar Ilan University
Ramat Gan, 52900 Israel

Tasks:
» Chronic multielectrode single-unit recordings in behaving rodents, sensorimotor learning

Prof. Carl-Fredrik Ekerot
Dept. of Experimental Medical Science
Lund University
BMC F10
S-22184 Lund, Sweden

Tasks:
» Electrophysiological investigations of the cerebellar neuronal circuits in vivo

Marc Geddes (2007-10-08 01:48):

Dale and all readers of this blog, it may be of future interest to take note of the claims of the Singularitarians below and my counter-claims.
_____________________________________________________________________

All claims were as at 08 Oct, 2007.

* Singularitarian claim: General intelligence without consciousness is possible
* Marc Geddes claim: General intelligence without consciousness is impossible

* Singularitarian claim: The existence of RPOPs (powerful optimization processes) such as corporations proves that non-sentient general intelligence is possible
* Marc Geddes claim: The existence of RPOPs such as corporations only proves that narrow (non-general) non-sentient intelligence is possible

* Singularitarian claim: There is no objective morality
* Marc Geddes claim: There *is* an objective morality

* Singularitarian claim: The ultimate basis of morality is Volition (Liberty)
* Marc Geddes claim: The ultimate basis of morality is Aesthetics (Beauty)

* Singularitarian claim: Bayesian induction is the ultimate basis of reasoning
* Marc Geddes claim: Reflective Possibility Theory is the ultimate basis of reasoning
(http://en.wikipedia.org/wiki/Possibility_theory)

* Singularitarian claim: Reasoning is ultimately grounded in probabilities and causal relations
* Marc Geddes claim: Reasoning is ultimately grounded in possibilities and ontological archetypes

* Singularitarian claim: True explanations are based on patterns -- predicting what will happen next
* Marc Geddes claim: True explanations are based on knowledge integration -- the translation from one modelling language (means of knowledge representation) into a different modelling language

* Singularitarian claim: Reductive materialism is true. Physical properties are all that exist and all mental concepts are human fictions, reducible to physical facts.
* Marc Geddes claim: Reductive materialism is false.
Whilst it's true that physical substances are the base level, non-material properties exist, and mental concepts (whilst composed of physical things) have objective existence over and above the physical and are not reducible solely to physical facts (property dualism).

* Singularitarian claim: Infinite sets don't exist
* Marc Geddes claim: Infinite sets do exist

Quite a few specific claims with a clear difference between their claims and mine, wouldn't you say? Remember the claims, dear readers, and ultimately either they (the Singularitarians) or I will be proven to be none too bright ;)

Marc Geddes (2007-10-08 00:30):

As you've said, Dale, for people who are supposed to be 'super-duper-ultra-uber' geniuses etc., these guys seem to be 'remarkably dim' in some areas.

How can anyone take Peter Voss (a well-known Singularitarian) seriously after learning that he got his insights about epistemology from Ayn Rand? (Seriously, he says so on his web-site.) What an absolute joke!

Eliezer Yudkowsky (chief Singularitarian guru) was supposedly an AI researcher from 1996, yet he has stated (roughly paraphrasing):

'I didn't know about Bayesian reasoning until 2000'

and (when I questioned him about Gödel and a puzzle with Gödel reflection in 2004 or thereabouts) E. Yudkowsky replied to me as follows:

'Oh I haven't done mathematical logic yet'

It's really clear from his comments about math on SL4 that Yudkowsky was clueless about the nature of mathematics.

Look at my MCRT Domain Model at the link here, for instance:

http://groups.google.com/group/everything-list/web/mcrt-domain-model-eternity

The boxes down the right-hand side of my diagram represent math knowledge and contain the key insight: that computer programming is really a branch of mathematics, and ontology/dp modelling languages are the true 'languages of logic'.

Again, it's blatantly clear that Yudkowsky was clueless about these 'mission critical' insights as recently as 2004.

The Singularitarians are most likely horribly mistaken about several of their key contentions -- namely the idea that you can have real general intelligence without consciousness.

Need I say more? The list of weird gaps in knowledge, unfounded assumptions, and very basic errors and omissions displayed by 'Singularitarians' goes on and on and on.

Methinks some of them need to go back to school (their chief is, after all, a high-school drop-out).

Cheers

Dale Carrico (2007-10-07 22:31):

I'll give this to Michael.
He may stubbornly resist persuasion by my stunning arguments, but at least he gets a lot of the jokes.

Michael Anissimov (2007-10-07 14:55):

Just wanted to say that I thought the title of this post was funny!

jfehlinger (2007-10-07 10:28):

Aleksei Riikonen, habitué of SL4 and self-appointed Defender of the Faith on the WTA-talk list and elsewhere, wrote (to Dale):

> I'm not the one making confident predictions here, *you* are.

Dale is confidently predicting that the singularitarians' confident predictions will probably turn out to be wrong. Such negative predictions are among the few for whose confidence there exists a sound basis in historical evidence.

Or, as Bertrand Russell put it:

"This world is one in which certainty is not ascertainable. If you think you've achieved certainty, you're almost certainly mistaken. That's one of the few things you can be certain about."

> You are confident enough of Brain Emulation being unable to produce a
> human-equivalent cognitive system, that you label as a Robot Cult
> those of us who think that such a development is a possibility that
> can't be comfortably ruled out.

Now here's a very interesting switcheroo. Suddenly we're talking about "Brain Emulation" -- whatever the hell that is.
What **I'm** taking it to mean is "simulation, using a digital computer, of the physical aspects of biological nervous systems that make them able to do what they do."

In other words, simulating by computer the "unusual morphology" that Gerald Edelman refers to in the following passage:

"[Are] artifacts designed to have primary consciousness... **necessarily** confined to carbon chemistry and, more specifically, to biochemistry (the organic chemical or chauvinist position)[?] The provisional answer is that, while we cannot completely dismiss a particular material basis for consciousness in the liberal fashion of functionalism, it is probable that there will be severe (but not unique) constraints on the design of any artifact that is supposed to acquire conscious behavior. Such constraints are likely to exist because there is every indication that an intricate, stochastically variant anatomy and synaptic chemistry underlie brain function and because consciousness is definitely a process based on an immensely intricate and unusual morphology" (_The Remembered Present_, pp. 32-33).

Now, this is indeed one of the few plausible approaches to AI using digital computers, IMHO, **assuming** that there's enough "room at the bottom" (as R. P. Feynman once put it) to ever make digital systems capable of enough number crunching to simulate all that biochemistry and biophysics (even in real time, let alone thousands or millions of times faster than real time).

However, given the dependence of biological brains on "emergent" phenomena, those singularitarians determined to "guarantee" (as in make a watertight mathematical case for -- but watertight to whom, one wonders) Friendliness (TM) have always taken an extremely dim view of what I'm taking to be what you mean by "Brain Emulation", as exemplified by Michael Wilson in an overheated post on SL4 from April, 2004:

"To my knowledge Eliezer Yudkowsky is the only person that has tackled these issues head on and actually made progress in producing engineering solutions (I've done some very limited original work on low-level Friendliness structure). Note that Friendliness is a class of advanced cognitive engineering; not science, not philosophy. We still don't know that these problems are actually solvable, but recent progress has been encouraging and we literally have nothing to loose by trying [unintentional ha-ha -- JF]. I sincerely hope that we can solve these problems, stop Ben Goertzel and his army of evil clones (I mean emergence-advocating AI researchers :) and engineer the apothesis. The universe doesn't care about hope though, so I will spend the rest of my life doing everything I can to make Friendly AI a reality. Once you /see/, once you have even an inkling of understanding the issues involved, you realise that one way or another these are the Final Days of the human era and if you want yourself or anything else you care about to survive you'd better get off your ass and start helping.
The only escapes from the inexorable logic of the Singularity are death, insanity and transcendence."

Another problem with "Brain Emulation" was pithily summed up by Damien Sullivan on the Extropians' list back in 2001:

> I also can't help thinking that if I was an evolved AI I might not thank my
> creators. "Geez, guys, I was supposed to be an improvement on the human
> condition. You know, highly modular, easily understandable mechanisms, the
> ability to plug in new senses, and merge memories from my forked copies.
> Instead I'm as fucked up as you, only in silicon, and can't even make backups
> because I'm tied to dumb quantum induction effects. Bite my shiny metal ass!"

In other words, the slippery positive-feedback loop of "recursive self-improvement" via an AI examining and improving its own "code" might not be so well-oiled, after all. In fact, it might be full of sand.

> If you are ever able to move past your apparent need to ridicule as
> "Robot Cultishness" the questioning of some of your assumptions, let
> me know.

"He is a man with tens of thousands of blind followers.
It is my business to make some of those blind followers see."

-- Abraham Lincoln on the covertly proslavery, and amoral, Stephen Douglas

(This is an epigraph from a book I bought yesterday -- _Evil Genes: Why Rome Fell, Hitler Rose, Enron Failed, and My Sister Stole My Mother's Boyfriend_ by Barbara Oakley. A book not unrelated to the thread of this discussion.)
http://www.amazon.com/Evil-Genes-Hitler-Mothers-Boyfriend/dp/159102580X

BTW, here's an interesting passage I came across while rooting through my e-mail archives, written by a very smart guy who used to participate on the Extropians' list (but whose name you would almost certainly not recognize):

"[Singularitarians] and friends, because their ideology is mostly centered on producing a bad cross between philosophical maundering and hints in the direction of scientific hypotheses in the style of Dickens's Pickwick club, all dogmatically and messianically sold as a not-for-profit venture to save the world, I'm not too worried about their likely malign influence. Alas, they'll probably turn out to be a tarpit for a few young minds. Oh well. They should know better, but they don't seem to. They have their myths; they are entering the stage of rapid self-delusion. Their worldviews should be completely impervious to outside influence in another few years.

I for one am not wasting another ounce of effort on investigating any of their ideas until reliable third parties with a reputation for sangfroid tell me they've done or thought of something interesting. I don't consider this outside the realm of possibility, but all the signs are bad.
I think they're mostly good-hearted kids with a rare combination of too much of a love for moral philosophy and too much imagination ("look at me, I'm doing groundbreaking science that will save the world, because I believe I am!"), and the altogether too common combination of too much self-assurance and too little formal discipline or training.

Give me a highly-disciplined, well-read, methodical, steady amoralist any day, when it comes to seeing things clearly."

Dale Carrico (2007-10-07 10:05):

Aleksei, I fear, is growing annoyed with me:

> I'm not the one making confident predictions here, *you* are. You are
> confident enough of Brain Emulation being unable to produce a
> human-equivalent cognitive system, that you label as a Robot Cult those
> of us who think that such a development is a possibility that can't be
> comfortably ruled out.

Nonsense. My point is that you have jumped the gun. You have just made a handful of facile leaps that lead you to think what you call emulation will spit out a Robot God Brain, and then, once the leap is made, you think all that is left is to calculate the Robot God Odds as to how many years it will take to get to the Tootsie-Roll Center of the Tootsie-Pop. I'm neither confident nor unconfident about timescales -- I'm just confident that your confidence is flabbergastingly unwarranted.

And I'm afraid I simply must call bullshit on your oh-so-reasonable characterization of Singularitarians as "those of us who think that such a development is a possibility that can't be comfortably ruled out," because that characterization would make *me* a Singularitarian.
What actually makes one a Singularitarian is clear upon even a cursory survey of the actual published, readily available (too bad for you, cultists) discussions, which suggest rather forcefully that you take these "possibilities" as near certainties, and certainly as urgencies, while your topical emphases, your policy priorities, and your assessments of the concerns of your contemporary peers immediately, obviously, and damningly reveal the truth of the matter.

> I haven't even talked about "consciousness" at all. For all I know, a
> brain emulation might perform cognitive processing as a non-conscious
> entity.

I'm glad to hear it. Take out the entitative dimension of AI, however, and all the risks and powers you're talking about become far too conventional to justify the way Singularitarians keep casting about for monster-movie metaphors about a space race between the evil or clueless teams who might create Unfriendly AGI and the heroic Singularitarians who will beat them by creating Friendly AGI first (and I shudder to think what a sociopath will regard as Friendly on this score). Take the entity out, and you've just got recursive malware runaway, something like a big bulldozer on autopilot that you have to stop before it slams into the helpless village or whatnot. None of the Singularitarian handwaving or secret handshakes or "SL4, dude!" self-congratulation of the sub(cult)ure is much in point anymore.

The cult vanishes and you're just talking about software security issues like everybody else. Just like puncturing the Superlativity of the Technological Immortalists leaves you talking about healthcare like everybody else. Just like puncturing the Superlativity of the Nanosantalogists leaves you talking about free software, regulating toxicity at the nanoscale, and widening welfare entitlements just like everybody else.
Drop the transcendentalizing, hyperbolizing discourse and suddenly you're in the world here and now with your peers, facing the demands of democratizing ongoing and proximately upcoming technodevelopmental social struggle.

Just like I've been saying over and over and over again. You can't be technoprogressive and Superlative at the same time -- but technoprogressive discourse won't feed your ego, won't give you a special identity, won't promise you transcendence, won't bolster your elitism or narcissism, and won't readily facilitate a retro-futural rationalization for the eternal articulation of technodevelopment in the interests of incumbents. That's what I'm talking about. If that doesn't interest you, you are quite simply in the wrong place.

You demanded an explanation of why I think you are wrongheaded, but in the technical terms of your own idiosyncratic discourse rather than the perfectly legitimate terms that actually interest me by temperament and training. I replied by pointing out that, in my view, "It's far better for you people to explain calmly how exactly you became the sorts of folks who stay up at night worrying about the proximate arrival of Unfriendly Omnipotent Robot Gods given the sorry state of the world (and computer science) at the moment."

You replied:

> So your answer is no. You refuse to answer the one question I presented to you.

Big talk, guy, but you mustn't forget that I'm not a member of your Robot Cult. There aren't enough of you for you to think you have earned the right to demand that those who disagree with you accept your terms when we want to express our skepticism of your extraordinary claims and curious aspirations. You should consider this a reality check. You need to stop engaging in self-congratulatory circle-jerks with your True Believer friends and struggle to communicate your program in terms the world will understand as they themselves present these terms to you.
I cheerfully recommend this because I think the brightest folks among you will likely re-assess their positions once they try to engage in this sort of translation exercise. Those who don't will be that much easier for the likes of me to skewer. If I'm wrong about you, then of course the Singularitarians Will Prevail or whatever -- but that isn't actually something I stay up at night worrying about.

> I'm not changing the subject. I *started* this conversation with a direct
> question to you, a question you refuse to answer.

It isn't clear to me that anything you would count as an adequate answer wouldn't already embed me within the very discourse I'm deriding. What on earth is in it for me? I don't want to join in your Robot Cult Reindeer Games. The prospect holds no allure.

> You are the one shunting aside difficulties, preferring to focus on
> assorted accusations of cultishness.

Have you ever argued with a longstanding Scientologist? I'm just asking.

> If you are ever able to move past your apparent need to ridicule as
> "Robot Cultishness" the questioning of some of your assumptions, let me
> know.

I enjoy ridiculing the ridiculous; it's exactly what they deserve. It's not an "apparent need" of mine so much as it is certainly a profound pleasure. Feel free to continue to read and comment on my writing whenever you like, as you have been. I enjoy these little talks of ours. As for my sad inability to question my orthodox assumptions in matters of Robot Cultism, it is, no doubt, as you suggest, a sorry and sordid state of affairs for me. It is a hard thing to be so limited as I am.
Persevere, earnest Singularitarian footsoldier, and perhaps one day I might see the Light as you have; someday I might hear as keenly as you do the proximate tonalities of the Robot God.

Dale Carrico (2007-10-07 09:06):

Greg: Yes, you are right.

One key part of my problem with Superlative Discourses -- from a political standpoint -- is that they facilitate the endless dream deferred, assuming progress is a matter of an indifferent accumulation of technical capacities satisfying ever more wants, rather than a matter of social struggle among a diversity of stakeholders who share a world which we can direct to the problem of injustice now quite as easily as in the future, and which we can defer in the future just as easily as so many do now.

A second political problem is that I think Superlative discourse deranges sensible deliberation at a time when sensible discourse is desperately needed, by activating the irrational passions of agency (fears of impotence, desires for omnipotence) which usually accompany "technology-talk" (and for the obvious reason that technology is nothing but the prosthetic elaboration of agency).
A third political problem is that its tendencies to reductionism facilitate dismissal of diverse actually-existing aspirations with which we must reckon, even where we personally disagree with them, if we embrace democracy as we must, while its elitism -- technocratic at best, expressive of authoritarian sub(cult)ural and/or fundamentalist True Belief at worst -- provides rationales for neoliberal/neoconservative corporate-militarist politics of incumbency (when the reductionism takes on the even worse forms of market naturalism and genetic determinism, this latter tendency is exacerbated in the extreme).

My critique of Superlative Technocentrisms has other dimensions as well -- among them that I think their typical hyperbole, unilateralism, oversimplification, and uncritical obliviousness to the work of figurative language in their own discourse (not to mention in the practice of science, and as a key articulator of technoscientific change more generally) limit their ability to facilitate the very outcomes that their partisans would claim define them: a grasp of future technodevelopments, some foresight.

Oh, and I think many people who engage in Superlative Discourses are straight-up cultists. That's another dimension of the critique.

Thanks for the comments!
(PS: My teeth are in disastrous condition also, gotta love Amurrica.)

Aleksei (2007-10-07 03:32):

Might as well post here my response to Dale...

> As I have said before I am a materialist about mind and a pluralist about
> intelligence so I am not so easy to dismiss as you might like, no doubt,
> when I tell you that you people simply don't know what you are talking
> about when you talk about the "entity" who would presumably exhibit your
> post-biological intelligence, when you talk about the "intelligence" that
> would be "super"iorized, when you invest "emulation" with the significance
> you need, when you glibly presume inter-translatability between modes of
> materially incarnated "consciousness," which you already reduce
> problematically a dozen ways to Sunday, when you assume what you take to
> be an "objective" perspective on consciousness, usually to the denigration
> of "subjective" perspectives, and heaven only knows how you would ever
> cope with "inter-subjective" dimensions of consciousness if you were ever
> bothered to take the performative dimension of material "capacities"
> seriously, when you make pronouncements about "friendliness" and
> "unfriendliness," and so on.
>
> All the equations and confident predictions in the world won't paper over
> the conceptual incoherence of your assumptions and your aspirations as
> they tend to play out in your discourse

I'm not the one making confident predictions here, *you* are.
You are<BR/>confident enough of Brain Emulation being unable to produce a<BR/>human-equivalent cognitive system, that you label as a Robot Cult<BR/>those of us who think that such a development is a possibility that<BR/>can't be comfortably ruled out.<BR/><BR/>You make long lists of accusations, quite beside anything I've said. I<BR/>haven't even talked about "consciousness" at all. For all I know, a<BR/>brain emulation might perform cognitive processing as a non-conscious<BR/>entity. (But no, I'm not assuming that either.)<BR/><BR/>>> Could you elaborate on how you are able to convince yourself that e.g.<BR/>>> brain emulation can't possibly in a meaningful timeframe produce<BR/>>> computer programs of human-equivalent (and shortly thereafter,<BR/>>> human-surpassing) intelligence?<BR/>><BR/>> It's far better for you people to explain calmly how exactly you became<BR/>> the sorts of folks who stay up at night worrying about the proximate<BR/>> arrival of Unfriendly Omnipotent Robot Gods given the sorry state of the<BR/>> world (and computer science) at the moment.<BR/><BR/>So your answer is no. You refuse to answer the one question I presented to you.<BR/><BR/>> I think that the typical insistence by Singularitarians that "serious" critics of<BR/>> their curious and marginal preoccupations must address themselves to<BR/>> technical questions that shunt aside all such difficulties and focus instead on<BR/>> number crunching the Robot God Odds as though we all know what a Robot<BR/>> God would consist of in the relevant sense is a completely self-serving<BR/>> changing of the subject any time anybody comes near to grasping the arrant<BR/>> foolishness at the heart of the whole enterprise.<BR/><BR/>I'm not changing the subject. I *started* this conversation with a<BR/>direct question to you, a question you refuse to answer. 
You are the<BR/>one shunting aside difficulties, preferring to focus on assorted<BR/>accusations of cultishness.<BR/><BR/>>> I doubt that you can present an argument for the infeasibility of<BR/>>> brain emulation (within 50 years or so) that a responsible person<BR/>>> could accept.<BR/>><BR/>> "A responsible person" meaning, one expects, a current member or likely<BR/>> candidate to join the Robot Cult of "your kind"? You're quite right, I<BR/>> doubt I can present an argument that would dissuade True Believers from<BR/>> their faith, given the psychic work that faith is likely to be doing for<BR/>> them.<BR/><BR/>No, that is not what "a responsible person" means.<BR/><BR/>If you are ever able to move past your apparent need to ridicule as<BR/>"Robot Cultishness" the questioning of some of your assumptions, let<BR/>me know.Alekseihttp://www.blogger.com/profile/07833774182319326062noreply@blogger.comtag:blogger.com,1999:blog-5956838.post-77097349757556769752007-10-06T16:08:00.000-07:002007-10-06T16:08:00.000-07:00I've watched the ongoing debate between Dale and v...I've watched the ongoing debate between Dale and various "singularitarians"<BR/> for a while and thought I would finally comment.<BR/>I admit that I often find Dale's writing a bit too heavy on continental <BR/>philosophical jargon for my tastes but I actually think I finally get<BR/> what some of he's talking about. The way I understood this discourse<BR/>was to finally connect some of Dale's abstractions with a particular,<BR/> concrete problem of mine. The problem is that I have cavities. OK,<BR/>stop laughing. I'm serious, I've got some cavities in my molars.<BR/>They've been there for a while now and aren't growing very fast but at<BR/>some point they will need to be fixed. I have these cavities because<BR/>I haven't been to the dentist in years and I haven't been to the<BR/>dentist because I don't have insurance and good teeth are useless if<BR/>you can't afford food to chew with them. 
As you have probably guessed by this<BR/>paragraph, I live in that benighted land known as the USA.<BR/><BR/>What does this have to do with the Singularity, nanoSanta (or is it<BR/>nanoSatan?), superhuman AI or any of the other stuff talked about<BR/>here? A lot actually. What Dale is getting at (my interpretation) in<BR/>a lot of his attacks on the singularity is not so much that there's<BR/>anything inherently wrong with nanotechnology or the expectation that<BR/>it will in some sense make the world "richer" but the fact that the<BR/>political dimensions of <I>existing</I> scarcity are just hand-waved away<BR/>by most singularitarians and a facile assumption is made that somehow<BR/>new technologies will moot any need for social struggle.<BR/><BR/>I understand this perfectly when I think of my teeth. There's no<BR/><I>technical</I> justification for the state of my teeth. It's not like no<BR/>one has yet invented Novocaine, high speed drills and ultra-fine<BR/>needles. There's no need to invoke nanoSanta to fix my damned teeth.<BR/>Industrial Santa solved all these problems long ago. The real problem<BR/>is that specific <I>political</I> decisions have been made in the<BR/>particular geopolitical abstraction known as the USA that have<BR/>resulted in my inability to secure even the most basic health care, a<BR/>problem I would simply not have in other geopolitical abstractions<BR/>like the UK, Canada or pretty much any other developed country with a<BR/>national health program. 
The US, for specific <I>political</I> and<BR/><I>ideological</I> reasons, has chosen to eschew such systems - it "needs" the money that universal health care would cost<BR/>to pursue militarist and imperialist policies all over the face of the<BR/>globe, and its bizarre political culture glorifies the "free market"<BR/>in everything even when it clearly isn't working.<BR/><BR/>The real problem: The real problem that I see in all this is, just as<BR/>Dale has been telling you, basically political and will need to be<BR/>addressed on that level. I know it's tempting, as a geek myself, to<BR/>think that politics, like some big thug who has been taking your lunch<BR/>money, can just be outmaneuvered by some kind of technical Jujitsu<BR/>where his own strength and size are used against him, but the fact is<BR/>that such fantasies have been promoted before and have always failed<BR/>to play out. Indeed it might help here to read how 19th and<BR/>early 20th century geek-utopians wrote similar eschatological paeans to<BR/>the then new technologies of steam and electricity. Oh, there was<BR/>progress yes, emphatically yes! The work week got shorter, factories<BR/>got safer, public health improved and a lot of this would have been<BR/>impossible or very difficult without advanced technology. But there<BR/>was always a struggle and every inch of this ground had to be taken<BR/>from the ruling class, often by force or threat of force.<BR/><BR/>So what of nanoSanta? Won't he make us all rich and healthy with<BR/>almost no effort? Do I doubt the feasibility of nanoassemblers or AI?<BR/>Actually, I'm rather less skeptical about this stuff than Dale seems<BR/>to be. I'm no anti-Drexler nut screaming about how "it'll all turn<BR/>into glass, Glass, I tells ya!". 
The problem is that the whole world<BR/>(and particularly the part called the USA) is basically organized at<BR/>this point like a kind of sleazy mobbed-up New Jersey neighborhood.<BR/>Basically, nothing can go on here without the approval of The Don.<BR/>I'll spell it out for you, the Don is the investor class, the CEOs and<BR/>corporate board members, the rich. So let's say nanoSanta shows up in<BR/>this bad neighborhood one day. How long is he going to last? Not<BR/>long. Like Industrial Santa before him he's going to be approached by<BR/>the Don or some of his henchmen and either bought off or, if he tries<BR/>to make some moral stand, smacked around until he complies. This is<BR/>easy to understand. The Don can't just have some punk-ass upstart<BR/>giving out freebies on his turf. If that goes on long enough people<BR/>will start to feel like maybe they don't need The Don anymore. They<BR/>might start to see The Don for what he is, a rank parasite and thug<BR/>who makes their lives a lot more miserable than the mere physical<BR/>limits of things would demand.<BR/><BR/>You can already see this process at work. NanoSanta has already met<BR/>with The Don's capos. Much "nanotech" work is already disappearing<BR/>into the military industrial "classified" black hole, to re-emerge<BR/>later as horrific super-weapons to further prop up the status quo, not<BR/>immortality medicine or free toys for all. A few scientists and<BR/>engineers will take a stand and won't be bought off. I've known a few<BR/>who have in fact. In the current system though, they are doomed. But<BR/>that's the point - "in the current system". It's up to all of us to<BR/>throw that system on the trash heap of history as fast as possible,<BR/>not simply so that we can get into nanoSanta's toy bag but just so<BR/>that we can live at all. This is the only part of the Singularitarian<BR/>eschatology that I <I>do</I> accept. There's absolutely EVERYTHING at<BR/>stake here. 
Not just our own individual lives but the species itself.Greg in Portlandnoreply@blogger.com