Monday, October 27, 2014

Robocultic Kack Fight

Although Musk is diverting plenty of money and attention to things robocultic, "Jesuscopter Crudslinger" declares Musk to be tantamount to the Taliban in the transhumanoid tabloid h+ magazine because Musk fails to love the Robot God with his whole heart. "[M]ind uploading and cyborgization seem almost inevitable, once you ponder them in a rational and open-minded way," insists rational and open-minded Jesuscopter Crudslinger. (It would appear that Mr. Crudslinger is none other than Ben Goertzel, about whom I've written before in Amor Mundi fan fave piece Nauru Needs Futurologists! and also in the self-explanatorily titled Robot Cultist Declares Need for Holiday Counting Chickens Before They Are Hatched.) "And the odds of AI systems vastly exceeding human beings in general intelligence and overall capability, seem very close to 100%." That "seem" is a nice touch, I must say. Very moderate! Very Serious.

'slinger goes on to confide, "Elon" -- with whom Jesuscopter Crudslinger is on a first name basis, naturally -- "I’m sorry if you find AGI, mind uploading and cyborgization demonic. But they’re going to happen anyway, no matter what you, MIRI, the Taliban or the Amish think about it. And no, humanity won’t be able to 'control' it, any more than we have been able to control computers [or] the Internet." Crudslinger says he is sorry, but I doubt it. He doesn't sound sorry. This is, I'm afraid, one of many doubts I am having. I find I'm reminded a little bit of...

These things are going to happen! That is to say, not only is it not implausible for you to expect Robot Gods to End History and for you to scan your "info-soul" from your cryo-hamburgerized brain to "upload" it as a cyber-angel that will live forever in Holodeck Heaven, why, all this is obvious! inevitable! unstoppable! To control this irresistible tide to techno-transcendence is so laughable that we must put the very word in scare-quotes, why, control isn't even a real word when you ponder it in a rational and open-minded way!

I come from cyberspace home of Mind... the changes in accelerating change are accelerating... the disrupters are disrupting the looming wall of finitude... no death! no taxes! no girl cooties in the clubhouse... the futurological faithful are achieving escape velocity... the stale, pale, males of the Robot Cult hail the Predator Gods of techno-capital.... they are buying The Future one gizmo at a time... the toypile will reach to infinity and beyond...

> It would appear that Mr. Crudslinger is none other than Ben Goertzel,
> about whom I've written before in Amor Mundi fan fave. . .
> Robot Cultist Declares Need for Holiday Counting Chickens Before They Are Hatched
> ( http://amormundi.blogspot.com/2012/02/robot-cultist-declares-need-for-holiday.html )

https://www.singularityweblog.com/do-we-need-to-have-a-future-day/
------------------
Do We Need to Have a “Future Day”?
by Nikki Olson
September 28, 2011

. . .

“In thinking about how to get people interested in and excited about Transhumanist ideas explicitly, one idea I thought about was to create a holiday for the future. . ." . . .

The remarks above were made by Ben Goertzel during the question and answer period of last week’s H+ Leadership Summit. . .

Back in 2004, one Michael Wilson had materialized as an insider in SIAI [the "Singularity Institute for Artificial Intelligence", now called MIRI, the "Machine Intelligence Research Institute"] circles. And during the same era he was posting rather frequently on the S[hock]L[evel]4 mailing list [an Eliezer Yudkowsky-owned forum]. At one point, he made a post in which he castigated himself (and this didn't seem tongue-in-cheek to me in the context, though in most contexts such claims would clearly be so) for having "almost destroyed the world last Christmas" as a result of his own attempts to "code an AI", but now that he had seen the light (as a result of SIAI's propaganda) he would certainly be more cautious in the future. (Of course, no one on the list seemed to find his remarks particularly outrageous -- he was more-or-less right in tune with the Zeitgeist there). He also wrote:

"To my knowledge Eliezer Yudkowsky is the only person that has tackled these issues head on and actually made progress in producing engineering solutions (I've done some very limited original work on low-level Friendliness structure). Note that Friendliness is a class of advanced cognitive engineering; not science, not philosophy. We still don't know that these problems are actually solvable, but recent progress has been encouraging and we literally have nothing to lose by trying. I sincerely hope that we can solve these problems, stop Ben Goertzel and his army of evil clones (I mean emergence-advocating AI researchers :) and engineer the apotheosis. The universe doesn't care about hope though, so I will spend the rest of my life doing everything I can to make Friendly AI a reality. Once you /see/, once you have even an inkling of understanding the issues involved, you realise that one way or another these are the Final Days of the human era and if you want yourself or anything else you care about to survive you'd better get off your ass and start helping. The only escapes from the inexorable logic of the Singularity are death, insanity and transcendence."
("Phase Changes in the Evolution of Complexity"
http://www.sl4.org/archive//0404/8401.html
http://sl4.org/wiki/Starglider )

1. The Yudkowskian Singularitarian Party will actually morph into a bastion of anti-technology. The approaches to AI that -- IMH(non-expert)O and in other folks' not-so-H(rather-more-expert)O -- are likeliest to succeed (evolutionary, selectionist, emergent) are frantically demonized as too dangerous to pursue. The most **plausible** approaches to AI are to be regulated the way plutonium and anthrax are regulated today, or at least shouted down among politically-correct Singularitarians. IOW, the Yudkowskian Party arrogates to itself a role as a sort of proto-Turing Police out of William Gibson. Move over, Bill Joy! It's very Vingean too, for that matter -- sounds like the first book in the "Realtime" trilogy (_The Peace War_).

2. The **approved** approach to AI -- a Yudkowsky-sanctioned "guaranteed Friendly", "socially responsible" framework (that seems to be based, in so far as it's coherent at all, on a Good-Old-Fashioned mechanistic AI faith in "goals" -- as if we were programming an expert system in OPS5), which some (more sophisticated?) folks have already given up on as a dead end and waste of time, is to suck up all of the money and brainpower that the SL4 "attractor" can pull in -- for the sake of the human race's safe negotiation of the Singularity.

3. Inevitably, there will be heretics and schisms in the Church of the Singularity. The Pope of Friendliness will not yield his throne willingly, and the emergence of someone (Michael Wilson?) bright enough and crazy enough to become a plausible successor will **undoubtedly** result in quarrels over the technical fine points of Friendliness that will escalate into religious wars.

4. In the **absolute worst case** scenario I can imagine, a genuine lunatic FAI-ite will take up the Unabomber's tactics, sending packages like the one David Gelernter got in the mail to folks deemed "dangerous" according to (lack of) adherence to the principles and politics of FAI (whatever they happen to be according to the reigning Pope of the moment).
====

Now here's a genu-wine existential risk -- the propensity of folks to fall for self-styled Messiahs:
http://justnotsaid.blogspot.com/2014/10/sociopath-alert-john-roger-hinkins.html

http://futurisms.thenewatlantis.com/2014/10/our-new-book-on-transhumanism-eclipse.html
-------------------
Wednesday, October 29, 2014
Our new book on transhumanism: Eclipse of Man

Since we launched The New Atlantis, questions about human enhancement, artificial intelligence, and the future of humanity have been a core part of our work. And no one has written more intelligently and perceptively about the moral and political aspects of these questions than Charles T. Rubin. . . one of our colleagues here on Futurisms.

So we are delighted to have just published Charlie's new book about transhumanism, Eclipse of Man: Human Extinction and the Meaning of Progress. . .
====

http://www.thenewatlantis.com/publications/eclipse-of-man
-------------------
Human Extinction and the Meaning of Progress
Charles T. Rubin

Tomorrow has never looked better. Breakthroughs in fields like genetic engineering and nanotechnology promise to give us unprecedented power to redesign our bodies and our world. Futurists and activists tell us that we are drawing ever closer to a day when we will be as smart as computers. . .
====

There's a lot to like at the Futurisms site -- but it bugs me that they too often accept futurology as actually predictive of a future they would abhor, rather than as a symptom of a reactionary take on the present; they concede too many of the futurologists' own terms. And since they indulge a bit of their own reactionary politics on questions of choice and of harm-reduction policy models more generally, there is, weirdly, a certain alignment in futurological assumptions belied by their differing assessments of outcomes once those terms are conceded. Again, I think there is a lot of useful and incisive critique at the Futurisms site, and I get a lot out of reading it. But when it comes right down to it, their critical vantage, valuable as it is, really doesn't seem quite the same as mine.

eClips sounds like it would probably be very enhancing for dynamic hair management. Somebody contact Natasha Vita-More!