
If Stross' objections turn out to be a problem in AI development, the "workaround" is to create generally intelligent AI that doesn't depend on primate embodiment or adaptations. Couldn't the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?

But Anissimov's first point here is just magical thinking. At the present time, much of how human beings think is simply unknown. To argue that we can simply "work around" the issue misses the underlying point: we can't yet quantify the difference between human intelligence and machine intelligence. Indeed, it's become pretty clear that even human thinking and animal thinking are quite different. For example, it's clear that apes, octopuses, dolphins, and even parrots are, to certain degrees, quite intelligent and capable of using logical reasoning to solve problems. But their intelligence is sharply different from that of humans. And I don't mean on a different level — I mean actually different. …


super-intelligent AI is unlikely because, if you pursue Vernor’s program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it’s unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we’re likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own.

“Human-equivalent AI is unlikely” is a ridiculous comment. Human-level AI is extremely likely by 2060. (I’ll explain why in the next post.) Stross might not understand that …


Reducing the probability of human extinction is more important than everything else, because humans are the only known source of “intelligence”, “creativity”, “values”, and if we die, the universe is boring. No one in the future will care that you saw a funny movie. They will care if you helped Earth-originating intelligent life survive its self-destructive adolescent phase.

For those who wish to make their lives actually mean something, there’s the existential risk reduction career network:

Interested in donating to existential risk reduction efforts? Would you like to exchange career information with like-minded others? Then you should consider the Existential Risk Reduction Career Network! (“X Risk Network” for those short on time.) From the front page of the website:

“This network is for anyone interested in donating substantial amounts (relative to income) to non-profit organizations focused on the reduction of existential risk, such as SIAI, FHI, and the Lifeboat Foundation. […] We are …


My article on how to pitch articles to H+ magazine has been slightly improved and is now posted on H+ magazine.

Topics to inspire you:

How can the transhumanist philosophy be applied to daily life? Quantified Self topics. Is change actually accelerating? If so, what is the evidence? What technologies pose major risks, and why? What are the next steps for robotics and AI? What is happening in genomics? What is the future of energy? Is culture getting friendlier to the future? What will the year 2020 be like? What will the year 2030 be like? What will the year 2050 be like? What will the year 2100 be like? Book reviews (Robopocalypse). Movie reviews (Limitless). Conference/event reviews. Cool new businesses and initiatives in the transhumanist space. Philosophical issues. Other cultural commentary. Space, space stations, spaceships, satellites, planetary colonization. Topics similar to content in Scientific American and Popular Mechanics.


I’m the new Managing Editor at H+ magazine, which in practical terms means I need to come up with five good articles a week to publish. The magazine gets a lot of traffic so it’s a good place to share information with other transhumanists.

1. Come up with an idea or coverage of a company/product/news story worth covering. Ideally you have had personal experience with the company/product/news story and are uniquely suited to write about it. If not, you should be ready to quote someone who has.

2. Send the pitch to editor@hplusmagazine.com. That goes into my inbox. Include links to samples of your other writing. (If you want to write articles for H+ magazine but haven’t written serious blog posts yet, you might want to try that first.)

3. If you get the go-ahead, investigate the story, get a quote from an expert in the area you’re writing about. Take notes. The article should primarily be reporting, not speculation or personal opinion. Editorials are welcome but harder to write than straightforward …


Humanity+, which used to be known by the more descriptive (but less concise and media-friendly) name World Transhumanist Association, is running a fundraiser this summer:

Thanks to a generous matching grant by the Life Extension Foundation and other major donors, if we raise $15,000 independently, we will secure a total of $30,000 in funding for Humanity+ this summer, enabling the organization to shift into a higher gear. Any gift you make to Humanity+ will be matched dollar-for-dollar until July 31st.


Thanks to everyone who is participating in the transhumanist collaborative map project. After just six days we have almost 100 pins on the map and over 20,000 views. I see that many people in the Bay Area and New York are being shy and not adding themselves…

Be sure to pass the link around to your friends who are transhumanists, so we can build a better picture of the movement worldwide. This is a unique and foresightful group! We should learn a little more about one another.


Apple co-founder Steve Wozniak has seen so many stunning technological advances that he believes a day will come when computers and humans become virtually equal, with machines having a slight advantage in intelligence.

Speaking at a business summit held at the Gold Coast on Friday, the man who was once Steve Jobs's co-equal at Apple Computer told his Australian audience that the world is nearing the point where computer brains will equal the cerebral prowess of humans.

When that time comes, Wozniak said, humans will generally withdraw into a life of being pampered by a system almost perfected by machines, which will serve their whims while effectively reducing the average man and woman to human pets.

Widely regarded as one of the innovators of personal computing for his work putting together Apple's initial hardware offerings, Wozniak declared to his audience that “we’re already creating the superior beings, I think we lost the battle to the machines long ago.”

I always think of this guy when I go by Woz Way …


Updating this map is a little tricky: you have to be invited as a collaborator by someone who already is one. If you know someone already on the map, you can ask them for an invite; otherwise, fill in your email address in the form below. Once you're a collaborator, you can invite anyone else to collaborate — you just need their email address. I promise I won’t sell it to spammers; this list is only for adding people to the map.