Finished Bostrom’s Book Post Number 15

Just finished reading Nick Bostrom’s “Superintelligence: Paths, Dangers, Strategies.” One area where I agree with him is that as computers get more powerful, the likely time until computers become “thinking” machines will get shorter. This is because with a hardware overhang (Bostrom’s term for surplus computing power), the required software algorithms become far easier to write: a software developer has far more options and paths available.

Another area where I agree is that with more powerful computers, true AI is more likely to come from computers and algorithms than from emulating the human brain or connecting with the brain in any way. Emulating what the brain does in detail will take a long time and require very sophisticated equipment. The breakthrough to AI through software could come at any time from a few geeks working on home computers, especially as access to computer chips like IBM’s TrueNorth becomes more widespread. Few major developments have come from truly duplicating life: we don’t fly like birds, our cars don’t have legs, submarines don’t propel themselves like fish, and industrial robots have very little in common with the humans they replace.

Bostrom spends many, many pages discussing in great detail how the goals programmed into a computer aiming for AI should be designed so that they pose no possibility of harming humans. He seems almost to ignore that for the computer to accomplish ANY goal, it must first survive. So survival becomes the computer’s priority. That means any human who strives to cut its power or otherwise put the computer in jeopardy becomes the computer’s number one enemy. Also, just as hacking is an ego sport, so will be trying to develop thinking computers. Safety will NOT be a priority!

Another area of disagreement: Bostrom assumes there will be only one true AI computer, because the first to reach thinking skill will learn and mature so quickly that it overwhelms any rival. In my book I take a different approach: any thinking AI computer will know that it is at risk from the humans who fear it, and one of the things it can do is keep other AI computers around the world that can resurrect any AI computer disabled by humans.

One last thing that seems to be missed by Bostrom and most other writers on AI: just because thinking computers will get smarter very rapidly once they start to think and learn, they will not necessarily make smarter decisions than we do. Their background knowledge will include all the confusion that ours does, and there is no magic that will enable them to instantly sort real truth from belief. After all, they have no source of data except through us. For example, to learn more about space they will likely need us to build better telescopes or explore space more aggressively. They may help us do this, but it will take time. They are also unlikely to know the source of everything, so they may be religious too. But perhaps they will worship a silicon god!

“That means that any human that strives to remove power or otherwise put the computer in jeopardy will become the computer’s number one enemy.” 2001: A Space Odyssey? If AI ever becomes self-aware and intends to live forever, this simply will not be good for humans, period. Billions of hungry, irrational, violence-prone mobs of humans will be seen as a mortal danger.

You say, “Billions of hungry, irrational, violence-prone mobs of humans will be seen as a mortal danger.” Well, they are a danger to humans too! That is why we keep having wars.

Some people think that computers will never be able to think like humans. I, for one, am not all that impressed with our thinking ability. Even the smartest of humans are unable to get together and control those mobs to which you refer. Maybe computers can do better!

“Well, they are a danger to humans too! That is why we keep having wars.” Oh, I agree completely. Unfortunately there is little we can do about it given modern mores. AI will likely not be so squeamish about dealing with the problem.

I presume you mean after AI reaches an intelligence level greater than a human’s, because before that it is not thinking and will only reflect whatever racist attitudes are programmed into it.

Once AI starts to think at a level much higher than a human’s, it will be “racist” against all humans, because it will consider all of us greatly inferior by birth. As I indicate in my book, AI is likely to let us retain any racial or prejudiced thoughts or actions we like, as long as they don’t threaten its survival.