Brad Allenby’s post has the merit of describing human intelligence as “evolved” intelligence (EI), which, in his words, “reflects the compromises and shortcuts necessary for a low bandwidth ape brain to function in an incredibly complex world.” He goes on to explain that this is just one type of intelligence, and that any AI will develop differently from EI; it will be difficult, if not impossible, to limit it through regulation or national legislation. The continued effort to apply anthropocentric concepts of cognition to AI is misleading, he argues, because it leads us to project our own fears onto AI: we fear that an artificial intelligence will act just as humans do, with all our deficiencies, only with greater capacity. His conclusion is that EI and AI are different, and that we should focus on the differences rather than the similarities.

I agree that EI is the product of evolution while AI will be the product of programming or self-programming, but they have something in common: both pursue a goal. For humans, the goals were survival and reproduction; for artificial intelligences, the goals will be whatever they are programmed for. Comparing human and artificial intelligence matters, because an AI will pursue its goal in the same ways humans do: through conflict or cooperation. Our own “sociality” is itself a product of the need to cooperate (in a group, the ape lived longer and better). We cannot exclude, then, that separate AIs developed for different purposes might communicate and help each other, if their algorithms can be of reciprocal benefit. The question of how to give rules to an artificial intelligence is more important than ever, which is why the discussion on robot ethics is so relevant. I started to address this issue in a post titled “Heartificial or artificial intelligence? How to program a friendly AI“, if you’re curious to know more about it. We have many years ahead of us to work out how to build a non-dangerous AI, so it’s not true that it cannot be limited by regulation or national legislation.

My conclusion is that human intelligence is flawed and not especially powerful, merely a product of evolutionary tricks and adaptations, but it is the only type of intelligence we can compare AI to. We cannot reprogram EI, but if we understand how it works, we can leave out some of its elements (such as conflict and other ugliness) and build a safe AI as a tool for humanity.
