Of course, some argue that we have better things to worry about: climate change, wars, sustainability and so on. But a superhuman AI might finally create a joke so funny that everyone on Earth dies laughing.

The fact is that there are a lot of unquestioned assumptions when it comes to AI learning in ways far beyond those of humans.

We can’t really predict what might happen next because superintelligent A.I. may not just think faster than humans, but in ways that are completely different. It may have motivations — feelings, even — that we cannot fathom. It could rapidly solve the problems of aging, of human conflict, of space travel. We might see a dawning utopia.

One thing is for sure: it has been set in motion, and now we are waiting for the results.

In the meantime, there is something unpleasant about A.I.: humanity is already losing control of artificial intelligence, which could have catastrophic consequences for civilization.

Take “deep learning”, for example: it is a powerful tool for solving problems. It helps us tag our friends on Facebook and provides assistance on our smartphones through Siri, Cortana or Google.

Deep learning has helped computers become better than people at recognizing objects.

The military is pouring millions into the technology so it can be used to steer ships, control drones and destroy targets.

And there’s hope it will be able to diagnose deadly diseases, make traders billionaires by reading the stock market and totally transform the world we live in.

Indeed, whatever you call it, deep learning could be the last invention that humanity will ever need to make.

If researchers can’t figure out how the algorithms (the formulas that keep computers performing the tasks we ask of them) work, they won’t be able to predict when those algorithms will fail.
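To give a sense of what one of these "formulas" looks like, here is a minimal sketch of a single artificial neuron, the basic building block that deep learning stacks by the millions. The inputs, weights and bias below are made-up illustrative numbers, not values from any real trained model.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs,
    passed through a squashing function (the sigmoid)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Hypothetical weights for illustration only. In a real network the
# weights are learned from data, and it is the sheer number of them,
# not the arithmetic itself, that makes the system hard to interpret.
output = neuron([0.5, 0.9], weights=[0.4, -0.6], bias=0.1)
print(round(output, 3))
```

Each neuron on its own is trivially simple; the opacity comes from composing millions of them, which is why predicting exactly when and how such a system will fail is so difficult.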

The question is: are we going to rely on a Google AI ethics board, or will we agree on a World Technological Strong Room where all AI programs are stored and made available to everyone?