Month: December 2018

Business models need to evolve and change to meet new demand. During certain phases, changes are incremental, but at times a step change is required. Missing such a moment may well mean the end of the business or the company.

Many organizations have difficulty seeing the need for a step change. It is easier to focus on daily problems and find reasons for the increasing difficulty of being successful. Stepping back and triggering fundamental changes is hard, requires guts and is in some ways against human nature. People fear losing something now and would rather defer the opportunity to win in the future, finding reasons not to act. Big change feels risky; staying put and repeating what was successful before seems safer.

The caterpillar in the picture above has had a great life so far. But in order to move on, it must change its form and learn new capabilities. This is not about becoming a bigger or faster caterpillar, or adding a few cool extensions – it is about fundamentally changing its nature and form. It also requires fundamental changes to the way success is measured. A butterfly would not score high on the caterpillar's KPIs – and the opposite is equally true.

For banks, this means leaving behind the brick-and-mortar age and the management structures introduced by Taylor. New banking business models need to take advantage of the internet and create banking products which are digital at their core and embedded into the network. This does not automatically mean that everything becomes self-service – it means that the humans involved need to add value and apply their distinctly human abilities like creativity or empathy. It is the machines that follow predefined instructions and learn from patterns – humans complement this by finding creative solutions to problems based on empathy and emotions.

You’ve probably heard it a million times, but there is some wisdom in being careful what you wish for. While we may be striving to attain superintelligence, how can we ensure that the technology doesn’t misunderstand its purpose and cause unspeakable devastation?

The key to this problem lies in programming the motivation for a superintelligence (SI) to accomplish its various human-given goals. Say we designed an SI to make paper clips; it seems benign, but what's to prevent the machine from taking its task to an extreme and sucking up all the world's resources to manufacture a mountain of office supplies?

This is tricky because, while an AI is only motivated to achieve the goal for which it has been programmed, an SI would likely pursue its programmed objectives in ways that our inferior minds couldn't predict.

But there are solutions to this problem. For instance, superintelligence, whether it be AI or…

How many movies, cartoons and sci-fi series have you seen featuring some kind of superintelligent robotic race? Probably quite a few. In some films, such as Terminator, they come to conquer the world; in others, they help us out; and in some, like Wall-E, they’re simply adorable. Of course, these robots are fictional, but will they always be? Will the future bring superintelligent AI? If it does, what will they look like and when will they appear?

In Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, we learn about the journey toward AI so far and where we might be going; the moral issues and safety concerns we need to address; and the best ways to reach the goal of creating a machine that'll outsmart all others.

What fundamentally sets us apart from the beasts of the field? Well, the main difference between human beings and animals is our capacity for abstract…