Tuesday, 20 October 2015

The explosive increase in processing
power and data, fueled by powerful machine learning algorithms, finally
empowers silicon-based intelligence to overtake carbon-based
intelligence. Intelligent machines no longer need to be programmed;
they can learn and evolve by themselves, at a speed far faster than
human intelligence progresses.

Humans weren't very good at accepting that the Earth was not the
center of the universe, and they still have difficulty accepting that
they are the result of chance and selection, as evolutionary theory
teaches us. Now, we are about to lose our position as the most
intelligent species on Earth. Are people ready for this? How will this
change the role of humans, our economy, and our society?

It would be nice to have machines that think for us, machines that do
the boring paperwork and other tasks that we don't like. It might also
be great to have machines that know us well: that know what we think
and how we feel. Will machines be better friends?

But who will be responsible for what intelligent machines decide and
do? Can we control them? Can we tell them what to do, and how to do it?
Humans have learned to ride horses and elephants. But will they be able
to control machines that are ten times more intelligent? Would we
enslave them, or would they enslave us? Could we really pull the plug
once machines start to emancipate themselves?

If we can't control intelligent machines in the long run, can we at
least build them to act morally? I believe machines that think will
eventually follow ethical principles. However, it might be bad if
humans determined those principles. If machines acted according to our
principles of self-regarding optimization, we could not overcome crime,
conflict, crises, and war. So, if we want such "diseases of today's
society" to be healed, it might be better to let machines evolve their
own, superior ethics.

Intelligent machines would probably learn that it is good to network
and cooperate, to decide in other-regarding ways, and to pay attention
to systemic outcomes. They would soon learn that diversity is important
for innovation, systemic resilience, and collective intelligence. Humans
would become nodes in a global network of intelligences and a huge
ecosystem of ideas.

In fact, we will have to learn that it is ideas that matter, not genes.
Ideas can "run" on different hardware architectures. It does not really
matter whether it is humans who produce and spread them, or machines,
or both. What matters is that beneficial ideas spread while harmful
ones gain little traction. It is tremendously important to figure out
how to organize our information systems to get there. If we manage
this, humans will enter the history books as the first species that
figured it out. Otherwise, do we really deserve to be remembered?


The activities leading to these results have received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 284709 - project 'FuturICT', a Coordination and Support Action in the Information and Communication Technologies activity area.