Saturday, October 08, 2016

AI is moving fast, scoring one victory after another in specific domains. However, its main problem is moving from one domain to another. It may be great at playing chess, Go, or other rules-based games, but when it comes to other simple but different problems, it is not flexible. This problem – getting AI to be more general in its skills, or to apply what it has learned in one domain to another – is a serious limitation, perhaps the greatest limitation of current AI.

Human-all-too-human

We humans have a different but no less debilitating problem – our brains. We literally have to spend 20 years or more in classrooms, being painstakingly taught by other humans to acquire knowledge and skills. Even then, it’s only a start on the long road that is lifelong learning. That’s because we cannot efficiently transfer knowledge and skills directly from one brain to another. They cannot be uploaded and downloaded. In addition to this limitation, we forget most of what we are taught, sleep 8 hours a day, are largely inattentive for much of the remaining 16 hours, get ill and die. Artificial intelligence has none of these constraints.

Cloud robotics

One solution to the learning problem in AI, now being practised in robotics, is ‘cloud robotics’, where one robot can literally ‘teach’ another. By ‘teach’, I mean pass its acquired skills on to, and share them with, other robots. This is a bit mind-blowing, as it is something we cannot do as humans.

Google and others have been experimenting with cloud robotics for some years. Robots learn how to do something through neural networks and reinforcement learning (trial and error), and once they have acquired that skill, it can be uploaded to as many other robots as you want. They literally share experiences, and therefore learning. Not only do these robots learn quickly, they instantly share that learning with other networked robots. This whole idea of AI learning from collective or shared experience is fascinating.
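The mechanism can be sketched in miniature. The following toy example (my own illustration, not Google’s actual system) has one ‘robot’ learn a trivial corridor task by trial and error with tabular Q-learning, then ‘uploads’ its learned skill – here just a table of state-action values – to a second robot, which can act well immediately without any training of its own:

```python
import random

# Minimal sketch (not Google's actual system): one 'robot' learns a tiny
# corridor task by trial and error (tabular Q-learning), then uploads its
# learned Q-table to another robot, which acts well immediately.

N_STATES = 5          # corridor cells 0..4; reward for reaching cell 4
ACTIONS = [-1, +1]    # step left or step right

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Learn the task by trial and error; return the learned Q-table."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy action choice: mostly exploit, sometimes explore
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            # standard Q-learning update
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

def greedy_steps(q, max_steps=20):
    """Steps a robot takes acting greedily on q; fewer is better."""
    s, steps = 0, 0
    while s != N_STATES - 1 and steps < max_steps:
        a = max(ACTIONS, key=lambda act: q[(s, act)])
        s = min(max(s + a, 0), N_STATES - 1)
        steps += 1
    return steps

teacher_q = train()            # robot 1 learns by trial and error
student_q = dict(teacher_q)    # 'upload' the acquired skill to robot 2
print(greedy_steps(student_q)) # robot 2 solves the task at once; optimal is 4
```

The upload is just a copy of the learned parameters – which is exactly what makes this transferable in a way human skill is not.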

Google have been researching three different (but not mutually exclusive) techniques for teaching or training multiple robots collectively, by allowing them to share experiences and learn general-purpose skills:

1. learning motion skills directly from experience

2. learning internal models of physics

3. learning skills with human assistance

Shared experience, in all three of these forms, clearly takes less time than a single robot acquiring its own experiences. But it’s not only time that shrinks; you also get the benefit of variation in those experiences – more diversity of experience. Deeper learning, in terms of both quantity and quality, takes place, which can cope with more complex environments and problems. This sharing of experience – whether trial-and-error tasks, models that are built, or human data used to train robots – can be networked, shared, and seen as a form of pooled, collective intelligence.
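The coverage benefit of pooling can be shown with a deliberately simple sketch (everything here is hypothetical and illustrative): three robots each explore a different third of a task’s situations, and while any single robot’s experience buffer covers only its own region, the networked, pooled buffer covers them all:

```python
# Illustrative sketch only (all names hypothetical): three robots each
# explore a different third of a task's states. One robot's buffer covers
# only its own region; pooling the buffers over the network covers every
# state, so the collective can respond to situations no single robot saw.

states = list(range(9))                           # nine situations in the task
regions = [states[0:3], states[3:6], states[6:9]] # each robot's territory

def robot_experience(region):
    # each robot logs a learned response for each state it actually visited
    return {s: f"skill-{s}" for s in region}

buffers = [robot_experience(r) for r in regions]

pooled = {}
for buf in buffers:            # network the buffers together
    pooled.update(buf)

single_coverage = len(buffers[0]) / len(states)   # one robot: 1/3 coverage
pooled_coverage = len(pooled) / len(states)       # the collective: full coverage
print(single_coverage, pooled_coverage)
```

The same logic holds whether what is pooled is raw trial-and-error data, learned physics models, or human demonstrations: diversity of sources becomes breadth of competence.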

Why is this frightening? Robots are already decimating the manufacturing sector. The possibility that general-purpose robots will be more flexible and able to do all sorts of tasks, as they learn from each other, is frightening in the sense that it takes them beyond the specifics of manufacturing.

Collective intelligence

Collective intelligence, a term coined by Pierre Lévy, was defined by him as,

“a form of universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills”

But Lévy’s theory of collective intelligence is now dated and inadequate. Firstly, its definition of ‘collective’ has been superseded by recent developments, not only in social media but in other forms of technology such as AI. Secondly, its definition of ‘intelligence’ has been superseded by recent thinking about ‘networks’ and ‘intelligence’.

Networks and intelligence

More attention needs to be given to the nature and role of ‘networks’ in collective intelligence. Some philosophers posit the idea that networks are intelligent, to a degree, simply by virtue of being networks. Our brains are networks, indeed the most complex networks we know of, and artificial intelligence uses that same (or similar) networked power to interact with our brains. We do not learn in a linear fashion, like video recordings, nor do we remember things alphabetically or hierarchically. Our brains are networks with pre-existing knowledge and intelligence, which need to mesh with other forms of knowledge from other networks.

Connectivism

The theory of connectivism, proposed by Stephen Downes and George Siemens, holds that “knowledge is distributed across a network of connections, and therefore that learning consists of the ability to construct and traverse those networks”. It is an alternative to behaviourism, cognitivism and constructivism. Connectivism focuses on the connections, not the meanings or structures connected across networks. Intelligence, existing and acquired, is the set of practices, by both teachers and learners, that result in the formation and use of effective networks with properties such as diversity, autonomy, openness and connectivity. This challenges existing paradigms, which do not take into account the explosion of network technology, and presents a new perspective on collective use and intelligence. Connectivism can also accommodate newer technological advances and newer agents, such as artificial intelligence.

Christof Koch argues that the line for consciousness and intelligence has shifted to include animals, even insects – indeed anything with a network of neurons. He goes further and includes any communicating network. We have evidence that consciousness and intelligence are related to networked activity in both organic brains and non-organic neural networks. Could it be that intelligence is simply a function of this networking, and that all networked entities are, to some degree, intelligent?

AI and collective intelligence

There are several recent technological developments that open up the possibility of collective intelligence. The most important of these is AI (Artificial Intelligence). Artificial intelligence is embedded in many online media experiences. This takes us beyond the current flat, largely text-based, hyperlinked world of text and images, such as Wikipedia or even social media, into forms of media that are closer to Lévy’s original idea of collective intelligence. Networks store knowledge but, with the advent of online AI, they can also be said to BE intelligent. AI is intelligence that resides in a network and is intelligent ‘in itself’, but it also adds to the sum of collective intelligence when used by humans.

Machine learning (code that learns and creates code), with the aid of collective human- and machine-created data, actually becomes more ‘intelligent’. The more it is used, the more intelligent it becomes, sometimes surpassing the intelligence of humans in specific domains. As we saw at the start of this article, it can now also be shared. This is a new species of shared intelligence: shared in real time, direct and scalable.

Collective Artificial Intelligence, let’s call it CAI, can
be said to reside in and be an emergent feature of all networks, human and
artificial, organic and non-organic. In other words, the agents of collective
intelligence have to be widened, as does the nature of that intelligence and
the interaction between them all.

Conclusion

We are only just beginning to see and practically explore
and build new forms of collective intelligence that allow teaching and learning
to be done by machines very quickly and on a massive scale. This is
exhilarating and frightening at the same time. We, as humans, are now part of a
networked nexus of human teachers, human learners, AI teachers, AI learners,
networked knowledge, and networked skills. The world has suddenly become a lot
more complex.

(Thanks to Callum Clark for the ideas on Lévy and collective intelligence.)