Abstract

Objective: Computational concepts from robotics and computer
vision hold great promise to account for major aspects of the
phenomenon of consciousness, including philosophically problematic
aspects such as the vividness of qualia, the first-person character of
conscious experience, and the property of intentionality.

Methods: We present a dynamical systems model describing human
or robotic agents and their interaction with the environment. In
order to cope with the enormous information content of the sensory
stream, this model includes trackers for selected coherent
spatio-temporal portions of the sensory input stream, and a
self-constructed plausible coherent narrative describing the recent
history of the agent's sensorimotor interaction with the world.

Results: We describe how an agent can autonomously learn its
own intentionality by constructing computational models of hypothetical
entities in the external world. These models explain regularities in
the sensorimotor interaction, and serve as referents for the agent's
symbolic knowledge representation. The high information content of
the sensory stream allows the agent to continually evaluate these
hypothesized models, refuting those that make poor predictions. The
high information content of the sensory input stream also accounts for
the vividness and uniqueness of subjective experience. We then
evaluate our account against 11 features of consciousness "that any
philosophical-scientific theory should hope to explain", according to
the philosopher and prominent AI critic John Searle.

Conclusion: The essential features of consciousness can, in
principle, be implemented on a robot with sufficient computational
power and a sufficiently rich sensorimotor system, embodied and
embedded in its environment.