The Virtual Autonomous Learner: An Introduction

This post is an introduction to my thesis topic: the virtual autonomous learner, or VAL for short. It is aimed at a general audience, though I am sure anyone with a research interest in evolution, ecology, artificial intelligence, neuroscience, artificial neural networks, education, virtual simulation, computation, or anything related to the body and the mind, especially embodied cognition, will find it interesting.

For those of you with short attention spans like myself, I offer you this quick and simple explanation of what VAL is:

virtual autonomous learner = artificial intelligence.

Simple, right? However, if you stick around, I will explain why this equation is not really true and why, in fact, VAL is not AI – it’s something new.

They say it’s harder to explain something new, so, with a bit of apprehension, here goes…

Terminology & Concepts 1

An agent, in computer simulation and artificial intelligence, is an autonomous entity that observes, and acts within, an environment.

A construct, in computer science, is the framework that a piece of software or an application runs in. Here, it is the environment the agent lives in.

Getting Started

The best place for me to start is with something people can relate to – in this case, film, and more specifically computer-animated films.

Think about your favourite 3D animated movie – Toy Story, Finding Nemo, The Incredibles, or even hybrid films like Avatar and Star Wars. Now think about your favourite 3D character (mine, without hesitation, has to be Neytiri from Avatar). Your favourite 3D character was built in a 3D environment that is similar to the virtual autonomous learner’s environment.

The difference is…

Your favourite character is driven either by (1) motion-capture techniques (as in Avatar) or (2) frame-by-frame animation. For the purpose of this post I am referring to the latter. Frame-by-frame animation is a relatively simple process that gives characters motion, action, and behaviour, and it underlies all character animation. The sole purpose of this technique (the end goal) is to make the character come alive.

Animating a character (bringing it to life) is a process that moves the character through time and space. For example, an animator places the character in a pose at point A at time A, then places it in another pose at point B at time B; as time passes, the character is interpolated from point A to point B. The result is movement from A to B, and this movement (action) simulates behaviour.
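As a rough sketch, moving the character from pose A to pose B is just interpolation between keyframes. The function names and the example poses below are hypothetical, but the idea is what frame-by-frame systems do under the hood:

```python
def lerp(a, b, t):
    """Linearly interpolate between values a and b for t in [0, 1]."""
    return a + (b - a) * t

def pose_at(time, time_a, pose_a, time_b, pose_b):
    """Return the character's position at `time`, blending pose A into pose B."""
    t = (time - time_a) / (time_b - time_a)  # fraction of the way from A to B
    return [lerp(pa, pb, t) for pa, pb in zip(pose_a, pose_b)]

# Character stands at point A = (0, 0, 0) at time 0 and point B = (4, 0, 2) at time 2.
# Halfway through (time 1), it is exactly halfway between the two points:
print(pose_at(1.0, 0.0, [0, 0, 0], 2.0, [4, 0, 2]))  # → [2.0, 0.0, 1.0]
```

The animator only specifies the two keyframes; the computer fills in every frame in between, which is why the character appears to move on its own.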

The point I am trying to make here is that your favourite character’s behaviour (action) is not real – the animator is a puppet master, controlling the movement, the action, and the behaviour.

Another example of this illusion of a 3D character being alive is the online 3D virtual world of Second Life.

So, all this talk about 3D characters, virtual worlds and puppet masters is to prime your imagination. However, there are a few more technical elements (Terminology & Concepts) I have to mention before I put it all together and paint a clearer picture of what the virtual autonomous learner is and what it potentially can do.

Terminology & Concepts 2

Artificial neural networks (ANN) are biologically inspired computational models that simulate the neural networks found in animals with a nervous system. ANN technology is widely used in economics, medicine, and the gaming industry. Two functions of ANN that are relevant to VAL are pattern recognition and motor control.
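To make “biologically inspired computation” concrete, here is a minimal sketch of a single artificial neuron, the building block of an ANN: it weighs its inputs, sums them, and squashes the result into a signal between 0 and 1. The sensor values and weights are invented purely for illustration:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs squashed by a sigmoid."""
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Hypothetical sensory readings feeding one motor neuron:
sensors = [0.9, 0.1, 0.4]    # e.g. simulated light, sound, touch
weights = [0.5, -0.3, 0.8]   # learned connection strengths
motor_signal = neuron(sensors, weights, bias=0.1)
print(round(motor_signal, 3))
```

A real ANN wires thousands of these neurons into layers; learning means adjusting the weights, which is how both pattern recognition and motor control emerge from the same simple unit.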

A dynamic physics simulation engine is a technology used in conjunction with 3D environments. As its name implies, it simulates the physics of the real world – most importantly, gravity.

A dynamic algorithm is a process, or a series of steps, that runs in a continuous loop. Artificial neural networks can be executed as a dynamic algorithm.

Putting it together

Now let’s put it all together. Here is where you have to use your imagination (I would say close your eyes – but then you would not be able to read this). Take your 3D character and place it standing in a 3D virtual world. You are the puppet master of your character. If you sever the proverbial puppet’s strings, and if your environment has a dynamic physics engine, your character will fall to the virtual ground like a rag doll.

It falls because it has nothing controlling it.
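A toy sketch of that fall, assuming a simple per-frame gravity update (the drop height and step size are arbitrary):

```python
GRAVITY = -9.81  # metres per second squared, pulling along the y-axis

def simulate_fall(y, velocity=0.0, dt=0.01):
    """Step the character's height until it hits the virtual ground (y = 0)."""
    steps = 0
    while y > 0:
        velocity += GRAVITY * dt  # gravity accelerates the body downward...
        y += velocity * dt        # ...which moves it a little each frame
        steps += 1
    return steps * dt             # time taken to reach the ground

print(round(simulate_fall(2.0), 2))  # a 2 m drop takes about 0.64 s
```

This is all a physics engine does to an uncontrolled body: apply gravity, frame after frame, until something stops it.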

That’s where artificial neural networks (motor control) come into the picture. Attaching an ANN to your character makes it autonomous: since the neural net controls the character and is also integrated within the character, by definition the character is controlling itself. This control process (motor control) is the output. Then what is the input, you ask…

The input – actually multiple inputs – is also based on artificial neural networks: the sensory inputs. These input devices of the virtual character can simulate our five senses (sight, smell, touch, taste, and hearing) in a virtual environment. This sensory process (pattern recognition) is the input.

What I have described so far is an input/output system: a dynamic algorithm. But this algorithm alone cannot develop intelligent behaviour (or any other kind of behaviour), because it is not complete. What lies between the input and the output is a type of massively distributed information processing. I, along with other researchers, believe the main goal of this processing is survival, and its most important element is learning.
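The whole loop can be sketched as follows. Every function here is a hypothetical stand-in for what would, in VAL, be a neural network; the “learning” is reduced to its crudest possible form, just to show where it sits in the loop:

```python
def sense(environment):
    """Pattern recognition stub: read simulated sensory inputs (hypothetical)."""
    return [environment["light"], environment["touch"]]

def process(inputs, memory):
    """Massively simplified in-between processing: learn by remembering."""
    memory.append(inputs)             # learning, at its very crudest
    return sum(inputs) / len(inputs)  # collapse the senses into one drive signal

def act(signal):
    """Motor control stub: turn the processed signal into movement."""
    return "step forward" if signal > 0.5 else "stay put"

# The dynamic algorithm: a continuous sense -> process -> act loop.
memory = []
environment = {"light": 0.8, "touch": 0.3}
for tick in range(3):
    command = act(process(sense(environment), memory))
    print(tick, command)
```

Input, processing, and output run in a continuous loop; everything interesting – and everything still debated – lives inside `process`.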

The actual shape and form of this process is a subject of great debate; it is called the cognitive architecture of the agent (the agent being VAL).

At the beginning of this post I mentioned that VAL is not AI – it’s something new. The reason is as follows:

AI (artificial intelligence) research has traditionally focused on symbolic processing as a means to achieve intelligent behaviour – it is not concerned with the body or the environment, only with the mental processes of the mind.

VAL (virtual autonomous learner) is set up in a construct that simulates the body and its environment as a means to develop intelligence. With this perspective, the mind is embodied, and cognition is grounded in sensory-motor processes.

The Construct

The virtual autonomous learner is a 3D agent living in a 3D environment (the construct).

The software allows researchers/students to manipulate cognitive variables within the 3D agent.

The construct is built with open-source neural network simulation software & customized plug-ins.

Dynamic algorithms simulate behaviour.

To facilitate the acquisition of appropriate behaviour, a rapid prototyping methodology is used.

Phase one

Presently, I am racking my brain over how to organize all of the features and elements. That is, I am in the process of developing the graphical user interface for the VAL construct. Soon I will post screenshots of the three main modules of the application. As it stands now, the three modules are: (1) the virtual autonomous learner’s 3D environment, (2) developmental & learning interventions, and (3) curriculum data tracking.

A note on artificial neural networks: pattern recognition is an input-processing system and motor control is an output system. In a future post, dedicated to ANN, I will articulate in detail how they work, but for now, all you need to know is that they can do really amazing things.