As shown in Figure 8.5, the predicted user's action could be fed back
into the time series, doing away with any human input altogether. Note
the perfectly symmetric paths of the data flow. There are no vision
systems in operation; instead, there are only two graphics systems,
which depict the ARL system's actions. The actions are fed back into
its short-term memory, which represents the past virtual actions of
both user A and user B.
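The feedback loop can be sketched as follows. This is a minimal illustration, not the ARL implementation: the window length, the linear stand-in predictor, and all names here are assumptions made for the example. The point is only the data flow, in which each predicted action is appended to the short-term memory that conditions the next prediction.

```python
import numpy as np

# Illustrative sketch of the zero-user feedback loop. The "short-term
# memory" is a sliding window over the past actions of both virtual
# users; a linear map stands in for the learned predictor.

WINDOW = 5                                        # assumed memory length
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.3, size=2 * WINDOW)  # stand-in predictor

# Memory holds interleaved past actions of virtual users A and B.
memory = list(rng.normal(size=2 * WINDOW))

def predict(mem):
    """Stand-in for the learned predictor: a linear map on the memory."""
    return float(np.dot(weights, mem))

for step in range(10):
    action_a = predict(memory)        # synthesize user A's action
    memory = memory[1:] + [action_a]  # feed it back into short-term memory
    action_b = predict(memory)        # synthesize user B's reaction
    memory = memory[1:] + [action_b]  # feed that back as well
```

Note that no real measurement ever enters the loop: every entry in the memory is eventually a function of earlier predictions.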

Unlike the previous operation mode, in which at least one human
generated a component of the signal, here both components are
synthesized. There is thus no 'real' signal to provide stability.
Since both signals are virtual, the ARL system is more likely to
exhibit instabilities: there is no real data to 'pull it back down to
Earth', and these instabilities can grow without bound. Therefore,
unless properly initialized, the system will not bootstrap.
Furthermore, even when some interaction does emerge, the system often
locks into a loop of meaningless repetitive behaviour. These
instabilities arise because both halves of the time series are
entirely synthesized, with no real human tracking to anchor them.
Modifications are therefore being investigated for zero-user,
two-computer configurations. However, this is not the most important
mode of operation and remains a lower priority.
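The growth of these instabilities can be illustrated with a toy example. This is a hypothetical sketch, not the ARL predictor: it assumes a scalar predictor that slightly amplifies its input (gain above one). Fed back on itself, the error compounds exponentially, whereas blending in a bounded 'real' observation keeps the trajectory under control.

```python
import math

# Hypothetical illustration: a predictor that overshoots by 5% (the
# gain is assumed, for illustration only) diverges under pure
# feedback, but stays bounded when anchored by real bounded input.

GAIN = 1.05

def run(steps, real_input=None):
    x = 1.0
    for t in range(steps):
        x = GAIN * x                           # prediction fed back on itself
        if real_input is not None:
            x = 0.5 * x + 0.5 * real_input(t)  # anchor with real data
    return x

diverging = run(200)                                 # grows like 1.05**200
anchored = run(200, real_input=lambda t: math.sin(0.1 * t))
```

With the anchor, each step contracts the state toward the bounded input, which is the stabilizing role the real human signal plays in the one-user modes.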

Thus we have enumerated several different modes of operation that the
ARL system can support. These include behaviour learning, interaction,
simulation, prediction and filtering. Analyzing the ARL framework at a
modular level enables such abstractions as well as the use of
alternate modules and different data flow paths.