Augmented cognition: Science fact or science fiction?

We live in a time in which we are overwhelmed by information obtained from multiple sources, such as the internet, television, and radio. We are usually unable to give our undivided attention to any one source of information, but instead give ‘continuous partial attention’ to all of them by constantly flitting between them. The limitations of cognitive processes, particularly attention and working memory, place a ceiling on the capacity of the brain to process and store information. It is these processes that some researchers aim to enhance through augmented cognition, an emerging field that uses computational technology to improve human performance on various tasks by overcoming the bottlenecks in processes such as attention and memory.

Whereas brain-computer interfaces enable people to control various aspects of their environment, the goal of augmented cognition is to determine people’s cognitive state in order to enhance it. Augmented cognition has many potential military applications, and its proponents promise that it will also greatly improve productivity in the workplace. Hence, the Defense Advanced Research Projects Agency (DARPA) is conducting research in the area, and corporations such as Microsoft are also showing interest and funding research. Research utilizes a three-pronged approach, whereby advances in cognitive and neural science are combined with information technologies from industry and academia to develop technologies for enhancing human cognitive capabilities.

Attention

Before information can be processed in working memory, it must first be attended to by the senses, which are the windows through which the brain perceives its environment. By attention, we mean focusing on a particular aspect of the environment while ignoring others. Psychologists have yet to improve on the definition of the term provided by William James in the late 19th Century:

…attention is…the taking possession by the mind in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought…It implies withdrawal from some things in order to deal effectively with others.

Being tied closely to perception, attention forms the basis of all other cognitive processes, and is perhaps the most extensively studied of all of them. With augmented cognition, researchers hope to optimize the allocation of attention, and to integrate multiple information sources so that the data may be used more efficiently.

Attention-enhancing devices could potentially be very useful if applied to situations in which people are required to make quick decisions within a demanding work environment. If they were to prove beneficial, such devices would need to be programmed to deal with the uncertainty that is often involved in the decision-making process.

One device that is currently being developed is the CogPit, a “smart” cockpit for fighter aircraft of the future. The CogPit uses an electroencephalogram (EEG) to take readings of the brain’s electrical activity while a pilot uses the conventional controls of the craft. It is a closed-loop simulation: brain activity is monitored, and specific patterns of brain waves – those associated with stress, for example – trigger the system into action.

By filtering out irrelevant information, the CogPit system could reduce the pilot’s stress levels, enabling them to focus their attention on the most important information. It could provide assistance or, if necessary, take complete control of the aircraft if the pilot is under excessive stress. Currently, the CogPit system is fully equipped with flight instruments, including a radar warning receiver which detects surface-to-air missiles, and a “targeting pod” which can locate, track and destroy targets. At the moment, however, it is just a simulation, and has not yet been put to use in a real aircraft. Furthermore, the accuracy with which electroencephalography and similar techniques can determine a person’s stress levels or emotional states is as yet unclear.
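The closed-loop idea behind the CogPit can be caricatured in a few lines of code: read an estimate of the pilot’s stress, and filter the information displayed when stress crosses a threshold. Everything in this sketch – the threshold value, the stress reader, the alert format – is invented for illustration; the real system’s signal processing would be far more complex.

```python
import random

STRESS_THRESHOLD = 0.7  # arbitrary cut-off for this toy example

def read_stress_level():
    """Stand-in for EEG-derived stress estimation (here, just noise)."""
    return random.random()

def filter_alerts(alerts, stressed):
    """Under high stress, show only the highest-priority alerts."""
    if not stressed:
        return alerts
    return [a for a in alerts if a["priority"] == "high"]

alerts = [
    {"msg": "SAM launch detected", "priority": "high"},
    {"msg": "Fuel at 60%", "priority": "low"},
    {"msg": "Routine nav update", "priority": "low"},
]

stress = read_stress_level()
shown = filter_alerts(alerts, stress > STRESS_THRESHOLD)
for alert in shown:
    print(alert["msg"])
```

The point of the sketch is only the loop structure: monitor, classify, and adapt the display, rather than any particular signal-processing method.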

Memory

The mechanisms by which the brain generates memories are still elusive, although studies of brain-damaged patients have led cognitive psychologists to develop a number of theoretical frameworks. Within these frameworks, memory is usually sub-divided into a number of distinct but interrelated processes. An influential model was developed in the 1960s by Atkinson and Shiffrin, who thought in terms of short-term memory and long-term memory (hereafter referred to as STM and LTM, respectively). According to this model, all information must first pass through STM before being transferred to LTM. STM can store a limited amount of information for a few seconds. Exactly how the transfer from STM to LTM takes place is unclear, but Atkinson and Shiffrin proposed that information needs to be “rehearsed” in order to be consolidated and stored in long-term memory. As far as we know, the capacity of the human brain to store long-term memories is essentially unlimited.
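The modal model’s two stores can be sketched as a toy program: a bounded short-term buffer in which new items displace old ones, and an unbounded long-term store that only rehearsed items reach. The capacity of seven and the rehearsal rule are illustrative simplifications of the model, not parameters from Atkinson and Shiffrin’s paper.

```python
from collections import deque

class ModalMemory:
    """Toy sketch of the Atkinson-Shiffrin modal model."""
    STM_CAPACITY = 7  # "seven, plus or minus two"

    def __init__(self):
        # deque(maxlen=...) silently drops the oldest item when full,
        # standing in for decay/displacement in STM
        self.stm = deque(maxlen=self.STM_CAPACITY)
        self.ltm = set()

    def perceive(self, item):
        self.stm.append(item)  # all information passes through STM first

    def rehearse(self, item):
        if item in self.stm:   # only items still in STM can be consolidated
            self.ltm.add(item)

mem = ModalMemory()
for digit in "0123456789":
    mem.perceive(digit)

print(list(mem.stm))  # only the last 7 items survive in STM
mem.rehearse("9")
print(mem.ltm)        # the rehearsed item is now stored long-term
```

Note that rehearsing an item already displaced from the buffer (say, "0" above) does nothing, mirroring the model’s claim that unrehearsed information is lost before it reaches LTM.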

Atkinson and Shiffrin’s “modal” model of memory is supported by the study of patients with various kinds of amnesia. One famous amnesic is a patient known as H. M., who was described by Brenda Milner. In order to alleviate H. M.’s severe epileptic symptoms, surgeons performed bilateral excisions on his medial temporal lobes. The procedure involved the removal of parts of the hippocampus, and had major consequences. As a result of the surgery, H. M. became profoundly amnesic – he was unable to store any new memories, although those memories that had formed prior to the surgery remained intact. H. M. had unimpaired STM but severely disrupted LTM.

H. M.’s condition is known as anterograde amnesia (the inability to encode new memories). Patients with retrograde amnesia exhibit the reverse pattern: they have no difficulty encoding new memories, but cannot remember those memories encoded before the onset of their amnesia. Together, anterograde and retrograde amnesics provide compelling evidence that memory can indeed be subdivided into distinct short-term and long-term stores.

Experiments have shown that STM can store around seven (plus or minus two) pieces of information. This upper limit on the amount of information that can be held in STM can be effectively raised by a simple process called “chunking”. As an example, try remembering the order of this string of 15 letters: OACBNHLDACBLCNB. Most people have great difficulty in carrying out this memory task. If, however, the same letters are remembered as a series of well-known acronyms (ABC, BBC, CNN, DHL, AOL), they are far easier to recall.
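The arithmetic of the chunking example can be made explicit: the five familiar acronyms use exactly the same letters as the scrambled 15-letter string, yet reduce fifteen items to five chunks. A minimal sketch:

```python
# Chunking: the same letters, regrouped into familiar units.
letters = "OACBNHLDACBLCNB"                   # hard to hold in STM
chunks = ["ABC", "BBC", "CNN", "DHL", "AOL"]  # same letters, regrouped

# Verify the chunks are a rearrangement of the original string.
assert sorted("".join(chunks)) == sorted(letters)

print(len(letters))  # 15 individual letters
print(len(chunks))   # 5 chunks, comfortably within "seven plus or minus two"
```

The saving comes entirely from recoding: each chunk is retrieved as a single unit from long-term knowledge, so short-term memory holds five items instead of fifteen.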

In the 1970s, Baddeley and Hitch proposed the term ‘working memory’ to describe the temporary storage and manipulation of information during the performance of a task or the solving of a problem. Baddeley further suggested the term ‘articulatory loop’ for the rapid verbal repetition of the information that is to be remembered. In principle, the articulatory loop is very similar to the process of rehearsal which Atkinson and Shiffrin suggested is needed for memories to pass from STM into LTM. In fact, the relationship between short-term memory and working memory is unclear. Some researchers consider them to be the same, while others believe them to be distinct but related processes.

Although there has been recent work on drugs that enhance working memory, research into how augmented cognition might do so is virtually non-existent. So how do proponents of augmented cognition think it will enhance human memory capacity? It will, according to Dylan Schmorrow, director of DARPA’s AugCog program:

…circumvent fundamental human limitations by engineering work environments that will make it easier for people to encode, store and retrieve the information presented to them [and] develop interfaces that are context-sensitive by presenting material in relation to the context in which it is encountered. This will be accomplished by embedding information in distinctive, image-able, and multi-sensory contexts, so as to provide memory hooks that naturally engage the human mind.

Skeptics say that augmented cognition is no more than science fiction. As we have seen, memory and, to some extent, attention, are abstract concepts. There is no general consensus on a definition of either term, let alone on how they work. Herein lie the limitations of augmented cognition: it is based on theoretical models of cognitive processes, and it is, therefore, difficult to imagine how one could enhance processes that are not fully understood.

DARPA-funded researchers are well aware of the limitations. This is how one group concluded the paper they presented at the 36th Hawaii International Conference on System Science in 2002:

Although our pilot experiment suggests there may be an advantage of the augmented approach in a specific situation, there is much to be done before we are ready to design routinely attention-enhancing tools to optimize human attention allocation and to incorporate uncertainty in real-time decision-making.

Nevertheless, Schmorrow says that “there have been some profound advances in the last 6 months”. The mission of the AugCog program he directs is:

…to extend, by an order of magnitude or more, the information management capacity of the human-computer warfighting integral by developing and demonstrating quantifiable enhancements to human performance in diverse, stressful, operational environments…[and to] empower one human’s ability to successfully accomplish the functions currently carried out by three or more individuals.

Schmorrow has a vision of a symbiosis between man and machine, resulting ultimately in:

decision forecasting tools which exploit human inquisitiveness…monitoring of decision-maker paths through context-rich knowledge space…continuous, autonomous reconciliation of computer behaviors to human mental models and decision-making needs…[and] system interfaces which help people remember.

Schmorrow believes that exploitation of the “inexorable progress in digital computation and storage [combined with a greater] understanding of human brain function”, makes the realization of these goals inevitable in the near future. According to the agency’s website, all this will be accomplished within 5 years.

__________________________

Here’s a short film entitled The Future of Augmented Cognition, commissioned by DARPA and directed by Alexander Singer, who is probably best known for making episodes of television series such as The Fugitive, Hill Street Blues, Star Trek: The Next Generation and Deep Space Nine. The film is set in the year 2030, and takes place in a command centre which monitors cyberspace activity for threats to the global economy; it is a depiction of DARPA’s vision of how augmented cognition will in the future be used to integrate multiple sources of information.


What really bothers me is the next question: do the neural correlates of the memories for which H. M. and retrograde amnesiacs are amnesic still exist but remain inaccessible, or are they simply “washed away”? The answer to that question would actually reveal much more than the question itself asks.

Great post. I recently finished Moreno’s book and will review it shortly. A bit of a mystery to me why there is so much emphasis on “augmented cognition” for the human/computer combo and so little on “trained cognition” for the human part. I would imagine the “Army of One”, now enlisting so many young soldiers for such a diversity of assignments, would need to ensure versatility and real-time adaptability, while reducing dependency upon high-tech gadgets. (Having said that, I experienced the powers of “augmented cognition” when I bought my first PalmPilot, and know how useful technology can be. But, the brain itself being the most evolved “high-tech gadget”, we should learn how to use it, in my view).

Rene Magritte said “this is not a pipe”. Yes, “attention” and “memory” are constructs and we still have much to learn. But I disagree with the notions that 1) there are “ceilings” for improvement, 2) they are very close to where we are today, and 3) “trained cognition” is therefore not an area of both reality and promise. Neuropsychologists can assess improvements and transfer based on independent measures. Those assessments are not perfect, but a “pipe” is not a pipe either.

In the military context, there has been a lot of work on cognitive simulations for development of attentional control, by Prof. Gopher and Prof. Shebilske. You can read this interview with Gopher, and Shebilske’s comment.

Working Memory can be trained, both in kids and adults, ADD/ ADHD and “normal” (most of the published research so far refers to kids with ADD/ ADHD, but there is clinical and empirical evidence for the other groups). The whole field of cognitive rehabilitation for patients with strokes and TBI relies heavily on neuroplasticity and cognitive training; and Michael Merzenich has shown how to train auditory processing.

Finally, I know that West Point has been using Heart Rate Variability-based biofeedback for cadet stress management training, will see if I can get some reference.

I guess the field is so recent that evidence is scattered and not well discussed (on top of not being perfect). Maybe it is time for a journal or conference on this precise topic. Not being a scientist myself, I am not aware if something like this already exists, what I know is that the information seems very fragmented and appearing in different locations.

Alvaro, thanks for the comment and the great links – they tie in nicely.

I’m also reading Moreno’s book and will be reviewing it shortly, but this post was inspired – for want of a better word – by an article published in the Economist last September. I fully agree that rather than relying on technology, we need to learn to harness the real power of our grey matter by natural means. (I am NOT referring to the old myth that we only use 10% of our brains.)

1) Manipulating the brain in this fashion is not possible now, nor will it be in the next 20, 30, or 50 years. I am a huge advocate of technology and assume that it will progress in rapid fashion, but I know enough about the brain to know this is simply not possible.
2) I have a PhD in neuroscience from a top US academic research institution. Trust that I know what I’m talking about.
3) Besides technical limitations, there are huge practical and ethical problems with this sort of undertaking. These are relatively resolvable compared to the technical problems, however.
4) DARPA spends huge amounts of money on projects that have essentially no chance of ever becoming reality. It’s unfortunate, but true. This is one of those projects, clearly.