Will AI and robotics spell the end of human dominance? Forrester analysts say maybe, in a hundred years or so.

According to Forrester, there are two types of #AI. #PureAI is the type that is commonly depicted in futuristic movies. #PragmaticAI is what is currently being developed and applied to real-world use cases.

Elon Musk, Bill Gates, and Stephen Hawking—to name just a few—have all broached the subject of intelligent robots taking over. And it’s been a favorite topic of Hollywood for decades. Think back to the “Terminator” and “Matrix” franchises, and, more recently, movies such as “Ex Machina.”

A different Hollywood take on AI is the movie “Her,” which features a computer operating system (Samantha, voiced by Scarlett Johansson) so intelligent that it develops an emotional attachment to the movie’s protagonist, Theodore (played by Joaquin Phoenix). This take on AI ups the ante from militant, oppressive robotics to a much higher level of sophistication—human emotions.

The question is this: Will any of us live to see a Samantha-like operating system or a robotic dominance on planet Earth?

Forrester on AI

In a Forrester Research Podcast, “The Risks and Rewards of Artificial Intelligence,” Analysts Mike Gualtieri and TJ Keitt label the Hollywood vision of Artificial Intelligence as Pure AI. The two analysts mention that some researchers predict that “100 years from now, we have a 75% chance of having Pure AI.” I was actually surprised by the immediacy of this estimate. It’s coming much faster than I thought.

The two analysts point out that Pure AI is one of two major classifications of artificial intelligence. The other major division is Pragmatic AI, or the AI that is actually being developed and applied to real-world use cases.

Pragmatic AI

Gualtieri and Keitt delineate eight types of Pragmatic #AI (note that they said “nine” in the podcast, but I counted only eight), which can be applied alone or in combination with one another. The types are as follows:

Machine learning (used to create predictive models. Gualtieri and Keitt mention that this type of AI is most useful for enterprise applications.)

Speech recognition

Image and video analysis

Sensory perception (In many cases, for AI to be “smart” it needs to have sensory input.)

Natural language understanding (NLU, which is used for contextual analysis.)

Natural language generation (NLG)

Robotics
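The first of these, machine learning, comes down to fitting parameters to labeled examples so the result can make predictions about new cases. Here is a minimal, hypothetical sketch of that idea (the churn scenario, data, and function names are my own illustration, not from the podcast): it “learns” a single decision threshold from labeled examples and then uses it to predict.

```python
# A minimal sketch of machine learning as predictive modeling:
# learn the one-feature threshold that best separates the labeled
# training examples, then predict with it. Hypothetical data.

def train_threshold(examples):
    """Pick the feature threshold with the fewest misclassifications."""
    best_threshold, best_errors = None, len(examples) + 1
    for candidate, _ in examples:
        errors = sum(
            (value <= candidate) != label  # predict churn when usage is low
            for value, label in examples
        )
        if errors < best_errors:
            best_threshold, best_errors = candidate, errors
    return best_threshold

# feature = weekly hours of product usage, label = True if the customer churned
training_data = [(1, True), (2, True), (3, True),
                 (9, False), (11, False), (12, False)]
threshold = train_threshold(training_data)

def predict_churn(hours):
    """Apply the learned model to a new customer."""
    return hours <= threshold
```

Real enterprise systems use far richer models, but the shape is the same: historical examples in, a fitted decision rule out.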

Real-World Concerns with Pragmatic AI

It’s clear from the podcast that Gualtieri and Keitt believe any AI—even the pragmatic variety—poses genuine social problems that need to be solved. One of their biggest concerns is data bias. Like natural intelligent beings, the artificial kind need to be taught how to think and behave. That teaching happens by feeding sample data sets into the system, which trains it in the process.

An example the analysts use is facial recognition, which they say is inherently biased by its training data sets. They highlight the fact that facial recognition systems designed in Asia are very good at identifying minute distinctions among Asian faces, since Asians make up the large majority of the subjects used to train the systems. But the same systems tend to be less accurate for non-Asian populations.

The exact analog exists for systems built in North America, for example, which might be good at distinguishing one Caucasian face from another but less effective for ethnic minorities. The implications are profound and potentially catastrophic, especially in law-enforcement applications, where inaccurate facial recognition can lead to dire outcomes.

Data Bias Expanded

Gualtieri and Keitt expand on the data bias problem by pointing out that any AI system is likely to have the biases of its creators built into it—the natural result of the data chosen to train the system. For me, that’s perhaps the most telling point of all—that AI will be no more objective in its predictions or conclusions than its human counterparts.