From Imitation Games To The Real Thing: A Brief History Of Machine Learning

While the ancient Greeks obviously didn’t have anything like artificial intelligence (AI) or machine learning (ML), at least they dreamt of something akin to it. Hephaestus, the Greek god of blacksmiths, metalworking and carpenters, was said to have fashioned artificial beings in the form of golden robots.

Myth finally moved toward truth in the 20th century, as AI developed in a series of fits and starts, finally gaining major momentum—and reaching a tipping point—by the turn of the millennium. Here’s how the modern history of AI and ML unfolded, starting in the years just following World War II.

The 1950s: The Birth Of Modern AI

In 1950, while working at the University of Manchester, legendary code breaker Alan Turing (subject of the 2014 movie The Imitation Game) published a paper titled “Computing Machinery and Intelligence.” It became famous for positing what is now known as the “Turing test.” Starting with the general question, “Can machines think?,” Turing asked, “Are there imaginable digital computers which would do well in the imitation game?” In this game, an interrogator poses the same questions to a computer and a human and tries to determine which player is which based on the answers they provide.

A year later, Marvin Minsky built SNARC, the first artificial neural network computer, while earning his Ph.D. in mathematics at Princeton. Assembled from parts that included a surplus gyropilot from a B-24 bomber, the SNARC simulated a rat finding its way through a maze. In the decades that followed, Minsky would play a prominent and sometimes controversial role in the history of AI.

In 1956, cognitive scientist John McCarthy introduced the term “artificial intelligence” at a summer workshop known as the Dartmouth Conference. The extended brainstorming session on the Dartmouth campus lasted roughly eight weeks and laid the groundwork for exploring how to make machines capable of abstract thought, problem solving and self-improvement. It was, by many accounts, the gathering where AI was born.

Just a year later, Frank Rosenblatt invented the perceptron algorithm at the Cornell Aeronautical Laboratory. One of its first implementations was in the “Mark I Perceptron,” a machine designed for image recognition. Described as “a pattern learning and recognition device,” the Mark I consisted of 400 photocells randomly connected to its artificial neurons, making it one of the earliest prototype neural networks.
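To see how simple the underlying idea was, here is a minimal sketch of the perceptron learning rule in modern Python. It is an illustration rather than the Mark I’s actual wiring, and the OR-gate training data is a toy example chosen for brevity:

```python
# Toy training set: the two-input OR gate, which is linearly separable.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w, b = [0.0, 0.0], 0.0  # two weights and a bias, all starting at zero

def predict(x):
    """A threshold unit: fire (1) if the weighted sum exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Rosenblatt's rule: nudge the weights toward every misclassified example.
for epoch in range(20):
    mistakes = 0
    for x, target in data:
        error = target - predict(x)  # -1, 0, or +1
        if error:
            mistakes += 1
            w[0] += error * x[0]
            w[1] += error * x[1]
            b += error
    if mistakes == 0:
        break  # converged: every example is classified correctly

print("weights:", w, "bias:", b)
print([predict(x) for x, _ in data])  # [0, 1, 1, 1]
```

Run on a separable problem like OR, the rule is guaranteed to converge in a finite number of updates; that guarantee is exactly what breaks down in the XOR episode described below.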

The 1960s: From Expert Systems To Experts’ Pessimism

While McCarthy was a professor at Stanford, Edward Feigenbaum, a founder of the university’s computer science department, led the project that developed the very first expert system, DENDRAL. Such systems emulate the decision-making ability of a human expert, solving complex problems by reasoning through bodies of specialized knowledge. Fittingly, Feigenbaum’s team used the LISP programming language developed by McCarthy. Computerized expert systems would later become a hit in the business world.

Yet by 1969, interest in neural network research declined after Minsky and Seymour Papert published their book, Perceptrons. In it, they demonstrated that a single-layer perceptron cannot learn the XOR, or “exclusive or,” function. This is a two-input logic operation whose output is true if exactly one input is true; if both inputs are true or both are false, the output is false. The authors’ glum conclusions are commonly cited today as changing the direction of AI research, and not for the better.
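A small brute-force experiment makes the book’s point concrete. The sketch below (my illustration, not code from Perceptrons) tries every weight setting on a coarse grid: many single-layer threshold units compute AND, but none computes XOR, because no single straight line can separate XOR’s true cases from its false ones:

```python
def fits(table, w1, w2, b):
    """Does this single threshold unit reproduce the whole truth table?"""
    return all((w1 * x1 + w2 * x2 + b > 0) == bool(out)
               for (x1, x2), out in table)

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

grid = [x / 2 for x in range(-8, 9)]  # weights and bias from -4 to 4
for name, table in (("AND", AND), ("XOR", XOR)):
    hits = sum(fits(table, w1, w2, b)
               for w1 in grid for w2 in grid for b in grid)
    print(name, "solved by", hits, "of", len(grid) ** 3, "units")
# Prints a healthy count for AND and exactly 0 for XOR.
```

A coarse grid is not a proof, of course; Minsky and Papert’s argument was mathematical. But the empty result for XOR shows the geometric obstacle their book made famous.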

The 1970s And ’80s: A Double Dose Of AI Winter

Sometime around 1974, pessimism among researchers and decreased funding from government and private sources cast a pall on commercial development and academic research around AI. The period came to be known as the “AI winter,” and just as Minsky had helped usher in this one, he would play a part in the next.

Fueled by enthusiasm and speculation, AI and early machine learning technologies became the focus of attention and investment once again in the early 1980s. Then came 1984 and the fateful annual meeting of what was then known as the American Association for Artificial Intelligence. There, Minsky and AI theorist Roger Schank warned the business community that investor enthusiasm for artificial intelligence would eventually lead to disappointment. By 1987, AI investment began to collapse, hitting rock bottom by 1990.

Keeping The Flame Of AI And Machine Learning Alive

Yet even as investors fled, keen academics and researchers kept forging ahead. In 1986, Geoffrey Hinton, then a professor at Carnegie Mellon University, co-authored a landmark paper describing a learning procedure, the back-propagation algorithm, for training multi-layer neural networks. It marked a major step beyond the single-layer perceptrons whose limitations had stalled neural network research in the 1970s.
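As a rough illustration of the idea (not Hinton’s original code), the sketch below uses back-propagation to train a tiny two-layer network on XOR, the very function a lone perceptron cannot learn. The network size, learning rate and iteration count are arbitrary choices for the demonstration:

```python
import math
import random

random.seed(0)

N_HIDDEN = 4
# Hidden layer: each unit holds [weight_x1, weight_x2, bias].
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(N_HIDDEN)]
# Output unit: one weight per hidden unit, plus a final bias term.
w_o = [random.uniform(-1, 1) for _ in range(N_HIDDEN + 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x1, x2):
    h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
    y = sigmoid(sum(w_o[i] * h[i] for i in range(N_HIDDEN)) + w_o[-1])
    return h, y

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
lr = 0.5

for _ in range(10000):
    for (x1, x2), target in data:
        h, y = forward(x1, x2)
        # Output-layer error signal (the sigmoid derivative is y * (1 - y)).
        d_y = (y - target) * y * (1 - y)
        for i in range(N_HIDDEN):
            # Propagate the error back through hidden unit i, then update.
            d_h = d_y * w_o[i] * h[i] * (1 - h[i])
            w_h[i][0] -= lr * d_h * x1
            w_h[i][1] -= lr * d_h * x2
            w_h[i][2] -= lr * d_h
            w_o[i] -= lr * d_y * h[i]
        w_o[-1] -= lr * d_y

for (x1, x2), target in data:
    _, y = forward(x1, x2)
    print((x1, x2), "->", round(y, 2), "target", target)
```

Small networks like this can occasionally stall in a poor local minimum; re-running with a different random seed usually fixes it. The key point is the backward pass: the chain rule routes the output error to every weight, something the single-layer learning rule could never do.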

Yann LeCun, a pioneer in the field of AI, is now VP and Chief AI Scientist at Facebook. (Photo credit: Facebook)

Around that time, a former postdoctoral researcher in Hinton’s lab, French scientist Yann LeCun, began working at AT&T Bell Laboratories, where he developed a number of new machine learning methods. These included the convolutional neural network. Essentially a multi-layered network modeled after the visual cortex in animals, it learns to detect visual patterns directly from pixels. LeCun’s work contributed to the advancement of image and video recognition, as well as natural language processing. He remains a giant in AI and machine learning to this day.
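The heart of that architecture is the convolution operation itself. The toy sketch below (an illustration, not LeCun’s LeNet) slides one small filter across an image, so the same local pattern detector is applied at every position; the edge-detecting kernel and the 4x4 “image” are invented for the example:

```python
# A toy 2D convolution: slide a small kernel over the image and take
# a weighted sum at each position (no padding, stride of 1).
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + a][j + b] * kernel[a][b]
                for a in range(kh)
                for b in range(kw)
            )
    return out

# An invented 4x4 "image" that is dark on the left, bright on the right.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# A hand-made filter that responds where brightness jumps left to right.
kernel = [[-1, 1],
          [-1, 1]]

for row in conv2d(image, kernel):
    print(row)  # peaks in the middle column, right at the vertical edge
```

In a real convolutional network the filter values are not hand-made but learned by back-propagation, and many filters are stacked in layers; the sliding-window idea, however, is exactly this.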

AI also received a big boost in the early 1990s. In 1991, the U.S. military began using the Dynamic Analysis and Replanning Tool (DART), an artificial intelligence program that optimized the transport of supplies and personnel while solving strategic planning challenges. Within four years, DART had saved enough money in the Gulf War to recoup 30 previous years’ worth of DARPA funding for AI research.

The 1990s And Beyond: AI And Machine Learning’s Eternal Spring

From the mid-1990s on, AI and machine learning have surged at breakneck speed, moving from breakthrough to breakthrough. A major portion of the credit goes to the growth of computing power, which tracked the early forecasts of Intel co-founder Gordon Moore, now known as “Moore’s Law.”

Moore’s Law: The observation, made by Moore in 1965, that the number of transistors on a chip doubles roughly every year. In 1975, Moore revised the projection to a doubling every two years.
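As a quick back-of-the-envelope illustration, the snippet below compounds both projections from the same starting point; the 1975 baseline transistor count is an assumed round figure for the example, not a historical measurement:

```python
# Compound Moore's two projections from a hypothetical 1975 baseline.
BASE_YEAR = 1975
BASE_COUNT = 65_000  # assumed round figure, for illustration only

for year in (1985, 1995, 2005):
    elapsed = year - BASE_YEAR
    yearly = BASE_COUNT * 2 ** elapsed            # 1965 projection
    two_yearly = BASE_COUNT * 2 ** (elapsed / 2)  # 1975 revision
    print(f"{year}: {yearly:.2e} vs. {two_yearly:.2e} transistors")
```

Even the slower two-year doubling compounds to a thousandfold increase in two decades, which is the computing-power story behind machine learning’s resurgence.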

That brings us to today, and the realization of much that Turing, McCarthy and other trailblazers envisioned. In a 2017 lecture to students at the Stanford Graduate School of Business, Andrew Ng, then chief scientist at Chinese internet giant Baidu, declared that AI and machine learning had entered an “eternal spring.”

The title of his talk was fitting: “Artificial Intelligence Is The New Electricity.” In it, he cited three reasons for AI and machine learning’s current vigor:

The availability of big data

Supercomputing power to process it over large neural networks

Modern algorithms

Imagine what the ancient Greeks would think. Next to the brilliant future of machine learning, golden robots pale in comparison.
