A collection of observations, news and resources on the changing nature of innovation, technology, leadership, and other subjects.

June 11, 2018

The Impact of AI on R&D and Innovation

For the past couple of centuries, general-purpose technologies (GPTs) have been the key drivers of productivity and economic growth, thanks to their rapid improvements in price and performance, pervasive applications, and ability to spur complementary innovations across a wide variety of industries. The steam engine, electricity and the internal combustion engine are prominent examples of GPTs in the 18th and 19th centuries. More recently, semiconductors, computers and the Internet have led to the digital revolution of the past several decades.

Beyond innovations in existing sectors, the rapidly improving price/performance of GPTs has led over time to the creation of whole new applications and industries. For example, the steady declines in the price of electric power and the improvements in the efficiency of electric motors led to the radical transformation of manufacturing in the early part of the 20th century with the advent of the assembly line. They also led to the creation of the consumer appliance industry, e.g., refrigerators, dishwashers, washing machines. Similarly, the rise of the semiconductor industry drove the historical transition from the industrial economy of the past two centuries to our ongoing digital economy.

How about artificial intelligence? Beyond its use by leading edge technology companies, we’re still in the early stages of AI deployment. It’s only been in the last few years that major advances in machine learning have taken AI from the lab to early adopters in the marketplace. While considerable innovations and investments are required for its wider deployment, AI is likely to become one of the most important GPTs of the 21st century.

Last year I heard a simple, compelling explanation for AI as a GPT at a seminar by University of Toronto professor Avi Goldfarb, who along with colleagues has been conducting research on the economics of machine intelligence. In a 2017 article, they wrote that the best way to assess the economic impact of a new radical technology is to look at how the technology reduces the cost of a widely used function.

For example, the computer revolution can be viewed as being all about the dramatic reductions in the cost of arithmetic calculations. Before the advent of computers, arithmetic was done by humans with the aid of various kinds of devices, from the abacus to mechanical and electronic calculators. Then came digital computers, which are essentially powerful calculators whose cost of arithmetic operations has precipitously declined over the past several decades thanks to Moore’s Law. Over the years, we’ve learned to define all kinds of tasks in terms of such digital operations, e.g., inventory management, financial transactions, word processing, photography. Similarly, the economic value of the Internet revolution can be described as reducing the cost of communications and of search, thus enabling us to easily find and access all kinds of information, including documents, pictures, music and videos.

Viewed through this lens, our emerging AI revolution can be viewed as reducing the cost of predictions, that is, of forecasting what’s likely to happen in the future, based on the explosive growth of big data, powerful and inexpensive computer technologies, and advanced machine learning algorithms. Given the widespread role of predictions in business, government, and our everyday lives, AI is most definitely a GPT that’s already having a big impact on a wide range of applications.

The authors argue that AI, and deep learning in particular, is actually a new kind of research tool which will open up new avenues of inquiry across a broad set of domains: an invention of a method of inventing. Such inventions not only reduce the costs of specific innovation activities, but actually enable a new approach to innovation itself, “altering the playbook in the domains where the new tools are applied.”

Throughout history, scientific revolutions have been launched when new research tools make possible new measurements and observations, e.g., the telescope, the microscope, spectrometers, DNA sequencers. They’ve enabled us to significantly increase our understanding of the natural world around us by collecting and analyzing large amounts of data. Big data and AI learning algorithms are now ushering in such a scientific revolution.

Moreover, these new research tools can be applied to just about any domain of knowledge, given that we can now gather data in almost any area of interest and then analyze the data with increasingly sophisticated AI algorithms. In particular, machine learning methods have great potential in research problems requiring classification and prediction, given their ability to dramatically lower costs and improve performance in R&D projects where these represent significant challenges.
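To make the classification-and-prediction idea concrete, here is a minimal sketch, in plain Python, of the kind of task machine learning automates: predicting the label of a new observation from previously gathered examples. The nearest-neighbor method and the data are illustrative assumptions, not drawn from the authors’ work.

```python
# Minimal nearest-neighbor classifier: predict a label for a new
# observation from labeled examples (illustrative data and method).
import math

def nearest_neighbor(examples, query):
    """Return the label of the example closest to `query` (Euclidean distance)."""
    best_label, best_dist = None, math.inf
    for features, label in examples:
        dist = math.dist(features, query)  # Euclidean distance (Python 3.8+)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical measurements: (feature vector, class label)
training_data = [
    ((1.0, 1.2), "A"),
    ((0.9, 1.0), "A"),
    ((3.1, 2.9), "B"),
    ((3.0, 3.2), "B"),
]

print(nearest_neighbor(training_data, (1.1, 1.1)))  # prints "A"
print(nearest_neighbor(training_data, (3.0, 3.0)))  # prints "B"
```

Real research applications replace this toy rule with deep learning models trained on far larger data pools, but the economic logic is the same: once labeled data exists, the marginal cost of each new prediction becomes very small.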

“On the one hand, AI based learning may be able to substantially automate discovery across many domains where classification and prediction tasks play an important role. On the other, they may also expand the playbook in the sense of opening up the set of problems that can be feasibly addressed, and radically altering scientific and technical communities’ conceptual approaches and framing of problems… The challenge presented by advances in AI is that they appear to be research tools that not only have the potential to change the method of innovation itself but also have implications across an extraordinarily wide range of fields.”

If advances in AI-based learning represent the arrival of a powerful, general purpose research tool, there will likely be significant economic, social, and technological consequences. On the positive side, “the resulting explosion in technological opportunities and increased productivity of R&D seem likely to generate economic growth that can eclipse any near-term impact of AI on jobs, organizations, and productivity.”

However, it’s important to develop policies that enhance innovation in a way that promotes competition and social welfare. “While the underlying algorithms for deep learning are in the public domain (and can and are being improved on rapidly), the data pools that are essential to generate predictions may be public or private, and access to them will depend on organizational boundaries, policy and institutions… This suggests that the proactive development of institutions and policies that encourage competition, data sharing, and openness is likely to be an important determinant of economic gains from the development and application of deep learning.”