How Policy Decisions Could Shape the Development and Implementation of AI

By TAP Staff Blogger

Posted on August 23, 2018


In an article recently written for the World Economic Forum, Rotman School of Management professors Joshua Gans, Avi Goldfarb, and Ajay Agrawal discuss how policy decisions shape the development and implementation of new technologies. Focusing on artificial intelligence (AI), their article, “AI Is Coming, So Economic Policy Needs to Be Ready,” examines two aspects of policy: regulation and the mitigation of potential negative consequences.

Regulatory policy can affect both the speed at which the technology diffuses and the form the technology takes.

Policies aimed at the consequences of technology can address impacts on labor markets as well as antitrust concerns.

Machine learning uses data to make predictions. The biggest constraint on AI in many settings is the ability to acquire useful data. This creates privacy concerns. Therefore, privacy policy has a direct impact on the ability of organisations to build and implement AI. Too little privacy protection means that consumers may be unwilling to participate in market transactions where their data are vulnerable. Too much privacy regulation means that firms cannot use data to innovate.

This means that any government strategy focused on AI – particularly with the aim of fostering a local AI industry – should weigh the potentially conflicting interests of data producers and users, especially with respect to privacy. Perhaps more than any other regulation, rules around privacy are likely to influence the speed and direction of the application of AI in practice.

Policies that address the consequences of AI

A common worry about AI concerns the potential impact on jobs. If machines can do tasks normally requiring human intelligence, will there be jobs left for humans? In our view, this is the wrong question. There are plenty of horrible jobs. Furthermore, more leisure is generally considered to be a positive development, although some have raised concerns about the need to find alternate sources of meaning (Stevenson 2018). The most significant long-run policy issues relate to the potential changes to the distribution of the wealth generated by the widespread use of AI. In other words, AI may increase inequality.

If AI is like other types of information technology, it is likely to be skill-biased. The people who benefit most from AI will be educated people who are already doing relatively well. These people are also more likely to own the machines. Policies to address the consequences of AI for inequality relate to the social safety net.

Another policy question around the diffusion of AI relates to whether it will lead to monopolisation of industry. The leading companies in AI are large in terms of revenue, profits, and especially market capitalisation (high multiples on earnings). … The feature that makes AI different is the importance of data. Firms with more data can build better AI. Whether this leads to economies of scale and the potential for monopolisation depends on whether a small lead early in the development cycle creates a positive feedback loop and a long-run advantage.