The father of the twins is HIV-positive, and in an Associated Press interview, He said that his motivation for the experiment was “to offer couples affected by HIV a chance to have a child that might be protected from a similar fate.” Decades of research have shown that CCR5 encodes a receptor that HIV uses to infect a cell. In fact, a naturally occurring mutation called CCR5-delta 32 is found in about 15 percent of people with European heritage, and individuals with two copies of this mutation are resistant to HIV infection. He used the CRISPR/Cas9 gene-editing tool to disable CCR5 and thereby render Lulu and Nana resistant to HIV infection for life.

CRISPR/Cas9

CRISPR/Cas9 is a relatively new research tool, and more research is needed to understand the boundaries of its safe use in human therapy. For these reasons, editing human genes beyond laboratory cell lines or the embryonic stage of development has been banned in many countries. He proceeded with this work despite a ban in China, and the Chinese government has since ordered an “immediate investigation.” He has confirmed that a second pregnancy with gene-edited embryos is underway, and others could soon follow. Supporters of genetic engineering are excited, but critics of He’s experiments believe that the potential risks outweigh any benefit to be gained. Beyond the fact that this therapeutic use of human gene editing was done in secret, Lulu and Nana will have to live with the consequences as harbingers of an uncertain future. Now that this claim has been made, society must prepare for the possibility of genetically engineered humans.

The Cartagena Protocol

Though the wider impacts will play out over time, an immediate concern comes from the international agreement that guides the use and regulation of genetically modified organisms. The Cartagena Protocol on Biosafety to the Convention on Biological Diversity defines a living modified organism (LMO) as one that “possesses a novel combination of genetic material obtained through the use of modern biotechnology” that is “capable of transferring or replicating genetic material.” Lulu and Nana would fall within the definition of an LMO, and for the first time, most if not all regulations mentioned in the Cartagena Protocol apply to humans.

The Protocol allows states to ban imports of living modified organisms if they feel there is not enough scientific evidence of a product’s safety, and it requires exporters to label shipments containing them. Under these rules, would Lulu and Nana be prevented from leaving China and traveling abroad? As LMOs, would their reproductive rights be regulated for safety reasons under the Cartagena Protocol?

The future of genetically engineered humans requires revisiting international agreements and creating new laws that protect both society and individuals. As that future begins to unfold, the UN Convention on Biological Diversity, scientific societies, and society at large need to come up with solutions. Should the Protocol include a special section with rules and regulations applicable to genetically altered humans, or perhaps exclude them entirely? In its current form, the Cartagena Protocol is inhumane because some of its rules and regulations would violate human rights. For now, we wish the best for Lulu and Nana in the new era that they signal. We hope they will be treated with the respect and dignity that all humanity deserves.

By John Malone, Bart Kolodziejczyk
Last month, MIT Technology Review reported some shocking news: Chinese scientist He Jiankui claims to have led an unprecedented experiment to genetically edit human embryos and see them carried to birth. He’s results have not been independently verified, but if true, the twins named Lulu and Nana would be the world’s first genetically engineered humans.
Tech is (still) concentrating in the Bay Area: An update on America’s winner-take-most economic phenomenon

December 17, 2018
By Mark Muro, Jacob Whiton

The hope persists among tech and urban optimists for what Revolution LLC founder Steve Case calls “the rise of the rest”—the spread of tech companies into the Heartland.

In fact, recent announcements from Amazon, Google, and Apple—which are adding high-level jobs away from Seattle and the Bay Area—encourage such hope, with their hints that the tech giants are increasingly outgrowing their West Coast roots. Maybe Big Tech really is going to take its incessant talent hunt—and economic contributions—into new places and seed wider-spread economic vitality at a time of economic divides.

So what’s the reality when we look closer? Unfortunately, the story isn’t great, despite the recent news. Building on our last look at tech locational trends from March 2017, this new analysis of job creation in four key digital services industries—software publishing, data processing and hosting, computer systems design, and web publishing/search—finds again that while employment in tech is growing all over America, it really isn’t “spreading out” in the sense of more cities gaining increased shares of the tech pie. To the contrary: By our measure, tech has continued to concentrate in a short list of metros during the last few years. The upshot: “Winner-take-most” in tech seems more the rule than the hoped-for “rise of the rest.”

While employment in tech is growing all over America, it really isn’t “spreading out” in the sense of more cities gaining increased shares of the tech pie.

To be clear, tech remains a compelling contributor to regional growth, and is in fact growing in new places. Digital services continue as a critical part of the national economy, and accounted for fully 80 percent of the nation’s advanced industries growth from 2015 to 2017 as employment grew 4.2 percent a year based on compound annual growth rate (CAGR) calculations.
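The CAGR figure quoted above follows from the standard formula; here is a minimal sketch of the computation (the job counts below are invented for illustration, not the article’s data):

```python
def cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate: the constant yearly rate that
    grows `start` into `end` over `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

# Hypothetical sector growing from 100,000 jobs in 2015 to 108,576 in 2017:
print(round(cagr(100_000, 108_576, 2), 3))  # 0.042, i.e., ~4.2 percent a year
```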

Likewise, unexpected Heartland metros far from coastal tech hubs like the Bay Area, Seattle, and Boston surfaced as fast-growing tech centers in the recent period. Among the 100 largest U.S. metros, for example, Wichita, Kan.; Lakeland-Winter Haven, Fla.; Chattanooga, Tenn.; Boise, Idaho; and Orlando, Fla. all posted digital services growth of almost 10 percent a year over the same period. Midwestern stalwarts Kansas City, Mo.-Kan.; Madison, Wisc.; and St. Louis have all seen growth of more than 4 percent a year.

In short, there’s no doubt that tens of thousands of digital services jobs—central to the current artificial intelligence-driven tech boom—are sprouting up in more up-and-coming inland towns and bringing with them growth, hope, good pay, and attractive multiplier effects.

But, even though more cities are enjoying the growth of tech jobs, the sector is in fact concentrating even faster than it was a few years ago. This dynamic may reflect the rising importance of early-stage work in AI and machine learning. Or it might reflect the depressing persistence of groupthink. But at any rate, the numbers are eye-popping.

The top five metros with the highest share of digital services account for 28 percent of all such jobs nationwide, and the top 10 metros now encompass 44.3 percent of these jobs across the nation (based on their national shares of such sectors in 2017). The same top 10 metros captured almost half (49.1 percent) of the new tech jobs created from 2015 to 2017, with eight of them—including San Francisco, Seattle, San Jose, Los Angeles, and Austin—increasing their share of the nation’s tech work. Those five named metros alone captured 34 percent of all new digital services job growth and increased their share of the nation’s core tech employment by 1.2 percentage points.
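Concentration figures like these are easy to reproduce for any job-count table; a minimal sketch with invented numbers (not the article’s data):

```python
def top_share(jobs_by_metro: dict[str, int], n: int) -> float:
    """Fraction of all jobs held by the n metros with the most jobs."""
    counts = sorted(jobs_by_metro.values(), reverse=True)
    return sum(counts[:n]) / sum(counts)

# Hypothetical digital services job counts by metro:
jobs = {"A": 500, "B": 300, "C": 120, "D": 50, "E": 30}
print(round(top_share(jobs, 2), 2))  # 0.8 — the top two metros hold 80 percent
```

A change in concentration is then simply the difference in this share between two years, expressed in percentage points.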

Consider further that the super-rich of tech—epitomized by San Francisco and San Jose—got even richer in the last two years. San Francisco alone added over a tenth of the entire nation’s new digital services jobs (over 25,000), and San Jose grew its piece of the nation’s sector by nearly 18,000 jobs. Together, the two Bay Area hubs now encompass 10.7 percent of the nation’s digital services employment, up from 10.1 percent in 2015, 8.9 percent in 2013, and 7.5 percent in 2010. Note too that virtually all of Amazon’s and Apple’s newly announced workforce expansions will take place in the 10 biggest of America’s “superstar” metros.

Only a few cities in the rest of the country truly “rose” in the last couple years by expanding their share of the nation’s digital services employment.

Notably, just nine of the largest 100 metros in the nation increased their share of the sector by more than one-tenth of a percentage point. These “winners” of the last few years included San Francisco, Seattle, San Jose, Los Angeles, Austin, Denver, Orlando, Kansas City, and Charlotte.

With that said, 31 more cities at least increased their share of the nation’s digital services tech sector, albeit by less than one-tenth of a percentage point. This group was led by Portland, Ore., and included up-and-coming coastal, interior, Midwestern, and Southern centers like Salt Lake City; Atlanta; Charleston, S.C.; San Diego; Nashville, Tenn.; Raleigh, N.C.; Provo, Utah; Grand Rapids, Mich.; Madison, Wisc.; and Greenville, S.C. Although many of these cities made steady progress, they are not significantly increasing their share of the national digital services sector or demonstrating significant competitiveness.

Another 60 cities actually lost share of the sector due to slow or negative growth. This list included numerous larger metros. In this regard, the metros with the largest digital services employment-share declines between 2015 and 2017 include such hot tech stories as Washington, D.C. and New York (which saw their shares of the national industry slip by 0.3 and 0.2 percentage points, respectively) as well as Houston, Philadelphia, and Dallas (all of which saw their shares slip by 0.2 points). Washington and New York, of course, needn’t worry too much about the future given Amazon’s recent decision to commence significant hiring for two new “headquarters” facilities. Nevertheless, the large number of places that lost share from 2015 to 2017 gives pause about dozens of important U.S. cities.

In short, these new data on the geography of tech are disconcerting for those thinking the U.S. would do better with a more balanced economic map.

Even while tech continues to raise hopes for broad transformation, it is continuing to reflect—and drive—the winner-take-most nature of the American economy.

The folly of trolleys: Ethical challenges and autonomous vehicles

December 17, 2018
By Heather M. Roff

Introduction

Often when anyone hears about the ethics of autonomous cars, the first thing to enter the conversation is “the Trolley Problem.” The Trolley Problem is a thought experiment in which someone is presented with two situations that present nominally similar choices and potential consequences (Foot 1967; Kamm 1989; Kamm 2007; Otsuka 2008; Parfit 2011; Thomson 1976; Thomson 1985; Unger 1996). In Situation A (known as Switch), a runaway trolley is barreling down a track and will run into and kill five workmen unless an observer flips a switch and diverts it down a sidetrack where it will kill only one workman. In Situation B (known as Bridge), the observer is crossing a bridge, where she sees that the five will be killed unless she pushes a rather large and plump individual off the bridge onto the tracks below, thereby stopping the trolley and saving the five. Most philosophers agree that it is morally permissible to kill the one in Switch, but many (including most laypeople) think that it is impermissible to push the plump person in Bridge (Kagan 1989). The two cases have the same outcome: kill one to save five. This discrepancy in intuition has led to much spilled ink over “the problem” and spawned an entire field of inquiry, “Trolleyology.”

Applied to autonomous cars, at first glance, the Trolley Problem seems like a natural fit. Indeed, it could be the sine qua non ethical issue for philosophers, lawyers, and engineers alike. However, if I am correct, the introduction of the Trolley is more like a death knell for any serious conversation about ethics and autonomous cars. The Trolley Problem detracts from understanding of how autonomous cars actually work, how they “think,” how much influence humans have over their decision-making processes, and the real ethical issues that face those advocating the advancement and deployment of autonomous cars in cities and towns the globe over. Instead of thinking about runaway trolleys and killing bystanders, we should start from a better grounding in the technology itself. Once we have that, we see how new—more complex and nuanced—ethical questions arise, ones that look very little like trolleys.

I argue that we need to understand that autonomous vehicles (AVs) will be making sequential decisions in a dynamic environment under conditions of uncertainty. Once we understand that the car is not a human, and that the decision is not a single-shot, black-and-white one but one made at the intersection of multiple overlapping probability distributions, we will see that “judgments” about what courses of action to take are going to be not only computationally difficult but highly context-dependent and, perhaps, unknowable by a human engineer a priori. Once we disabuse ourselves of thinking the problem is an aberrant one-off ethical dilemma, we can begin to interrogate the foreseeability of other types of ethical and social dilemmas.

The Trolley Problem detracts from understanding about how autonomous cars actually work, how they “think,” and the real ethical issues that face those advocating the advancement and deployment of autonomous cars.

This paper is organized into three parts. Part one argues that we should look to one promising area of robotic decision theory and control—Partially Observable Markov Decision Processes (POMDPs)—as the most likely mathematical model an autonomous vehicle will use.1 After explaining what this model does and looks like, part two argues that the functioning of such systems does not comport with the assumptions of the Trolley Problem. This entails, then, that viewing ethical concerns about AVs from this perspective is incorrect and will blind us to more pressing concerns. Finally, part three argues that we need to interrogate what we decide are the objective and value functions of these systems. Moreover, we need to make transparent how we use these mathematical models to get the systems to learn and to make value trade-offs. For it seems absurd to cast an AV as the bearer of a moral obligation not to kill anyone, but it is not absurd to interrogate the engineers who chose various objectives to guide their systems. I conclude with some observations about the types of ethical problems that will arise with AVs, none of which has the form of the Trolley.

I. It’s all about the POMDP

A Partially Observable Markov Decision Process (POMDP) is a variant of a Markov Decision Process (MDP). The MDP is a useful mathematical model for various control and planning problems in engineering and computing. For instance, the MDP is useful in an environment that is fully observable, has discrete time intervals, and offers few choices of action in any given condition (Puterman 2005). We could think of an MDP model being useful in something like a game of chess or tic-tac-toe. The algorithm knows the environment fully (the board, pieces, rules) and waits for its opponent to make a move. Once that move is made, the algorithm can calculate all the potential moves in front of it, discount future moves, and then take the “best” or “optimal” counter-decision.
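The “best” or “optimal” decision in a fully observable MDP can be computed by value iteration over the Bellman equation; a minimal sketch on an invented two-state problem (the states, rewards, and transition probabilities are illustrative only, far simpler than chess):

```python
# T[s][a] -> list of (probability, next_state, reward) outcomes.
T = {
    0: {"stay": [(1.0, 0, 0.0)], "go": [(0.9, 1, 1.0), (0.1, 0, 0.0)]},
    1: {"stay": [(1.0, 1, 2.0)], "go": [(1.0, 0, 0.0)]},
}
GAMMA = 0.9  # discount factor on future reward

def q(s, a, V):
    """Expected discounted return of taking action a in state s."""
    return sum(p * (r + GAMMA * V[s2]) for p, s2, r in T[s][a])

V = {s: 0.0 for s in T}
for _ in range(200):  # repeat Bellman backups until (near) convergence
    V = {s: max(q(s, a, V) for a in T[s]) for s in T}
policy = {s: max(T[s], key=lambda a: q(s, a, V)) for s in T}
print(policy)  # {0: 'go', 1: 'stay'}
```

In chess-scale problems the state space is vastly larger, but the principle, picking the action with the highest expected discounted return, is the same.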

Unfortunately, many real-world environments are not like tic-tac-toe or chess. Moreover, when we have robotic systems like AVs, even with many sensors attached to them, the system itself cannot have complete knowledge of its environment. Knowledge is incomplete due to limitations in the range and fidelity of the sensors, occlusions, and latency (by the time the sensor readings are processed, the continuous, dynamic environment will have changed). Moreover, a robot in this situation makes a decision using its current observations as well as a history of previous actions and observations. In more precise terms, a system is measuring everything it can at a particular state (s), and the finite set of states S = {s1, …, sn} is the environment. When a system observes itself in s and takes an action a, it transitions to a new state, s’, where it can take a further action a2 (s’, a2). The set of possible actions is A = {a1, …, ak}. Thus at any given point, a system is deciding which action to take based on its present state, its prior state (if there is one), and its expected future transitioned state. The crucial difference here is that a POMDP operates in an environment where the system (or agent) has incomplete knowledge and uncertainty and works from probabilities; in essence, an AV would not be working from an MDP but more than likely from a POMDP.

How does a system know which action to take? There are a variety of potential ways, but I will focus on one here: reinforcement learning. For a POMDP using reinforcement learning, the autonomous vehicle learns through receipt of a reward signal. Systems like this that use POMDPs have reward (or sometimes ‘cost’) signals that tell them which actions to pursue (or avoid). But these signals are based on probability distributions over which acts in the current state, s, will lead to more rewards in a future state, sn, discounted for future acts. Let’s take an example to explain in simpler terms. I may reason that I am tired but need to finish this paper (state), so I could decide to take a nap right now (an act in my possible set of actions) and then get up later to finish the paper. However, I do not know whether I will have restful sleep, oversleep, or feel worse when I get up, thereby frustrating my plans for the paper even more, though the thought of an immediate nap offers an immediate reward (sleep and rest). The greedy decision would be to sleep right now, but that is not the optimal decision (or, in POMDP speak, “policy”). Rather, the POMDP requires that one pick the optimal policy under conditions of uncertainty, for sequential decision tasks, discounted for future reward states. In short, the optimal policy would be for me to instead grab a cup of coffee, finish my paper, and then go to bed early, because I will actually maximize my total amount of sleep by not napping during the day and getting my work done quickly.
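The nap example can be made concrete as a comparison of expected discounted returns; the probabilities and rewards below are invented purely to mirror the story:

```python
GAMMA = 0.9  # future reward counts a little less than immediate reward

def expected_return(outcomes):
    """Expected discounted return over probabilistic outcomes.
    Each outcome is (probability, [reward at t=0, reward at t=1, ...])."""
    return sum(p * sum(r * GAMMA**t for t, r in enumerate(rewards))
               for p, rewards in outcomes)

# "Nap now": an immediate reward, but a 50/50 chance of oversleeping and
# frustrating the paper even more.
nap = [(0.5, [2.0, 1.0]),    # restful nap, paper finished late
       (0.5, [2.0, -3.0])]   # overslept, paper in trouble
# "Coffee, finish the paper, sleep early": nothing now, a reliable payoff later.
coffee = [(1.0, [0.0, 4.0])]

print(expected_return(coffee) > expected_return(nap))  # True
```

The immediate reward of the nap is outweighed, in expectation, by the discounted but more reliable payoff of finishing the paper first.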

Yet the learning really only takes place once a decision is made and there is feedback to the system. In essence, there is a requirement of some sort of system memory about past behavior and reward feedback. How robust this memory needs to be is specific to tasks and systems, but what we can say at a general level is that the POMDP needs to have a belief about its actions based on posterior observations and their corresponding distributions.2 In short, the belief “is a sufficient statistic of the history,” without the agent actually having to remember every single possible action and observation (Spaan 2012).
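That sufficient-statistic property is what makes the belief update tractable: the new belief depends only on the old belief, the action taken, and the latest observation, not the whole history. A minimal Bayes-filter sketch for an invented two-state model (the transition and observation probabilities are illustrative only):

```python
# T[a][s][s2]: probability of moving from state s to s2 under action a.
# O[s2][o]:   probability of receiving observation o while in state s2.
T = {"wait": [[0.9, 0.1], [0.2, 0.8]]}
O = [[0.8, 0.2], [0.3, 0.7]]

def update_belief(belief, action, obs):
    """Predict through the transition model, then weight by the
    likelihood of the observation and renormalize."""
    predicted = [sum(belief[s] * T[action][s][s2] for s in range(2))
                 for s2 in range(2)]
    weighted = [O[s2][obs] * predicted[s2] for s2 in range(2)]
    total = sum(weighted)
    return [w / total for w in weighted]

b = update_belief([0.5, 0.5], "wait", 1)  # start uncertain, then observe o=1
print(b[1] > b[0])  # True — the observation shifts belief toward state 1
```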

Yet to build an adequate learning system, there is a need for many experiences to build robust beliefs. In my case of fatigue and academic writing, if I did not have a long history of experiences with writing academic papers while fatigued, one might think that whatever decision I take is at best a random one (that is, it’s 50/50 as to whether I act on my best policy). Yet, since I have a very long history of academic paper writing and fatigue, as well as the concomitant rewards and costs of prior decisions, I can accurately predict which action will actually maximize my reward. I know my best policy. This is because policies, generally, map beliefs to actions (Sutton and Barto 1998; Spaan 2012).3 Mathematically, what we really mean is that a policy, π, is a continuous set of probability distributions over the entire set of states (S). And an optimal policy is the policy that maximizes my expected reward.

That function then becomes a value function: a function of how an agent’s action at its initial belief state (b0) updates its expected reward return once it receives feedback and revises its beliefs about various states and observations, thereby improving its policy. It continues this pattern, again and again, until it can better predict which acts will maximize its reward. Ostensibly, this structure permits a system to learn how to act in a world that is uncertain, noisy, messy, and not fully observable. The role of the engineer is to define the task or objective in such a way that the artificial agent following a POMDP model can take a series of actions and learn which ones correspond with the correct observations about its state of the world, and act accordingly despite uncertainty.

The AV world will undoubtedly make greater use of POMDPs in its software architectures. While there is a myriad of computational techniques to choose from, probabilistic ones like POMDPs have proven themselves among the leading candidates for building and fielding autonomous robots. As Thrun (2000) explains, “the probabilistic approach […] is the idea of representing information through probability densities” around the areas of perception and control, and “probabilistic robotics has led to fielded systems with unprecedented levels of autonomy and robustness.” For instance, Cunningham et al. recently used POMDPs to create a multi-policy decision-making process for autonomous driving that estimated when to pass a slower vehicle as well as how to merge into traffic, taking into account driving preferences such as reaching goals quickly and rider comfort (Cunningham et al. 2015).

It is well known, however, that POMDPs are computationally expensive, and that as the complexity of a problem increases, some problems may become intractable. To account for this, most researchers make certain assumptions about the world and the mathematics to make the problems computable, or they use heuristic approximations to ease policy searches (Hauskrecht 2000). Yet when one approximates, in any sense, there is no guarantee that a system will act in a precisely optimal manner. However we manipulate the mathematics, we pay a cost in one domain or another: either in computation or in the pursuit of the best outcomes.

There is no guarantee that something completely novel and unforeseen will not confuse an AV or cause it to act in unforeseeable ways.

The important thing to note in all of this discussion is that any AV running on a probabilistic method like a set or architecture of POMDPs is going to be doing two things. First, it is not making “decisions” with complete knowledge, as in the Trolley case. Rather, it is acting on the probability that some act will change a state of affairs. In essence, decisions are made in dynamic conditions of uncertainty. Second, this means that for AVs controlled by a learning system to operate effectively, substantial numbers of “episodes,” or training runs in various environments under various similar and different situations, are required for them to make “good” decisions when they are fielded on a road. That is, they need to be able to draw from a very rich history of interactions and extrapolate from it in a forward-looking and predictive way. However, since they are learning systems, there is no guarantee that something completely novel and unforeseen will not confuse an AV or cause it to act in unforeseeable ways. In the case of the Trolley, we can immediately begin to see the differences between how a bystander may reason and how an AV does.

II. The follies of trolleys

Patrick Lin (2017) defends the use of the Trolley Problem as an “intuition pump” to get us to think about what sorts of principles we ought to be programming into AVs. He argues that using thought experiments like this “isolates and stress-tests a couple of assumptions about how driverless cars should handle unavoidable crashes, as rare as they might be. It teases out the questions of (1) whether numbers matter and (2) whether killing is worse than letting die.” Additionally, he notes that because driverless cars are the creations of humans over time, “programmers and designers of automated cars […] do have the time to get it right and therefore bear more responsibility for bad outcomes,” thereby bootstrapping some resolution to whether there was sufficient intentionality for the act to be judged as morally right or wrong.

While I agree with Lin’s assessment that many cases in philosophy are designed not for real-world scenarios but to isolate and press upon our intuitions, this does not mean that they are well suited for all purposes. As Peter Singer notes, reducing “philosophy…to the level of solving the chess puzzle” is rather unhelpful, for “there are things that are more important” (Singer 2010). We need to take special care to see the asymmetries between cases like the Trolley Problem and algorithms that are not moral agents but make morally important decisions. The first and easiest way to see this is to acknowledge that an AV utilizing something like a POMDP in a dynamic environment is not making a decision at one point in time but is making sequential decisions. It is making a choice based on a set of probability distributions about which act will give it the highest reward (or minimize the most cost) based upon prior knowledge, present observations, and likely future states. The Trolley cases, in which there is one decision to make at one point in time, simply do not reflect how autonomous cars actually operate.

Second, and more bluntly, we’d have to model trolley-like problems in a variety of situations, and train the system in those situations (or episodes) hundreds, maybe thousands, of times to get it to learn what to do in that instance. It would not just magically make the “right” decision in that instance, because the math and the prior set of observations would not, in fact could not, permit it to do so. We actually have to pre-specify what “right” is for it to learn what to do. This is because the types of algorithms we use are optimizing by their nature. Such a system seeks the optimal strategy for maximizing its reward function, and this learning, by the way, means that it needs to make many mistakes.4 For instance, one set of researchers at Carnegie Mellon University declined to use simulations to teach an autonomous aerial vehicle to fly and navigate. Rather, they allowed it to crash, over 11,500 times, to learn simple self-supervised policies for navigation (Gandhi, Pinto and Gupta 2017). Indeed, this learning-by-doing is exactly what much of the testing of self-driving cars in real road conditions is also directed towards: real-life learning and not mere simulation. Yet we are not asking the cars to go crashing into people or to choose whether it is better to kill five men or five pregnant women.5
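A minimal sketch can illustrate why “right” must be pre-specified. The task, the two actions, and the reward numbers below are all invented for illustration: a simple value learner only discovers whichever action we chose to reward more, and only after many episodes of trial and error.

```python
import random

random.seed(0)

# Two candidate actions. WE define which one is "right" by assigning it the
# higher reward; the learner has no independent access to morality.
reward = {"brake": 1.0, "swerve": 0.2}   # hypothetical reward design

q = {"brake": 0.0, "swerve": 0.0}        # estimated value of each action
alpha, epsilon = 0.1, 0.2

for episode in range(1000):              # many episodes, not one-shot insight
    if random.random() < epsilon:
        action = random.choice(list(q))  # explore (i.e., make "mistakes")
    else:
        action = max(q, key=q.get)       # exploit the current estimate
    # Incrementally move the estimate toward the observed reward.
    q[action] += alpha * (reward[action] - q[action])

# After training, the learner simply mirrors the reward we specified.
```

The point of the sketch is that the learned preference for braking is not a discovery about ethics; it is an echo of the reward function the designer wrote down in the first line.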

Moreover, even if one decided to simulate these Trolley cases again and again, and diversify them to some sufficient degree, we must acknowledge the simple but strict point that unless one already knows the right answer, the math cannot help. I am also hard pressed to find philosophers who, over 2,000 years, have all agreed on the one way of living and the correct moral code, or even to find agreement on what to do in the Trolley Problem.6 What is even worse is that if we take the opinion that our intuitions ought to guide us in finding data for these moral dilemmas, we will not, in fact, find reliable data. This can easily be seen with two simple examples showing that people do not in fact act consistently: the Allais Paradox and the Ellsberg Paradox. Both of these paradoxes challenge the basic axioms that Von Neumann and Morgenstern (1944) posited for their theory of expected utility. Expected utility theory basically states that people will choose an outcome based on whether that outcome’s expected utility is higher than all other potential outcomes. In short, it means people are utility maximizers.7 In the Allais Paradox, we find that in a given experiment people fail to act consistently to maximize their utility (or achieve preference satisfaction), and thus they violate the substitution axiom of the theory (Allais 1953). In the Ellsberg Paradox, people end up choosing even when they cannot actually infer the probabilities that will maximize their preferences, thus violating the axioms of completeness and monotonicity (Ellsberg 1961).
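The inconsistency in the Allais experiment is easy to check arithmetically. Using the classic payoffs (stated here in millions of dollars), an expected-value maximizer must prefer the risky option in both pairs, yet most subjects choose the safe option in the first pair and the risky one in the second:

```python
def expected_value(lottery):
    """lottery: list of (probability, payoff-in-millions) pairs."""
    return sum(p * x for p, x in lottery)

# The classic Allais lotteries (payoffs in $ millions).
g1a = [(1.00, 1)]                        # $1M for certain
g1b = [(0.10, 5), (0.89, 1), (0.01, 0)]  # risky alternative
g2a = [(0.11, 1), (0.89, 0)]
g2b = [(0.10, 5), (0.90, 0)]

# An expected-value maximizer prefers 1B and 2B...
assert expected_value(g1b) > expected_value(g1a)   # 1.39 > 1.00
assert expected_value(g2b) > expected_value(g2a)   # 0.50 > 0.11
# ...yet most subjects choose 1A and 2B, violating the substitution axiom:
# no single utility function over outcomes can rationalize both choices.
```

This is the sense in which intuitions make for unreliable training data: the observed choice pattern cannot be reproduced by any consistent maximizer at all.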

One may object here and claim that utilitarianism is not the only moral theory, and that we do not in fact want utility-maximizing self-driving cars. We’d rather have cars that respect rights and lives, more akin to a virtue ethics or deontological approach to ethics. But if that is so, then we have done away with the need for Trolley Problems at the outset. If that is true, it is impermissible to kill anyone, despite the numbers. Or we merely state a priori that lesser-evil justifications win the day, and thus in extremis we have averted the problem (Kamm 2007; Frowe 2015). Or, if we grant that self-driving cars, relying on a mathematical descendant of classic act utilitarianism, end up calculating as an act utilitarian would, then there appears to be no problem–the numbers win the day. Wait, wait, one responds, this is all too quick. Clearly we feel that self-driving cars ought not to kill anyone, the Trolley Problem stands, and they still might find themselves in situations where they have no choice but to kill someone, so who ought it to be?

Here again, I note that we are stuck in an uncomfortable position vis-à-vis the need for data and training versus the need to know what morality dictates: do we want to model moral dilemmas or do we want to solve them? If it is the former, we can do this indefinitely. We can model moral dilemmas and ask people to partake in experiments, but that only tells us the empirical reality of what those people think. And that may be a significantly different answer than what morality dictates one ought to do. If it is the latter, I am still extremely skeptical that this is the right framework for discussing the ethical quandaries that arise with self-driving cars. Perhaps the Trolley Problem is nothing more than an unsolvable distraction from the question of safety thresholds and other types of ethical questions regarding the second- or third-order effects of automotive automation in society.8

Indeed, if I am correct, then the entire setup of a moral dilemma for a non-moral agent to “choose” the best action is a false choice, because there is no one choice that any engineer could foreseeably plan for. What is more, even if the engineer were to exhibit enough foresight and build a learning system that could pick up on subtle clues from interactions with the environment and massive amounts of data, this assumes that we have actually figured out which action is the right one to take! We’ve classified data as “good” or “bad” and fed that to a system. Yet we, as human moral agents, haven’t decided this at all, for there is debate in each situation about what one ought to do, as well as uncertainty. Trolley Problems are constructed in such a way that the agent has one choice, and knows with certainty what will happen should she make that choice. Moreover, that choice is constructed as a dilemma: it appears that no matter what choice she makes, she will end up doing some form of wrong. Under real-world driving conditions, this will rarely in fact be the case. And if we attempt to find a solution through the available technological means, all we have done is to show a huge amount of data to the system and have it optimize its behavior for the task it has been assigned to complete. If viewed in this way, modeling moral dilemmas as tasks and optimization appears morally repugnant.

More importantly for our purposes here, we must be explicitly clear that AI is not human. Even if an AI were a moral agent (and we agreed on what that looked like), the anthropomorphism presumed in the AV Trolley case is actually blinding us to some of the real dangers. For in classic moral philosophy Trolley cases, we assume from the outset: (i) that there is a moral agent confronted with the choice; (ii) that this moral agent is self-aware, with a history of acting in the world, understands concepts, and possesses enough intelligence to contextually identify when trifling constraints are trumped by significant moral ones; and (iii) that this intelligence can, in some sense, balance or measure seemingly (or truly) incommensurable goods and conflicting obligations. Moreover, as Barbara Fried (2012) summarizes about the structure of Trolley Problems:

The hypotheticals typically share a number of features beyond the basic dilemma of third party harm/harm tradeoffs. These include: that the consequences of the available choices are stipulated to be known with certainty ex ante; that the actors are all individuals (as opposed to institutions); that the would-be victims (of the harm we impose by our actions or allow to occur by our inaction) are generally identifiable individuals in close proximity to the would-be actor(s); and that the causal chain between act and harm is fairly direct and apparent. In addition, actors usually face a one-off decision about how to act. That is to say, readers are typically not invited to consider the consequences of scaling up the moral principle by which the immediate dilemma is resolved to a large number of (or large-number) cases.

Yet not only are all the attributes noted above well beyond the present-day capabilities of any AI system; the situation in which an AV operates also fails to comport with any and all of the assumptions in trolley-like cases (Roff 2017). There is a disjuncture between saying that humans will “program” the AV to make the “correct” moral choice, thereby bootstrapping the Trolley case to AVs, and claiming that an AV is a learning automaton sufficiently capable of making morally important decisions on the order of the Trolley problem. Moreover, we cannot just throw up our hands and ask what the best consequences will render, for in that case there is no real “problem” at issue: if one is a consequentialist, save the five over the one, no questions asked.

It is unhelpful to continue to insist that the Trolley Problem exhausts the moral landscape of ethical questions with regard to AVs and their deployment. All AI can do is bring into relief existing tensions in our everyday lives that we tend to assume away. This may be due to our human limitations in seeing underlying structures and social systems, because we cannot take in such large amounts of data. AI, however, is able to find novel patterns in large data and plan based on that data. This data may reflect our biases, or it may simply be an aggregation of whatever situations the AV has encountered. The only thing AVs require of humans is to make explicit what tasks we require of them and what the rewards and objectives are; we do not require the AV to tell us that these are our goals, rewards and objectives. Unfortunately, this distinction is not something that is often made explicit.

Rather, the debate often oscillates between whether the human agents ought to “program” the right answer, or whether the learning system can in fact arrive at the morally correct answer. If it is the former, we must admit it ignores the fact that a learning system does not operate in such straightforward terms. It is a learning system that will be constrained by its sensors, its experience, and the various architectures and sub-architectures of its system. But it will be acting in real time, away from its developers, and in a wide and dynamic environment, and so the human(s) responsible for its behavior will have, at best, mediated and distanced (if any) responsibility for that system’s behaviors (Matthias 2004).

If it is the latter, I have attempted to show here that the learning AV does not possess the types of qualities relevant to the aberrant Trolley case. Even if we were to train it on large numbers of variants of the Trolley case, there will always be situations that arise that may not produce the estimated or intended decision by the human. This is simple mathematics. For one can only make generalizations about behaviors—particularly messy human behaviors that may give rise to AV Trolley-like cases—when there is a significantly large dataset (we call this the law of large numbers). Unfortunately, this means that there is no way to know what one individual data point (or driver) will do in any given circumstance. And the Trolley case is always the unusual individual outlier. So, while there is value to be found in thinking through the moral questions related to Trolley-like cases, there are also limits, particularly with regard to decision weights, policies and moral uncertainty.
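The statistical point can be illustrated directly. In the simulation below, the event probability and trip count are invented for illustration: the aggregate rate of a rare, trolley-like event stabilizes near its true value, while nothing about the aggregate tells us which individual trip produces the outlier.

```python
import random

random.seed(42)

# Simulate a rare "trolley-like" event occurring with 0.1% probability per trip
# (a hypothetical rate chosen only to make the point).
p_event = 0.001
trips = [random.random() < p_event for _ in range(100_000)]

observed_rate = sum(trips) / len(trips)
# The aggregate rate lands close to 0.001 (the law of large numbers at work),
# but no amount of data predicts WHICH individual trip, driver, or data point
# will be the one that produces the outlier event.
```

Generalizations about the population are cheap; predictions about the single aberrant case, which is exactly what the Trolley scenario asks about, are not available at all.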

III. The value functions

If we agree that the Trolley Problem offers little guidance on the wider social issues at hand, particularly the value of a massive technological change and the scientific research behind it, then we can begin to acknowledge the wide-ranging issues that society will face with autonomous cars. As Kate Crawford and Ryan Calo (2016) explain, “autonomous systems are [already] changing workplaces, streets and schools. We need to ensure that those changes are beneficial, before they are built further into the infrastructure of everyday life.” In short, we need to identify the values that we want to actualize through the engineering, design and deployment of technologies like self-driving cars. There is thus a double entendre at work here: we know that the software running these systems will be trying to maximize their value functions, but we also need to ensure that they are maximizing society’s too.

So what are the values that we want to maximize with autonomous cars? Most obviously, we want cars to be better drivers than people. With over 5.5 million crashes per year and over 30,000 deaths in the U.S. alone, safety appears to be the primary motivation for automating driving. Over 40 percent of fatal crashes involve “some combination of alcohol, distraction, drug involvement and/or fatigue” (Fagnant and Kockelman 2015). That means that if everyone were using self-driving vehicles, at least in the U.S., there could be a reduction of at least 12,000 fatalities per year. Ostensibly, saving lives is a paramount value.9
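The rough arithmetic behind that figure is a back-of-the-envelope estimate, assuming automation eliminates only the impairment-related share of fatal crashes:

```python
annual_us_fatalities = 30_000   # lower-bound figure cited above
impaired_share = 0.40           # crashes involving alcohol, distraction,
                                # drugs, and/or fatigue (Fagnant and
                                # Kockelman 2015)

lives_saved = annual_us_fatalities * impaired_share
# Roughly 12,000 fewer deaths per year, at minimum, if those crashes
# were eliminated by full automation.
```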

But exactly how this occurs, as well as the attendant effects of policies, infrastructure choices, and technological development, are all value-laden endeavors. There is not simply an innovative technological “fix” here. We cannot “encode” ethics and wash our hands of it. The innovation, rather, needs to come from the intersection of the humanities, social sciences, and policy, working alongside engineering. This is because the values that we want to uphold must first be identified, contested, and understood. Richard Feynman famously wrote, “What I cannot create, I do not understand.” Meaning, we cannot create, or perhaps better, recreate, those things of which we are ignorant.

Indeed, I would go so far as to push Crawford and Calo in their use of the word “infrastructure” and suggest that it is in fact the normative infrastructure that is of greatest importance. Normative here has two meanings that we ought to keep in mind: (i) the philosophical or moral “ought;” and (ii) the Foucauldian “normalization” approach that identifies norms as those concepts or values that seek to control and judge our behavior (Foucault 1975). These are two very different notions of “normative,” but both are crucially important for the identification of value and the creation of value functions for autonomous technologies.

From the moral perspective, one must be able to identify all those moral values that ought to be operationalized not merely in the autonomous vehicle system, but in the adjudication methods that one will use when these values come into conflict. This is not, as some might think, a return to the Trolley Problem. Rather, it is a technological value choice about how one decides to design a system to select a course of action. In multi-objective learning systems, situations often arise in which objectives (that is, tasks or behaviors to accomplish) conflict with one another, are correlated, or are even endogenous. The engineer must find a way of prioritizing particular objectives or create a system for tradeoffs, such as whether to conserve energy or to maintain comfort (Moffaert and Nowé 2014).10 How they do so is a matter of mathematics, but it is also a choice about whether they are privileging particular kinds of mathematics that in turn privilege particular kinds of behaviors (such as satisficing).
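One common way such tradeoffs are engineered is linear scalarization, in which several objective scores are collapsed into a single reward by a weighted sum. The policies, objectives, scores, and weights below are invented for illustration; the point is that choosing the weights is itself the value choice:

```python
def scalarize(objective_scores, weights):
    """Collapse several objective scores into one reward via a weighted sum.
    Choosing the weights IS the value judgment the engineer makes."""
    return sum(weights[k] * score for k, score in objective_scores.items())

# Hypothetical scores for two candidate driving policies (0..1 scale).
policy_a = {"energy_saving": 0.9, "comfort": 0.3}
policy_b = {"energy_saving": 0.4, "comfort": 0.8}

# Two different weightings encode two different sets of priorities.
eco_weights     = {"energy_saving": 0.8, "comfort": 0.2}
comfort_weights = {"energy_saving": 0.2, "comfort": 0.8}

# The "best" policy flips depending on which values are privileged.
best_eco     = max([policy_a, policy_b], key=lambda p: scalarize(p, eco_weights))
best_comfort = max([policy_a, policy_b], key=lambda p: scalarize(p, comfort_weights))
```

Nothing in the mathematics selects the weights; that selection happens before the optimization begins, which is precisely where the ethical work resides.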

Additionally, shifting focus away from tragic and rare events like Trolley cases allows us to open up more systemic and “garden variety” problems that we need to consider for reducing harm and ensuring safety. As Allen Wood (2011) argues, most people would never have to face a Trolley case if there were safer trolleys, no access to switches for passersby, and good signage to “prevent anyone from being in places where they might be killed or injured by a runaway train or trolley.” In short, we need to think about use, design, and interaction in the daily experience of consumers, users, and bystanders of the technology. We must understand how AVs could change the design, layout and make-up of cities and towns, and what effects those may have on everything from access to basic resources to increasing forms of inequality.

From the Foucauldian perspective, things become slightly more interesting, and this is where I think many of the ethical concerns begin to come into view. The norms that govern how we act, the assumptions we make about the appropriateness of the actions or behaviors of others, and the value that we assign to those judgments are here matters of empirical assessment (Foucault 1971; Foucault 1975). For instance, public opinion surveys are conduits telling us what people “think” about something. Less obvious, however, are the ways in which we subtly adjust our behavior, without speaking or in some instances even thinking, in response to cultural and societal cues. These are the kinds of norms Foucault is concerned about. These kinds of norms are the ones that show up in large datasets, in biases, in “patterns of life.” And it is these kinds of norms, which are the hardest ones to identify, that are the stickiest ones to change.

How this matters for autonomous vehicles lies in the assumptions that engineers make about human behavior, human values, or even what “appropriate” looks like. From a value-sensitive design (VSD) standpoint, one may consider not only the question of lethal harm to passengers or bystanders, but a myriad of values like privacy, security, trust, civil and political rights, emotional well-being, environmental sustainability, beauty, social capital, fairness, and democratic value. For VSD seeks to encapsulate not only the conceptual aspects of the values a particular technology will bring (or affect), but also how “technological properties and underlying mechanisms support or hinder human values” (Friedman, Kahn, Borning 2001).

But one will note that in all of these choices, Trolley Problems have no place. For instance, many of the social and ethical implications of AVs can be extremely subtle or simply obvious. Note the idea recently circulated by the firm Deloitte: AVs will impact retail and goods delivery services (Deloitte 2017). As it argues, retailers will attempt to use AVs to increase catchment areas, provide higher levels of customer service by sending cars to customers, cut down on delivery time, or act as “neighborhood centers” akin to a mobile corner store that delivers goods to one’s home. In essence, retailers can better cater to their customers and “nondrivers are not ‘forced’ to take the bus, subway, train or bike anymore […] and this will impact footfall and therefore (convenience) stores” (Deloitte 2017).

Yet this foreseen benefit from AVs may only apply to already affluent individuals living in reasonable proximity to affluent retail outlets. It certainly will struggle to find economic incentives in “food deserts,” where low-income individuals without access to transport live at increasingly difficult distances from supermarkets or grocery stores (Economic Research Service U.S. Department of Agriculture 2017). That these individuals currently do not possess transport and lack access to fresh foods and vegetables does not bode well for their ability to pay for automation and delivery, or perhaps for the increased prices of the luxury of being ferried to and fro. This may in effect have deleterious consequences for poverty and the widening gap between the rich and poor, increasing rather than decreasing the areas now considered “food deserts.”

To be sure, there is much speculation about how AVs will actually provide net benefits for society. Many reports, from a variety of perspectives, estimate that AVs will ensure that parking garages are turned into beautiful parks and garden spaces (Marshall 2017), and that the elderly, disabled and vulnerable have access to safe and reliable transport (Madia 2017, Anderson 2014, West 2016, Bertoncello and Wee 2015, UK House of Lords 2017). But less attention appears to be paid to how the present limitations of the technology will require substantial reformulation of urban planning, infrastructure, and the lives and communities of those around (and absent from) AVs. Hyper-loops for AVs, for example, may require pedestrian overpasses, or, as one researcher suggests, “electric fences” to keep pedestrians from crossing at the street level (Scheider 2017). Others suggest that increased adoption of AVs will need to be cost- and environmentally beneficial, so they will need to be communal and operated in larger ride shares (Small 2017). If this is so, then questions arise about the presence of surveillance and intervention for potential crimes, harassment, or other offensive behavior.

All of these seemingly small side or indirect effects of AVs will normalize usage, engender rules of behavior and systems of power, and privilege particular values over others. In the Foucauldian sense, the adoption and deployment of the AV will begin to change the organization and construction of “collective infrastructure,” and this will require some form of governmental rationality—an imposition of structures of power—on society (Foucault 1982). For this sort of urban planning, merely to permit the greater adoption of AVs is a political choice; it will enable a “certain allocation of people in space, a canalization of their circulation, as well as the coding of their reciprocal relations” (Foucault 1982). Thus making these types of decisions transparent and apparent to the designers and engineers of AVs will help them to see the assumptions that they make about the world and what they and others value in it.

Conclusion

Ethics is all around us because it is a practical activity concerned with human behavior. In all of the decisions that humans make, from the mundane to the morally important, there are values that affect and shape our common world. Viewed from this perspective, humans are constantly engaged in a sequential decision-making problem, trading off values all the time—not making one-off decisions intermittently. As I have tried to argue here, thinking about ethics through this one-shot, extremely tragic case is unhelpful at best. It distracts us from identifying the real safety problems and value tradeoffs we need to be considering with the adoption of new technologies throughout society.

In the case of AVs, we ought to consider how the technology actually works, how the machine actually makes decisions. Once we do this, we see that the application of Trolleyology to this problem is not only a distraction, it is a fallacy. We aren’t looking at the tradeoffs correctly, for we have multiple competing values that may be incommensurable. It is not whether a car ought to kill one to save five, but how the introduction of the technology will shape and change the rights, lives, and benefits of all those around it. Thus, the setup of a Trolley Problem for AVs ought to be considered a red herring for anyone considering the ethical implications of autonomous vehicles, or even of AI generally, because the aggregation of goods and harms in Trolley cases does not travel to the real world in that way. The cases fail to scale, they are incommensurate, they are riddled with uncertainty, and causality is fairly tricky when we want to consider second- and third-order effects. Thus, if we want to get serious about ethics and AVs, we need to flip the switch on this case.

Roff, Heather M. (2017). “How Understanding Animals Can Help Us to Make the Most out of Artificial Intelligence” The Conversation, 30 March. Available online at: https://theconversation.com/how-understanding-animals-can-help-us-make-the-most-of-artificial-intelligence-74742. Accessed 12 December 2017.

Science and Technology Select Committee, United Kingdom House of Lords (2016-2017). “Connected and Autonomous Vehicles: The Future?” Government of the United Kingdom. Available online at: https://publications.parliament.uk/pa/ld201617/ldselect/ldsctech/115/115.pdf. Accessed 15 January 2018.

Wood, Allen. (2011). “Humanity as an End in Itself” in (Ed.) Derek Parfit, On What Matters, Vol. 2, Oxford: Oxford University Press.

By Heather M. Roff
Introduction
Often when anyone hears about the ethics of autonomous cars, the first thing to enter the conversation is “the Trolley Problem.” The Trolley Problem is a thought experiment where someone is presented with two situations that present nominally similar choices and potential consequences (Foot 1967; Kamm 1989; Kamm 2007; Otsuka 2008; Parfit 2011; Thomson 1976; Thomson 1985; Unger 1996). Situation A (known as Switch) is where a runaway trolley is driving down a track and will run into and kill five workmen unless an observer flips a switch and diverts the train down a sidetrack where it will kill only one workman. Situation B (known as Bridge) has the observer crossing over a bridge, where she sees that the five people will be killed unless she pushes a rather large and plump individual off the bridge onto the tracks below, thereby stopping the train and saving the five. Most philosophers agree that it is morally permissible to kill the one in Switch, but others (including most laypeople) think that it is impermissible to push the plump person in Bridge (Kagan 1989). The cases have the same effect: kill one to save five. This discrepancy in intuition has led to much spilled ink over “the problem” and to an entire field of enquiry, “Trolleyology.”
Applied to autonomous cars, at first glance, the Trolley Problem seems like a natural fit. Indeed, it could be the sine qua non ethical issue for philosophers, lawyers, and engineers alike. However, if I am correct, the introduction of the Trolley is more like a death knell for any serious conversation about ethics and autonomous cars. The Trolley Problem detracts from understanding how autonomous cars actually work, how they “think,” how much influence humans have over their decision-making processes, and the real ethical issues that face those advocating the advancement and deployment of autonomous cars in cities and towns the globe over. Instead of thinking about runaway trolleys and killing bystanders, we should have a better grounding in the technology itself. Once we have that, we see how new—more complex and nuanced—ethical questions arise. Ones that look very little like trolleys.
I argue that we need to understand that autonomous vehicles (AVs) will be making sequential decisions in a dynamic environment under conditions of uncertainty. Once we understand that the car is not a human, and that the decision is not a single-shot, black and white one, but one that will be made at the intersection of multiple overlapping probability distributions, we will see that “judgments” about what courses of action to take are going to be not only computationally difficult, but highly context dependent and, perhaps, unknowable by a human engineer a priori. Once we can disabuse ourselves of thinking the problem is an aberrant one-off ethical dilemma, we can begin to interrogate the foreseeability of other types of ethical and social dilemmas.
This paper is organized into three parts. Part one argues that we should look at one promising area of robotic decision theory and control, Partially Observable Markov Decision Processes (POMDPs), as the most likely mathematical model an autonomous vehicle will use.1 After explaining what this model does and looks like, part two argues that the functioning of such systems does not comport with the assumptions of the Trolley Problem. This entails, then, that viewing ethical concerns about AVs from this perspective is incorrect and will blind us to more pressing concerns. Finally, part three argues that we need to interrogate what we decide are the objective and value functions of these systems. Moreover, we ...
Fri, 14 Dec 2018 15:51:57 +0000

]]>
By Rachel Slattery

Heather M. Roff is a nonresident fellow in the Foreign Policy program at Brookings. Her research interests include the law, policy, and ethics of emerging military technologies, such as autonomous weapons, artificial intelligence, robotics, cybersecurity, and more recently quantum, as well as international security and human rights protection. Her recent work focuses on generating normative principles for the use of AI for national defense, as well as particular epistemological issues with AI for defense related applications. She is author of “Global Justice, Kant and the Responsibility to Protect” (Routledge 2013), as well as numerous scholarly articles.

Roff received her doctorate in political science from the University of Colorado at Boulder (2010). She is currently a senior research analyst at the Johns Hopkins Applied Physics Lab (APL) in the National Security Analysis Department. Prior to joining APL, she was a senior research scientist at DeepMind, one of the leading artificial intelligence companies, in their ethics & society team. Prior to DeepMind, she was a senior research fellow in the Department of Politics and International Relations at the University of Oxford; was a research scientist in the Global Security Initiative at Arizona State University; and held faculty positions at the Korbel School of International Studies at the University of Denver, the University of Waterloo, and the United States Air Force Academy. She has also held multiple fellowships at New America (2015-17).

She has provided expert testimony and advice regarding lethal autonomous weapons and artificial intelligence to the United Nations Convention on Certain Conventional Weapons and the International Committee of the Red Cross, as well as the United Nations Institute for Disarmament Research, the United Kingdom Ministry of Defense, the Canadian Department of National Defense, and the U.S. Department of Defense.

Moreover, she has received funding awards from the Future of Life Institute and the Canadian Department of National Defense for her work on meaningful human control, a concept generated with the disarmament NGO Article 36 that calls for structures and limits on the design, development, and deployment of autonomous technologies in armed conflict. “Meaningful human control” has sparked international attention from scholars, practitioners, and industry alike.

She blogs for the Huffington Post and the Duck of Minerva, and has written for Wired, the Bulletin of the Atomic Scientists, Slate, Defense One, the Wall Street Journal, the National Post, and the Globe and Mail. She is currently working on various projects related to the ethics of artificial intelligence for national security and defense.

AI, cybersecurity, and the future of geopolitics
Fri, 14 Dec 2018
By John Villasenor, Fred Dews

Artificial intelligence is now in every domain of our lives, from commerce to politics, medicine to entertainment, and global trade to geopolitics. In this episode, expert John Villasenor discusses the important intersection of AI, cybersecurity, and geopolitics. Villasenor is a nonresident senior fellow in the Center for Technology Innovation at Brookings and a professor of electrical engineering, public policy, and management, and also a visiting professor of law, at the University of California, Los Angeles.

Also in this episode, Senior Fellow Jennifer Vey of the Metropolitan Policy Program introduces the new Anne T. and Robert M. Bass Center for Transformative Placemaking.
Related content:
Artificial intelligence and the future of geopolitics
Weapons of the weak: Russia and AI-driven asymmetric warfare
A Blueprint for the Future of AI
—
Subscribe to Brookings podcasts here or on Apple Podcasts, send feedback email to BCP@Brookings.edu, and follow us and tweet us at @policypodcasts on Twitter.
The Brookings Cafeteria is a part of the Brookings Podcast Network.

Opportunity Industries
Fri, 14 Dec 2018
By Chad Shearer, Isha Shah

In recent decades, technological change and the global integration it enables have been rapidly reshaping the U.S. economy. These forces have improved the potential of some individuals to thrive, but diminished prospects for others striving to reach or maintain their place in the American middle class. Amid these changes, how and where will individuals find durable sources of good jobs?

Certainly, education is an important part of the picture, particularly for enabling upward mobility among young people. But tens of millions of adults who are already a critical part of the American workforce also deserve a chance to obtain better jobs, with higher pay and benefits.

This report shows that the industrial structure and growth of metropolitan economies—in particular, whether they provide sufficient numbers of jobs in opportunity industries—matters greatly for workers’ ability to get ahead economically. It examines the presence of occupations and industries in the nation’s 100 largest metropolitan areas that either currently or over time provide workers access to stable middle-class wages and benefits, particularly for the 38 million prime-age workers without a bachelor’s degree.

Click here to download detailed data for metro areas »
Interactive by Alec Friedhoff

What GM’s layoffs reveal about the digitalization of the auto industry
Thu, 13 Dec 2018
By Mark Muro, Robert Maxim

News that General Motors plans to cut up to 14,800 jobs in the U.S. and Canada was initially reported as a conventional business-cycle adjustment — a “trimming of the sails.” The main causes of the cuts were understood to be slowing demand in the U.S. and China, slumping demand for sedans, and the need to reduce over-capacity in North America.

Then the story turned political, as President Trump lashed out at GM while some observers framed the news as a blow to the president’s promises to bring jobs back to the U.S. heartland.

And then others focused on the community disruption of plant closings in the Rust Belt and how it might be mitigated.

While all of those perspectives are relevant, the most revealing aspect of GM’s announcement may well be what the layoffs say about broader technology trends. GM’s layoffs are not just incremental but existential, in that sense: They are about accelerating the staffing changes mandated by the company’s aggressive transition from analog to digital products and from gasoline to electric power. As such, the new layoffs (and associated future hirings) are likely an augury of much more disruption coming — in the auto sector, for sure, but also in firms all across the economy.

Central to GM’s announcement is, in our view, what we call the “digitalization of everything.” By that, we mean that GM’s layoffs significantly reflect the talent and workforce strains associated with the diffusion of digital and electronic technologies into nearly every industry, business, and workplace in America.

Specifically, the advent of consumer electronics, IT, electric and battery powered drivetrains, and — soon — autonomy in the automotive industry are placing excruciating new demands on its workforce, and forcing painful change. Where once the auto-sector workforce was anchored by workers responsible for mechanical and machine-maintenance roles, the need for electrical skills is now growing exponentially due to the increasing electrical and electronic content of the car. Likewise, where mechanical engineers once predominated, the original equipment manufacturers (OEMs) are increasingly looking for software engineers, energy management experts, and data scientists able to build electric and self-driving vehicles.

Our recent analysis of the digital content of hundreds of occupations in the American economy shows that the digital content of auto work has soared in the last 15 years, with huge implications for workforce development in the sector. The mean digitalization score of workers in the advanced manufacturing sector, of which auto is a part, surged 60%, from 24 to 39 since 2002. This has reoriented the occupational mix of the industry, changing its hiring needs and layoff decisions. As of 2016, for example, the fastest growing occupations in the auto sector were computer network support specialists and software developers while two of the fastest shrinking were drilling and boring machine operators and sheet metal workers. Similar patterns of cutting and hiring are visible in last week’s announcement.
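The quoted jump is easy to verify from the two scores: a rise from 24 to 39 is a 62.5 percent increase, which the text rounds to 60%.

```python
# Check the digitalization-score claim in the text: a rise from 24 to 39 since 2002.
start_score, end_score = 24, 39
pct_increase = (end_score - start_score) / start_score * 100
print(f"{pct_increase:.1f}%")  # prints "62.5%", consistent with the rounded "surged 60%"
```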

Nor is that all: Look for more of the same in the future — from GM, and from all other companies in the sector. According to our calculations, employing task-level work assessments provided by the McKinsey Global Institute, nearly 65% of all auto-sector jobs have task-level automation potentials of at least 70% in the next 10 or 15 years, meaning they are potentially susceptible to significant work changes, if not termination. With that said, as one of GM’s statements last week noted, “GM’s transformation also includes adding technology and engineering jobs to support the future of mobility, such as new jobs in electrification and autonomous vehicles.”

In that vein, last week’s layoffs surely were a response to changing near-term market conditions. But beyond that, the cuts went much deeper, to respond to massive, technology-driven changes in the nature of the work at hand.

As to what needs to be “done” about these transitions, the proper response almost certainly bears no resemblance to any of the ideas President Trump offered last week. Trump is fuming at the plant closures, and appears to want to reverse the actions GM is taking to stay ahead of emerging technology and skills changes. To that end, Trump called on GM to close one of its plants in China. And he threatened to strip the company of modest federal incentives to stimulate electric car production. However, that would only hurt GM’s and America’s competitiveness by hindering the company’s plans to invest more in the technology and people needed to produce electric and self-driving cars as those become viable products.

What should be done instead? As a nation, we should be embracing transformative technology and its widespread deployment whether it be electrification and hyper-efficient batteries in the auto sector or automation and AI more broadly. Likewise, we should be increasing our investments in education and workforce training (and re-training), with a focus on digital skills. Only in that way will workers be able to ride out the coming waves of tech-driven staffing changes. And finally, the nation needs to do much more to provide basic supports for people and places struggling with the harsh impacts of labor market change. To be sure, workers must adapt, but firms, governments, and regions all have a responsibility to help.

All of which is to say: GM’s announcement of layoffs last week is much more than a routine course-adjustment by a company alert to market softening after a good run. Rather, it’s a wake-up call about the labor market implications of the “digitalization of everything.”

The robots are coming. Let’s help the middle class get ready.
Thu, 13 Dec 2018
By Harry J. Holzer

Are U.S. workers now threatened by a new and powerful form of automation that could displace tens of millions from their current jobs and dislodge them from the middle class? If so, are college-educated or professional workers at the upper range of the middle class as much threatened as those with fewer such credentials at the lower end? And can policy do much to protect the middle class status of either group?

Old fears, new trends

The fear that automation will eliminate millions of jobs, leaving masses of workers jobless, has periodically emerged in industrialized countries at least since the Luddites first made that claim in Britain in the early 19th century. In the U.S., such fears occasionally surface as well, as they did during a brief “automation scare” in the late 1950s and early 1960s, when a wide swath of workers felt some risk of displacement.

To date, these fears have never proven accurate in any industrial country. New jobs always emerge to replace those that have been lost. This is true because automation raises worker productivity and reduces the costs and prices of goods and services, which makes consumers richer. They can now afford to buy more products than before, which then creates new jobs for workers to fill.

But there are costs – even for the middle class

The adjustment process I describe above does not mean that no one suffers from automation. Some workers are directly displaced from their existing jobs; perhaps they can retrain for another job in the same firm or industry, and perhaps not. Most in the latter situation become unemployed, and suffer lengthy spells without work – sometimes for years – before accepting new jobs at lower wages or leaving the work force altogether.

Displaced workers who are older or less educated are more likely to leave the labor force rather than retrain for another job. For these workers, the thought of returning to a 2- or 4-year college to learn a new skill, or to start a low-wage entry level job as a trainee, is extremely unappealing – and may not be worth it if they only have a decade or two left to work. Those who were unsuccessful at school earlier in their lives, and emerged with at most a high school diploma, are especially poor candidates for more education later. And facing the prospect of nothing but low-wage work for the rest of their lives can discourage them from ever taking another job.

And automation can hurt workers beyond those directly displaced. Since 1980, and perhaps well before, economists believe new technologies have been “skill-biased” – meaning that they substitute for less-educated workers broadly in the labor market, reducing the demand for their labor and thus their wages and employment rates. In contrast, those with at least some postsecondary education, especially bachelor’s (BA) degrees or higher, tend to complement the new technologies in a variety of ways – as engineers or technicians, as those who market and sell the new products, as those providing the health care we demand with our higher incomes, or as those whose creativity in music or writing can now be enjoyed by vastly greater audiences. The employment rates and earnings of these complementary workers rise as a result of automation.

Indeed, most labor economists believe that skill-biased technical change (SBTC) has been the largest cause of growing inequality between college-educated and other workers in the past four decades. Other forces have mattered as well – like globalization and weakening protections from unions and minimum wage laws. But new technologies have been the largest source of the new inequality.

Accordingly, SBTC has made it harder for workers without postsecondary credentials, especially those without bachelor’s (BA) degrees, to join or remain part of the middle class, while those with BAs and higher have thrived. SBTC has particularly thinned the ranks of jobs in the middle of the earnings distribution for workers with high school or less education, like production and clerical jobs, which allowed workers who held them to join the middle class in larger numbers in the decades after World War II.

And in regions where large numbers of these jobs have disappeared, such as the industrial Midwest and rural areas, broader declines in the local economies have hurt workers in other industries as well. Those who cannot or will not earn a postsecondary degree or relocate to an economically growing region – perhaps because of strong social ties or other barriers to work (like a substance addiction, a criminal record, or a disability) – will remain in those areas and perhaps forego work permanently.

Is this time different?

While new digital technologies and other forces caused rising inequality in the late 20th and early 21st centuries, many Americans now fear a new and potentially more threatening form of automation, in which even those with BAs or professional degrees could be displaced and dislodged from the middle class. Thus, it is possible that the employment consequences of this new automation will be more negative for the middle class than they were during any episode in the past.

The greater potential for future employment loss exists because of the much greater potential reach of artificial intelligence (AI) into what have until now been exclusively human functions. AI’s ability to read patterns in the physical environment and in human interactions, as well as its ability to learn over time and adjust itself accordingly, will likely enable robots and other forms of automation to perform tasks that historically have been undertaken by humans.

On the more positive side, the higher productivity associated with such automation will reduce production costs and prices of a wide range of goods and services, effectively raising overall incomes and consumer spending, which will then generate millions of new jobs in existing and new industries. In addition, people in the jobs being at least partially automated will increasingly be able to focus on other tasks that robots and automation still cannot perform. Various forms of social interaction, more complex modes of analyzing data and making judgments, or more creative tasks will remain primarily within the human realm for the near future.

This means that, within most jobs, some tasks will be automatable and some will not. The higher the percentage of tasks in any job that can be automated, the greater the likelihood of worker displacement; and the greater the ability of the worker to learn new tasks on the job, the greater their likelihood of being retained by their employer and retooled for new tasks.

Jobs impact: what do we know?

Given these facts, can we estimate what fractions of U.S. workers face potential displacement, and in which jobs and industries? And does this information give us a greater sense of how to help more workers enter or remain in the middle class?

Over the past few years, estimates of potential displacement have been generated by analysts with two kinds of information: 1) the task content of occupations today in the U.S. and other industrial countries; and 2) estimates by computer scientists of which tasks will become readily automatable over the next few decades.

Still, the estimates of task automation and therefore potential job displacement rates are instructive, and merit some attention here. I therefore present a few such estimates from a very recent study by economists at the Organization for Economic Cooperation and Development (OECD).

The OECD study estimates worker displacement rates in the U.S. and other OECD countries over the next few decades, distinguishing workers who will almost certainly be displaced – those with task replacement rates of 70 percent or higher – from those facing high potential task replacement, in the 50-70 percent range, who still have potential for task readjustment and retraining within their current jobs.

The results show that, in the U.S. and elsewhere, about 10 percent of workers will face high risks of complete displacement, while another 30 percent or so will face significant potential task replacement risk.

Some specific occupations have higher or lower potential task replacement rates, as the next chart shows:

Occupations where a great deal of routine tasks are performed, such as machine or vehicle operators and unskilled laborers, face the highest potential displacement risks; while those requiring more nuanced social interactions and analysis, like doctors, lawyers and managers more generally, face moderate risks (in the range of 20-30 percent) but will likely not be completely displaced.
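The thresholds described above amount to a simple partition of workers by the share of their job's tasks that could be automated. A minimal sketch of that partition follows; the occupation shares are hypothetical placeholders for illustration, not OECD figures.

```python
# OECD-style risk buckets as described in the text: a task-automation share of
# 70% or more counts as likely displacement; 50-70% counts as significant task
# change with room for retraining; below 50% counts as lower risk.

def risk_bucket(task_automation_share):
    """Classify a worker by the fraction of their job's tasks that could be automated."""
    if task_automation_share >= 0.70:
        return "high risk of displacement"
    if task_automation_share >= 0.50:
        return "significant task change"
    return "lower risk"

# Hypothetical shares, chosen to mirror the text's examples (routine-heavy jobs
# at the top, socially and analytically complex jobs at the bottom).
workers = {
    "machine operator": 0.78,
    "clerical worker": 0.62,
    "manager": 0.25,
}
for occupation, share in workers.items():
    print(f"{occupation}: {risk_bucket(share)}")
```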

Viewing these estimates, it is immediately apparent that automation will continue to have a “skill bias” – in other words, less educated workers will remain at higher risk of direct displacement than most with postsecondary education, and will likely face lower demand in the labor market – resulting in lower wages and employment for them. Accordingly, not only will substantial fractions of workers face direct displacement – some of whom can be retrained for new jobs and some not – but overall labor market inequality between those with and without college education will continue to rise. Though rising productivity should help raise compensation levels overall, the skill bias of automation will likely leave many workers worse off, even if they take new jobs.

In addition, those with more education will more easily retrain for new tasks at work or completely new jobs than those without it, and the better-educated will also be more open to relocating to regions with stronger economic growth. Accordingly, the more highly educated members of the middle class will continue to have higher odds of remaining there, while the less educated in the middle class may find their positions there increasingly precarious.

And, if this occurs, the fallout might well be political as well as economic. The populism and nationalism that we find in industrialized democracies around the world might well grow stronger, and render politics in many countries more chaotic and polarized. This, in turn, would likely impede governments there from enacting policies to help protect the middle class from the worst effects of automation, and assist in the adjustments to labor market disruptions that will inevitably occur.

Five policies to help the middle class

A range of sensible policies at the federal and state levels can help limit worker risks of displacement and support adjustments when such displacements occur.

Education for 21st century skills

For instance, students at all levels of education will need better preparation in what are often called “21st century skills.” These include communication and a range of social and interactive skills (such as the ability to work in teams), as well as critical thinking and various kinds of problem-solving abilities. Workers with more such skills will likely be more complementary with and less substitutable by automation, while also finding it easier to retrain for a range of new tasks that will need to be performed following automation. It thus makes sense that educators in primary, secondary and postsecondary institutions should put greater emphasis on teaching such general skills, and policies should encourage this.

But the issue is complicated by the following fact: the programs that most successfully raise skills among low-income workers (when rigorously evaluated), and prepare them for possible entry into the middle class, tend to be specific to a sector with high unmet demand for workers, such as health care, advanced manufacturing, information technology, and transportation/logistics. Such programs include those based on partnerships between industry representatives and community colleges, brought together by a knowledgeable intermediary, or those more specific to employers in these industries, like apprenticeships.

Of course, the narrower and more specific the training, the greater the risks of displacement for workers trained in those skills in a dynamic and uncertain future labor market. On the other hand, given high current demand and compensation for such skills, and possible entry into the middle class for workers who have them, it would be foolish not to provide them to currently low-income workers. Still, having more of the 21st century skills would likely help workers perform better in these jobs, and make them more trainable in the future. Thus, wherever they can be provided, a good mix of general and specific skills should be imparted to those being trained for specific occupations and industries. This should be true in high-quality career and technical education (CTE) beginning in secondary school as well as in higher education.

Lifelong learning accounts / Retraining support

Besides earlier educational preparation, what more can be done to help workers adapt when automation threatens them with job displacement? Financial support for retraining could be important here, and two forms of such funding could particularly help. First, states can create personal “lifelong learning” accounts into which workers pay a small percent of payroll each period, thus creating a fund for new education or training if/when displacement occurs. Indeed, the states of Maine and Washington already do so, and more might soon join them. Second, federal or state governments can provide financial and technical assistance to firms for on-the-job training, thereby encouraging employers to retrain rather than displace their employees.
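To make the mechanics concrete, here is a minimal sketch of how such an account accumulates (the one percent contribution rate, wage level, and zero-interest default below are illustrative assumptions, not features of the Maine or Washington programs):

```python
def account_balance(annual_wage, contribution_rate, years, annual_return=0.0):
    """Accumulate a lifelong-learning account from a payroll set-aside.

    Each year the worker contributes a fixed share of wages; any prior
    balance grows at annual_return before the new contribution is added.
    """
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + annual_return) + annual_wage * contribution_rate
    return balance

# A worker earning $40,000 who sets aside 1% for 10 years (no interest)
# accumulates $4,000 toward retraining.
print(account_balance(40_000, 0.01, 10))
```

Even a modest set-aside compounds into a meaningful retraining fund over a working decade, which is the logic behind these accounts.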

This will have some appeal to employers, since it will enable them to retain employees with good track records, and avoid recruitment and screening costs for new employees of uncertain ability. And the new expenditures could perhaps be financed by a new tax on worker displacement, since displacements impose a cost on society that employers should be required to partly bear.

Workforce support

A wider range of additional policy initiatives could help as well. First, a more robust set of workforce services would help workers reeducate and relocate themselves. Information on career and training options as well as new jobs is available to workers at over 3,000 “America’s Job Centers” (formerly called One-Stop offices) around the country, though their funding and therefore their staffing quality has been quite limited. Improving the academic and career guidance available at community colleges would be very important as well.

Wage insurance

Many workers who will be displaced over time but not retrained will likely face a future of lower wages than before or no work at all. For them, some form of “wage insurance” makes sense. Such a program would compensate displaced workers for some part of their job loss – say, half – for a period of years. Thus, workers who formerly earned $30 an hour but only $20 now would receive $5 from the government for a period of time, such as five years. They would therefore be incentivized to work and would also receive needed financial assistance at the same time.
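The payment rule described here is simple enough to sketch directly (the function name and the default 50 percent replacement share are mine, chosen to match the article’s half-the-loss example):

```python
def wage_insurance_payment(old_wage, new_wage, replacement_share=0.5):
    """Hourly wage-insurance payment: replace a share of the wage loss.

    A displaced worker is compensated for part of the gap between the old
    and new wage; no payment is due if the new job pays as much as the old.
    """
    loss = max(0.0, old_wage - new_wage)
    return replacement_share * loss

# The article's example: $30/hour before displacement, $20/hour after,
# with half the loss replaced -> a $5/hour supplement.
print(wage_insurance_payment(30, 20))  # 5.0
```

Because the supplement only tops up earned wages, the worker keeps an incentive to take and hold a job rather than remain out of work.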

Unemployment and Disability Insurance reform

Finally, a set of reforms in our Unemployment and Disability Insurance systems might be warranted too. Reforms in the former might encourage unemployed workers to obtain new training while they receive their stipends, while reforms in the latter would encourage workers with only moderate health conditions, and their employers, to choose work over permanent exit from the labor force.

I believe this set of proposals makes vastly more sense than others, like guaranteed public employment or universal basic income payments, which would be hugely more expensive to implement and would not incentivize everyone to do whatever is necessary to stay employed.

Employers: Take the high road

Though most of our policies above are aimed at helping workers adjust to automation, we should also have an overall policy goal for the employer side of the job market: encouraging them to take the “high road” in worker compensation and to actively invest in their employees as workplaces automate, to increase their productivity and performance. Economists have long argued that, in many industries, employers have some choice over whether to compete on the basis of high worker productivity and performance or merely low labor costs. Employers choose between these strategies, often called “high road” and “low road” approaches, early in the lives of their establishments, though they can also shift their strategies over time. Since “high road” employers invest more in their workers’ skills, and often allow them more “voice” in the operation of the workplace, they will be more open to implementing new automation in ways that are “worker-friendly” and to retraining them over time. Such approaches should be encouraged by policy at all levels.

Overall, the policies I propose will not completely protect the middle class from the disruptions and displacements they will likely face in the coming decades. But they will certainly help millions of workers join this class or remain there in the face of displacement risks, and will raise the net benefits that workers ultimately gain from automation.

By Harry J. Holzer

Are U.S. workers now threatened by a new and powerful form of automation that could displace tens of millions from their current jobs and dislodge them from the middle class? If so, are college-educated or professional workers at the upper range of the middle class as much threatened as those with fewer such credentials at the lower end? And can policy do much to protect the middle class status of either group?

Old fears, new trends

The fear that automation will eliminate millions of jobs, leaving masses of workers jobless, has periodically emerged in industrialized countries at least since the Luddites first made that claim in Britain in the early 19th century. In the US, such fears occasionally surface as well, as they did during a brief “automation scare” in the late 1950s and early 1960s, when a wide swath of workers felt some risk of displacement.

To date, these fears have never proven accurate in any industrial country. New jobs always emerge to replace those that have been lost. This is true because automation raises worker productivity and reduces the costs and prices of goods and services, which makes consumers richer. They can now afford to buy more products than before, which then creates new jobs for workers to fill.

But there are costs – even for the middle class

The adjustment process I describe above does not mean that no one suffers from automation. Some workers are directly displaced from their existing jobs; perhaps they can retrain for another job in the same firm or industry, and perhaps not. Most in the latter situation become unemployed, and suffer lengthy spells without work – sometimes for years – before accepting new jobs at lower wages or leaving the work force altogether.

Displaced workers who are older or less educated are more likely to leave the labor force rather than retrain for another job. For these workers, the thought of returning to a 2- or 4-year college to learn a new skill, or to start a low-wage entry level job as a trainee, is extremely unappealing – and may not be worth it if they only have a decade or two left to work. Those who were unsuccessful at school earlier in their lives, and emerged with at most a high school diploma, are especially poor candidates for more education later. And facing the prospect of nothing but low-wage work for the rest of their lives can discourage them from ever taking another job.

And automation can hurt workers beyond those directly displaced. Since 1980, and perhaps well before, economists believe new technologies have been “skill-biased” – meaning that they substitute for less-educated workers broadly in the labor market, reducing the demand they face for their labor and thus reducing their wages and employment rates. In contrast, those with at least some postsecondary education, especially with bachelor’s (BA) degrees or higher, tend to complement the new technologies in a variety of ways – as engineers or technicians, or those who market and sell the new products, or those providing the health care that we demand with our higher incomes, or whose creativity in music or writing can now be enjoyed by vastly greater audiences. The employment rates and earnings of these complementary workers rise as a result of automation.

Indeed, most labor economists believe that skill-biased technical change (SBTC) has been the largest cause of growing inequality between college-educated and other workers in the past four decades. Other forces have mattered as well – like globalization and weakening protections from unions and minimum wage laws. But new technologies have been the largest source of the new inequality.

Accordingly, SBTC has made it harder for workers without postsecondary credentials, especially those without bachelor’s (BA) degrees, to join or remain part of the middle class, while those with BAs and ...

“How to adjust to automation” (Brookings, December 13, 2018)

By Robert E. Litan

President Clinton’s first inaugural address contained a phrase that is as relevant today as it was then: “the urgent question of our time is whether we can make change our friend and not our enemy.” It’s a phrase that, as far as I can tell, he never used again over his eight years in office.

It’s not difficult to understand why. Many people are fearful of change, even though evolution and thus constant change—biological, political, and organizational—is a central part of life and history. Changes that come too rapidly, pushing people out of jobs or suppressing their wages, without institutions or policies able to restore the status quo, are especially disturbing. We see the results all around us today: high and rising rates of opioid abuse, and political and religious anger that is dividing our society and our politics.

While President Trump has exacerbated these divisions, the underlying trends have been in the making over the last three decades of stagnant wages (moderate increases in total compensation have been swallowed up by rising health care costs) and increasing income and wealth inequalities. In combination, the two trends are putting the American Dream of rising living standards out of reach for ever larger numbers of our citizens.

Many villains have been blamed for these developments, including increased trade and outsourcing, the rise of the gig economy and freelance work, the decline in unionization, and tax reform tilted to the rich. But nothing has been more important in driving increased inequality in particular and the threat it poses to the American Dream than technological change, or “automation”—covering the combination of advances in artificial intelligence (AI), semiconductors, and robotics. That is because technological changes over the past several decades have favored those with advanced skills, especially those in information technology.

Addressing the challenge of automation: The need for lifelong learning

The logical response to continuing automation is for the government to strike a new 21st century bargain with its citizens. The government should help them to help themselves throughout their working lives to upgrade or acquire the skills the market demands to fill the new jobs that will be created by automation, either directly (such as making AI work better) or indirectly (working in other sectors, such as entertainment, leisure, and health, where consumers are likely to spend much of the cost savings generated by automation). This can be accomplished through government matches to tax-deferred training accounts and/or through lifetime training loan accounts, with repayments tied to future income in ways discussed shortly that minimize federal subsidies and thus pressures on an already excessive and mounting federal deficit.

The case for assisting lifetime training is especially strong given the inability of most Americans to pay for training when they want or need it (ideally before they are forced to). As it is, four in 10 Americans can’t meet a $400 financial emergency without borrowing or help from family and friends. Almost 8 in 10 are living from paycheck to paycheck.

The illusion of cost-free solutions

But federal elected officials or candidates, in both parties, have taken very different approaches: advancing seemingly cost-free “solutions” to stagnant and unequal wages that pay little or no attention to the lifetime challenges automation poses to all workers—challenges that some of these “solutions” could aggravate.

Tariffs protecting U.S. industries and firms in the name of “fair trade” simplistically seem to promise U.S. workers that their jobs or wages, or both, will also be better protected. Not since the 1930s has an American administration embraced tariff hikes as enthusiastically as has the Trump Administration. Few Democrats have challenged the administration’s reversal of trade liberalization—a central tenet of the post-War international order spanning seven decades.

The benefits of trade protection are temporary, if not illusory, however. Higher tariff walls will not stop automation, and thus, the ongoing replacement of workers with machines. Even when new tariffs induce shuttered plants to reopen—a rarity—only limited numbers of former workers tend to be called back, either because those workers have moved away, or because the plants have been modernized to require fewer workers than before.

Moreover, any “benefits” of trade protection for import-competing firms and workers come with costs, both to workers at these firms—through higher prices they pay for the goods they buy—and to workers in exporting firms, whose costs go up because of tariffs and whose foreign markets shrink when other countries retaliate, as China and Europe have done in response to the Trump tariffs.

Democrats have supported other policies that promise immediate wage increases, primarily for those at the bottom of the income scale: stronger labor laws to facilitate formation of unions and their exercise of bargaining power to lift wages, and a substantial increase in the federal minimum wage, to $15/hour (up from the current $7.25, which is now exceeded in most states). Such policies indeed will raise wages for those fortunate enough to keep their jobs, but wage increases that outpace productivity advances also will encourage firms to automate more quickly, inducing more labor market disruption and anxiety, while raising costs and hence prices of goods and services that people buy. Like tariffs, the cost-raising impacts of these policies are hidden from view—politically the best kind, since that makes the proposals seem costless—because no seller earmarks what portion of its prices are due to any single policy or set of them.

For those higher up the income scale, or aspiring to be, more Democrats are embracing tuition-free four-year college at state universities, or as a fallback, tuition-free community college. Given abundant evidence linking educational attainment to lifetime earnings, coupled with the benefits to the economy and society of a more educated public, there is a strong case for reducing the financial burdens of paying for college so more students can attend and complete their degrees. But given the current enormous federal deficit, now in the range of five percent of GDP and projected to double over the next two decades, any proposals for additional federal spending must generate the greatest public benefit for the buck and be paid for, either through higher taxes or reduced spending elsewhere.

The Obama administration’s proposal to pay for students’ community college tuition has an easier time meeting both tests than free college for every student from families earning less than $125,000—the income threshold in the plans supported by Senator Sanders and Democratic presidential candidate Hillary Clinton during the last campaign. For one thing, the price tag of the community college proposal, at roughly $6 billion a year over 10 years, is about 1/7 of the $40 billion annual cost of Sanders’ original free four-year college proposal (which also included free community college tuition), and thus would require far less additional revenue or expenditure reduction. The community college proposal is also more defensible on equity grounds, since attendees of community colleges are likely to come from lower income families than those who go on to four-year colleges. Furthermore, free tuition at community colleges would benefit some adults already in the workforce who want to upgrade their skills or change careers; this is much less likely for those enrolled in four-year colleges.

The deficient political response

Both the community and four-year college proposals are designed to benefit mainly young people, right after high school, but neither addresses the continuing lifetime needs for skills upgrading of people once they finish their formal schooling. Why is this the case? One possibility is that politicians would rather promise voters what is, or can be construed as, an immediate solution to their work-related anxieties, ideally at no cost to them, than tell them their largest challenges are life-long and must be met through shared responsibility. As I envision it, the federal government should provide the up-front financing for training you undertake and eventually pay for, to the extent your future earnings allow. Those with higher earnings paths would repay more than they borrow, with interest (up to a cap), thereby offsetting the costs of those who do less well. Still, income-contingent retraining loans, like those available for college today, are a better deal than having the government guarantee your mortgage, which lowers the interest rate you pay modestly, but whose principal you are obligated to pay back regardless of your future income (and if you default, you can lose the equity in your home).
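A minimal sketch of how an income-contingent repayment of this kind might be computed (the $25,000 income floor and 5 percent repayment rate are illustrative assumptions, not figures from the proposal):

```python
def annual_repayment(income, threshold=25_000, rate=0.05,
                     remaining_obligation=float("inf")):
    """Income-contingent repayment: pay a share of income above a floor.

    Borrowers owe nothing in years when income is at or below the
    threshold; otherwise they pay `rate` of the excess, never more than
    what remains of their capped total obligation.
    """
    if income <= threshold:
        return 0.0
    return min(rate * (income - threshold), remaining_obligation)

# A borrower earning $45,000 pays 5% of the $20,000 above the floor.
print(annual_repayment(45_000))
# One earning $20,000 pays nothing that year.
print(annual_repayment(20_000))  # 0.0
```

The cap on total repayment is what distinguishes this from an open-ended tax, while the income floor is what protects borrowers in low-earning years.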

A second possibility is that while most workers realize that automation could one day put their current jobs at risk—more than 7 in 10 Americans worry about this, according to a 2017 Pew Center survey—they may have little faith that once they obtain new skills they will be able to use them in jobs paying reasonable wages. This fear is understandable, since even labor market experts cannot predict with great precision what kinds of jobs will be most in demand 10 or 20 years out. Nonetheless, politicians can reduce workers’ angst in several ways.

They should remind voters of what over two hundred years of economic progress demonstrates: by making goods and services cheaper, automation enables consumers to spend more on other things, which creates jobs elsewhere in the economy. That is one reason why, despite the current angst about automation, the U.S. economy has been operating at or close to full employment in recent years, and why employers’ main complaint is that they can’t find enough qualified workers.

More specifically, looking out well into the future, with machines and software taking on more chores, it is likely that consumers will increase their demand for services with a personal touch—such as repairs and advice of all kinds—along with more health care and entertainment. Society’s challenge is to equip as many people who are willing and able with both the technical and interpersonal skills to fill these jobs, and to do so on an ongoing basis, as labor market needs continue to change.

Lifetime loan accounts and government-matched tax deferred savings accounts for retraining, which the federal government can and should provide, act like career or lifetime income insurance for workers who make the effort to keep their skills current to meet employers’ demands. The value of that insurance would be enhanced if the federal government also required educational institutions of all types to provide current data on their performance—namely their placement rates, by field of study, for those who complete their training—so that workers can make informed choices of the benefits and risks of acquiring specific skills or embarking on new career paths.

Another reason why so few politicians, at least so far, have made lifetime work transitions a higher priority is that many of the people who would benefit from government help have little or no faith in new government programs of any type—especially when proposed by the Democratic party, whose values they no longer identify with or trust. This skepticism of government explains why so many voters supporting President Trump who have benefited from the Affordable Care Act (“Obamacare”) nonetheless opposed it from the very beginning (although once the Trump Administration began dismantling portions of the ACA, its popularity has risen). In other words, in our increasingly politically polarized society where your party affiliation identifies your “tribe,” it’s not the message that counts in politics, but the messenger.

Which leads to my fourth explanation: that precisely because of intense political polarization, elected officials in neither party have an incentive to agree to policies that appeal to both parties’ members, but if adopted, could all too easily be portrayed as a “win” for the other side. When Barack Obama was president, Republicans largely opposed anything he proposed, and except for the Trump Administration’s stance on trade, Democrats have returned the favor. Party members seem especially reluctant to support anything the other party suggests when one party controls both the executive and legislative branches, since in that event, that party gets all the credit (or blame) for any bills that become law. But even if in the future we again have divided government, which theoretically would allow both parties to claim credit for any legislation that passes, until and unless leaders of both parties recognize and act on the public’s worry about automation, workers will justifiably be anxious, if not for their jobs, then for their future wages.

The risks ahead

If past trends in real estate prices continue—which already are discouraging Americans from moving to metro areas (mostly on each coast) with hot labor markets but also increasingly pricey housing—the need for retraining workers where they live will intensify. The alternative is increasing voter anger, making political compromises at the federal level even more difficult, and a worsening of an already bad opioid epidemic in small town and rural America.

These adverse consequences can be minimized if automation trends are gradual, and if the national economy and hence, most labor markets, remain strong. In that event, workers can relatively quickly find new jobs in most areas of the country, either because they want to switch jobs or they must do so.

But if history has demonstrated anything, it is that economies don’t stay strong forever; they experience ups and downs. When the next recession or slower growth hits, many U.S. workers will find it more difficult to adjust to the disruptive impacts of automation than they do now. If our elected officials do not act when the proverbial economic sun is shining, the political and economic clouds will be much darker when that sun is gone.

“The impact of artificial intelligence on international trade” (Brookings, December 13, 2018)

By Joshua P. Meltzer

Artificial intelligence (AI) stands to have a transformative impact on international trade. Already, specific applications in areas such as data analytics and translation services are reducing barriers to trade. At the same time, there are challenges in the development of AI that international trade rules could address, such as improving global access to data to train AI systems. The following provides an overview of some of the key AI opportunities for trade as well as those areas where trade rules can help support AI development.

What do we mean by artificial intelligence?

Before proceeding to the impact of AI on trade, it is important to clarify what is meant by AI. Specifically, there is a key difference between narrow AI—such as translation services, chatbots, and autonomous vehicles—and general AI—“self-learning systems that can learn from experience with humanlike breadth and surpass human performance on all tasks.” General AI raises broader existential concerns, such as how to align the goals of such a system with our own to prevent catastrophic outcomes,1 but general AI remains a technology still to be developed in the distant future.

To understand the potential significance of narrow AI for trade, it is also important to briefly consider its core parts. In particular, narrow AI is based on machine learning, which uses large amounts of data and powerful algorithms to develop increasingly robust predictions about the future.2 The data used for machine learning can be either supervised—data with associated facts, such as labels—or unsupervised—raw data that requires the identification of patterns without prior prompting.3 Machine learning also encompasses reinforcement learning—where machine-learning algorithms actively choose and even generate their own training data.

Another key development underpinning narrow AI is the Deep Neural Network (DNN). DNNs are composed of layers of nonlinear transformation node functions, where the output of each layer becomes an input to the next layer in the network. Each layer is highly modular, making it possible to take a layer optimized for one type of data (say, images) and to combine it with other layers for other types of data (e.g., text).4 Deep Neural Networks combine multiple machine learning tasks—creating what is referred to as general purpose machine learning (GPML)—which allows AI to effectively live on top of the types of chaotic data that humans are able to digest, such as video, audio, and text.
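The layer-stacking idea can be sketched in a few lines of toy code (a tiny fully connected network with tanh nonlinearities; the weights below are arbitrary illustrative numbers, and real DNNs have millions of parameters and specialized layer types):

```python
import math

def dense_layer(inputs, weights, biases):
    """One DNN layer: a nonlinear transform of weighted sums of the inputs."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, layers):
    """Feed the output of each layer in as the input to the next layer."""
    for weights, biases in layers:
        x = dense_layer(x, weights, biases)
    return x

# Two stacked layers: 3 inputs -> 2 hidden units -> 1 output.
layers = [
    ([[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]], [0.0, 0.1]),  # layer 1
    ([[1.0, -1.0]], [0.0]),                               # layer 2
]
print(forward([1.0, 2.0, 3.0], layers))  # a single-element list in (-1, 1)
```

The modularity described above shows up directly here: any layer whose output dimension matches the next layer’s input dimension can be swapped in or stacked on.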

Narrow AI also draws on specific tools such as out-of-sample validation to validate models, stochastic gradient descent for training models on streams of data, and graphics processing units (GPUs), originally developed for video games but well-suited to the massive parallel computations needed to train DNNs.5
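Two of these tools can be sketched together on synthetic data (all numbers invented here): stochastic gradient descent fits a simple model one example at a time, and a held-back, out-of-sample set validates the fitted model.

```python
import random

random.seed(0)
TRUE_W, TRUE_B = 2.0, -1.0
# A "stream" of noisy observations of y = 2x - 1.
stream = [(x, TRUE_W * x + TRUE_B + random.gauss(0, 0.1))
          for x in [random.uniform(-1, 1) for _ in range(500)]]
train, held_out = stream[:400], stream[400:]   # out-of-sample split

w, b, lr = 0.0, 0.0, 0.1
for x, y in train:                 # SGD: one example at a time
    err = (w * x + b) - y
    w -= lr * err * x              # gradient step on squared error
    b -= lr * err

# Out-of-sample validation: error on data the model never trained on.
mse = sum(((w * x + b) - y) ** 2 for x, y in held_out) / len(held_out)
print(w, b, mse)   # w near 2, b near -1, small validation error
```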

Applying these developments in a real-world context requires large data sets to initialize AI systems. Here, quantity matters because machine learning needs to incorporate as many past outcomes as possible into its predictions. This means that access to the tails of the data, the rarer and more irregular observations, also matters.
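A quick calculation with invented numbers illustrates why quantity matters for the tails: the probability that a data set of n observations contains at least one instance of a rare case with per-observation probability p is 1 - (1 - p)^n.

```python
p = 0.0001   # a hypothetical one-in-ten-thousand edge case

for n in (1_000, 10_000, 100_000):
    coverage = 1 - (1 - p) ** n   # chance the data set captures the rare case
    print(f"n = {n:>7,}: chance of seeing the rare case = {coverage:.3f}")
```

Only the largest data set reliably captures the edge case, which is why restricting the pool of available data disproportionately hurts performance on rare inputs.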

The impact of AI on economic growth and international trade

The development of AI will affect international trade in a number of ways. One is the macroeconomic impact of AI and the related trade effects. For instance, should AI increase productivity growth, this will increase economic growth and provide new opportunities for international trade. Current rates of productivity growth globally are low, and there are various suggested causes.6 One reason for low productivity growth that is particularly relevant for understanding the potential link with AI is that it takes time for an economy to incorporate and make effective use of new technologies, particularly complex ones with economy-wide impacts such as AI.7 This includes the time needed to build a large enough capital stock to have an aggregate effect and to make the complementary investments required to take full advantage of AI, including access to skilled people and business practices.8

AI will also affect the type and quality of economic growth, with international trade implications. For instance, AI is likely to accelerate the transition towards services economies. This is a corollary to concerns about the impact of AI and jobs, as AI is likely to expand automation and speed up job losses for low-skill, blue-collar workers in manufacturing fields.9 In parallel, AI will also emphasize particular worker skills as it is used to add value to production and products. This should lead to further expansion of the share of services in production as well as international trade.

Specific AI applications to international trade

AI and global value chains

AI is already having an impact on the development and management of global value chains (GVCs). It can be used to improve predictions of future trends, such as changes in consumer demand, and to better manage risk along the supply chain. By allowing businesses to better manage complex and dispersed production units, such tools improve the overall efficiency of GVCs. For example, businesses can use AI to improve warehouse management and demand prediction and to increase the accuracy of just-in-time manufacturing and delivery. Robotics can increase productivity and efficiency in packing and inventory inspection. Businesses can also use AI to improve the physical inspection and maintenance of assets along supply chains.

The development of GVCs will be affected by the broader trend toward using AI to develop smart manufacturing. For instance, the German-led conception of Industry 4.0 is based on sensors, IoT, and cyber-physical systems that connect machines, material, supplies, and customers. This will include factory-level capacity for machine prediction and self-maintenance, complete communications between companies along the supply chain, and the ability to manufacture according to customer specifications, even in small or single batches.10 Such developments could strengthen and extend GVCs. For example, smart manufacturing, with its emphasis on connectivity, could open up GVCs to more specific participation by specialized service suppliers in areas such as R&D, design, robotics, and data analytics tailored to discrete tasks in the supply chain.

Yet AI could also create trends toward on-shoring of production. Broader automation opportunities as well as scaling of 3D printing could reduce the need for extended supply chains, particularly those that rely on large pools of low-cost labor. The result could accelerate the process Dani Rodrik describes as “premature deindustrialization” in developing countries.11

Trade using digital platforms

Another area where AI is already being deployed is on digital platforms such as eBay. For small business in particular, digital platforms have provided unprecedented opportunity to go global. In the U.S., for instance, 97 percent of small businesses on eBay export, compared to just 4 percent of offline peers.12

AI-developed translation services are further enabling digital platforms as drivers of international trade. For example, as a result of eBay’s machine translation service, eBay-based exports to Spanish-speaking Latin America increased by 17.5 percent (value increased by 13.1 percent).13 To put this growth into context, a 10 percent reduction in distance between countries is correlated with increased trade revenue of 3.51 percent—so a 13.1 percent increase in revenue from eBay’s machine translation is equivalent to reducing the distance between countries by over 35 percent.
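The equivalence quoted above is simple arithmetic on the two figures from the study:

```python
# A 10 percent cut in distance is correlated with a 3.51 percent rise in
# trade revenue, so a 13.1 percent revenue gain maps to this equivalent
# distance reduction (in percent).

revenue_gain = 13.1              # eBay machine-translation revenue effect
gain_per_10pct_distance = 3.51   # revenue gain per 10 percent distance cut

equivalent_distance_cut = revenue_gain / gain_per_10pct_distance * 10
print(f"equivalent distance reduction: {equivalent_distance_cut:.1f} percent")
```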

Trade negotiations

AI also has the potential to improve outcomes from international trade negotiations. For instance, AI could be used to better analyze the economic trajectories of each negotiating partner under different assumptions: growth pathways under various forms of trade liberalization, how outcomes change in a multiplayer scenario where trade barriers come down at different rates, and the likely trade response from countries not party to the negotiation. Brazil has already established an Intelligent Tech & Trade Initiative that includes using AI to improve trade negotiations.14

Developing trade rules to support AI

In addition to the impact of AI on international trade patterns, trade rules as reflected in the WTO and in FTAs can also play a role in supporting the development of AI. The following outlines some key areas where trade rules will matter for AI development and deployment globally.

The importance of data for AI

Trade commitments on the free flow of data globally, as reflected in the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) and, more recently, in the United States-Mexico-Canada Agreement (USMCA) will support the development of AI. As outlined above, access to large amounts of data is needed to train AI systems. Building AI systems that can respond to diverse challenges and different population groups requires access to global data. To take a relatively straightforward example, the development of speech-recognition AI requires access to large amounts of speech data that can capture local slang and intonation as well as less commonly used words. As a result, data localization measures that restrict the ability to move data globally will reduce the capacity to develop tailored AI capacities.

Moreover, the development and use of AI builds on other digital technologies, the key ones being cloud computing, big data, and the internet-of-things.15 These digital technologies also rely on cross-border data flows. This means that data localization measures that restrict global data transfers will hit AI directly, by providing less training data, and indirectly, by undercutting the building blocks on which AI is built.

Restrictions on cross-border data flows are likely to have the greatest impact on smaller (often developing) countries. The U.S. and China, with large internal populations, are less reliant on access to data from third countries to develop AI capabilities tailored to their domestic markets. However, to develop AI in areas such as health care, countries with smaller populations will require access to global health data. Limits on access to such data will reduce the accuracy and relevance of AI systems for developing countries.

Improving access to data for AI development will also require governments, as repositories of large data sets, to make such data publicly available. Here, USMCA makes progress: the Parties recognize the importance of access to government information for economic and social development and commit, to the extent possible, to making government data accessible in machine-readable and open formats.16

Privacy and AI

Commitments to cross-border data flows in trade agreements are balanced with scope for governments to restrict data flows in order to achieve legitimate public policy objectives. Maintaining domestic privacy standards is a key reason that governments are currently reducing the flow of personal data across borders. For instance, the EU General Data Protection Regulation (GDPR) prohibits transfers of personal data to countries that have not been deemed “adequate” by the European Commission.

GDPR limits on the processing and use of personal data could adversely impact the development of AI capabilities. For instance, under GDPR, personal data can only be used for the purpose for which it was collected, which means that personal data collected as part of a transaction cannot then be used to train AI to improve how the service is delivered. The GDPR requirement that companies minimize the amount of data they collect and how long the data is kept is also at odds with developing data sets for training AI.

On the other hand, strong privacy protection will be required if people are to trust living their lives online, including providing immense amounts of personal data for AI learning. From this perspective, there is no inherent trade-off between developing AI and protecting privacy. The key challenge will be to design privacy rules that do not create unnecessary restrictions on access to and use of data. Trade rules can assist by including commitments requiring data-importing nations to protect the privacy of personal data from the data-exporting country. This could be achieved by encouraging forms of mutual recognition of privacy systems as well as by developing common regional and global privacy principles.17

Standards and AI

The incorporation of AI into industry will require the development of a range of new standards. Take autonomous vehicles, which will require various technical standards, safety standards, and new vehicle manufacturing standards. The development of different domestic standards across countries will increase costs for foreign manufacturers who have to retool in order to export. The USMCA addresses this issue with commitments that domestic standards are based on international standards, which will support interoperability and reduce barriers to developing AI globally.

Protection of source code

Requiring access to source code as a condition of investment or market access poses another challenge to the development of AI. Requiring such access was identified by the Office of the United States Trade Representative (USTR) as part of the broader issue of forced technology transfer in China.18 As AI is based on algorithms, conditioning market access on providing access to source code operates as an international trade barrier that reduces the diffusion of AI globally.

The U.S. and other countries have started to respond to this concern. In the CPTPP and USMCA, the parties have agreed not to “require the transfer of, or access to, source code of software owned by a Person of another party” as a condition for import or sale.19

Intellectual property protection and AI

The development of AI raises intellectual property (IP) issues with international trade implications. As noted, AI relies on large amounts of input data. Training data will often need to be copied and edited for use. Depending on how the data is collected, this could involve unauthorized copying of thousands of protected works. In the U.S., it may be that relying on the “transformative” or “non-expressive” fair use exception to copyright protection will provide legal cover for such use of data.20 Fair use provides a flexible principles-based set of copyright exceptions.21 Fair use exceptions have been a significant legal underpinning in the development, and demise, of digital business models in the U.S.22 Yet, even in the U.S., whether fair use exceptions will cover some of the more complex uses of data to train AI remains to be tested.23

Furthermore, fair use exceptions or similar copyright flexibilities do not exist in many other countries. For instance, the EU uses a specific list of exceptions to copyright law that does not include text and data mining and would not seem to include AI. Australia adopts a similar approach as the EU.24 From an international trade perspective, this means that legal copying of data to develop AI in the U.S. might be deemed illegal in other countries, creating a barrier to deployment of AI in these countries.

Trade agreements have been hesitant in addressing copyright flexibilities. The CPTPP includes a recognition by the Parties of the need to achieve “an appropriate balance in its copyright and related rights systems,”25 but this goal of achieving a copyright balance was absent from the more recent USMCA.

AI and trade in goods

While much of AI development is focused on access to data, standards, and IP, access to goods will also affect AI development globally. In particular, and as noted above, GPUs are key hardware for training Deep Neural Networks. Trade in GPUs is therefore needed for the development of AI globally. This underscores the ongoing role for reducing tariffs in supporting access to the technologies needed for AI development.

By Joshua P. Meltzer