Part I

Automated driving is coming. Last month the National Highway Traffic Safety Administration released its newly expanded Federal Automated Vehicles Policy, and the race is on to bring driverless cars to our roads on a mass scale. As with many new technologies, the law will have to catch up with the issues raised by this radically new state of affairs.

“Take me to court.” Today, a lawyer might say that to a contentious adversary (hopefully not), to an Uber driver (maybe), or to a smartphone for directions (most likely). Tomorrow, lawyers may well direct similar commands to their self-driving vehicles.

Automated driving is coming—and in some respects, it’s already here. Our roads already have vehicles with automated-driving technologies, ranging from electronic stability control (mandatory since 2011) to Tesla’s “Autopilot,” which uses “autosteer” to “keep[] the car in the current lane,” can “parallel park itself,” and can engage “traffic-aware cruise control” to “intelligently determine[]” the vehicle’s behavior in that moment relative to its surroundings. Authorities have reached a general consensus that automation-assisted driving will continue along this spectrum from “assist” to “control” until vehicles are capable of full autonomy. One report estimates that 10 million vehicles with automated-driving features will be on our roads by 2020.

Americans love their cars. So during this time of transition, lawyers, judges, lawmakers, and policymakers must decide how they will react to this seismic societal shift. Should our approach be laissez faire, allowing development through common law, or should the law be shaped proactively through regulation or legislation? To best resolve these questions, decision-makers should first know the technology. As Judge Easterbrook has noted about lawyers and new technologies: “The blind are not good trailblazers.”

Automated driving: an evolution of the Internet of Things

Society is awash in the Internet of Things—smart, connected devices permitting us to perform tasks previously unimaginable—and few developments demonstrate the power of the Internet of Things more dramatically than automated driving.1 The U.S. Department of Transportation (DOT) and National Highway Traffic Safety Administration (NHTSA) recently stated that automated driving “may prove to be the greatest personal transportation revolution since the popularization of the personal automobile nearly a century ago.”

In the 1950s, futurists predicted flying cars, but the concept of fully autonomous vehicles was still beyond the scope of most imaginations until the 2000s. In 2007, six teams competed in the DARPA Urban Challenge to push the limits of vehicle autonomy, and around 2009, Google and other companies started moonshot initiatives to develop self-driving vehicles. It’s easy to imagine this technology giving new power to the blind and disabled, to the elderly, and to children who may never need drivers’ licenses.

As with many new technologies, the law must catch up with the issues raised by this brave new world: Crashes today are usually caused by people, and tomorrow they might be caused by algorithms. But while algorithms might cause some crashes, automated-driving proponents claim that the systems will still be far safer than humans, potentially cutting driving deaths in half and saving an estimated 16,000 lives annually. How society responds to these developments will shape many legal arenas, including regulations, statutes, the common law, and insurance.

The road to our driverless future is currently being paved. To paraphrase author William Gibson: Automated driving is already here—it’s just not evenly distributed. The DOT and NHTSA have stated that automated driving’s “rapid development” has included “partially and fully automated vehicles… nearing the point at which widespread deployment is feasible.” This revolution includes the largest automobile manufacturers (e.g., Audi, BMW, Ford, GM, Honda, Mercedes, Nissan, Tesla, Toyota, Volkswagen, and Volvo) and the largest software companies (e.g., Google, Apple). The promise of a multi-billion-dollar market has spawned a 21st-century space race: Tesla hired a prominent Apple software architect to lead its Autopilot Engineering division. GM invested $500 million in a joint venture with ride-sharing service Lyft to develop self-driving cars. And Uber has partnered with Carnegie Mellon University on the company’s mission to end car ownership and help create our “driverless future.”

Some automated-driving advancements have already made their way onto public roads. In October 2015, Tesla began allowing owners (not just test drivers) to enable “Autopilot” mode, which is not fully autonomous but seeks to assist drivers, whom Tesla encourages to keep their hands near the wheel and eyes on the road. GM announced that its similar “Super Cruise” system will be deployed in 2017. Toyota has introduced a suite of automated systems, including pedestrian detection and “pre-collision systems” that can apply the brakes automatically. Google’s Self-Driving Car Project has driven almost 2 million miles, many of them on public roads.

Automated driving’s early days have not been without speed bumps. February 2016 saw Google’s first automation-caused crash, a fender-bender with a bus.2 In May 2016, a driver using Tesla’s Autopilot mode became the first automated-driving fatality, after more than 130 million miles of Autopilot driving. (In comparison, conventional U.S. vehicles have one fatality every 94 million miles, and worldwide it’s every 60 million miles.) Despite these setbacks, the NHTSA is undeterred, stating that automated driving can save thousands of lives: “No one incident will derail the DOT and NHTSA from [their] mission to improve safety on roads through new life-saving technologies.”3 In March 2016, the NHTSA announced a $3.9 billion, 10-year commitment to developing automated-driving safety.

The future will arrive gradually

The conversion to automated driving will not occur overnight, but the transition will come sooner than many imagine, and progress will be steady. The shift will likely follow the path of prior advancements like the internet and smartphones: What begins as luxury quickly becomes necessity.
Gradually, assistive technologies (e.g., adaptive cruise control, crash avoidance) may shift to full autonomy. Or manufacturers might instead take Google’s approach, choosing to jump straight to full autonomy.

Adoption predictions are ambitious. One report estimates that by 2020, automated vehicles will number 10 million.4 Most engineer-respondents to IEEE and IHS studies believe that manufacturers will remove rear-view mirrors, horns, and emergency brakes by 2030—and steering wheels and gas/brake pedals by 2035. Some estimate that our fully driverless future will arrive around 2050.

In the near term, the automated-driving transition will likely be gradual, moving along a spectrum from fully manual to partial assistance to fully automated. This gradation was standardized in 2013, when SAE International, the vehicle-engineering standards organization, defined the levels of driving automation,5 summarized here:

L0 No automation. Human drivers are fully responsible, even if enhanced by warning (e.g., check engine) or intervention systems.

L1 Driver assistance. System assists with steering or with acceleration/deceleration; the human driver performs all other driving tasks.

L2 Partial automation. System handles both steering and acceleration/deceleration in some instances; the human monitors the environment and performs all remaining driving tasks.

L3 Conditional automation. System drives and monitors in some instances; humans intervene when the system requests assistance.

L4 High automation. System drives and monitors, even absent human response — though only under certain environments and conditions.

L5 Full automation. System does everything a human driver can do — in all conditions.

In September 2016, the NHTSA—which had in 2013 created its own distinct definitions, using only four levels—adopted the SAE’s levels and definitions.

Google and Tesla are focusing, even at this early stage, on Level 5 full autonomy. Tesla’s founder, Elon Musk, has stated that by 2018, owners will be able to “summon” their cars from across the country for a driverless cross-country trip (e.g., New York to L.A.). Google is also designing its initial fleet for full autonomy (removing conventional steering wheels, accelerators, brakes, etc.), expressing concern that permitting human intervention might hinder (not help) safety:

[Based on] Google[’s] belie[f] that the SDS [self-driving system] consistently will make the optimal decisions for the SDV [Self-Driving Vehicle] occupants’ safety (as well as for pedestrians and other road users), the company expresses concern that providing human occupants of the vehicle with mechanisms to control things like steering, acceleration, braking, or turn signals, or providing human occupants with information about vehicle operation controlled entirely by the SDS, could be detrimental to safety because the human occupants could attempt to override the SDS’s decisions.6

Google supports this conclusion by pointing to its automated-driving fleet’s 1.3 million miles driven between 2009 and January 2016, during which all 17 crashes were caused by human error.

While consumer adoption could be swift, the shift to automated driving will likely be even faster for commercial driving—starting with ride-sharing, taxis, public transportation, and long-haul trucking. For example, in February 2016 a convoy of self-driving trucks made its way across Europe, signaling a potential shift in land-based shipments. Currently, labor constitutes about 75 percent of trucking costs, and regulations require human drivers to take 8-hour breaks after every 11 hours of driving. So 24-hour non-human driving could cost 75 percent less for double the productivity (a 400 percent price-performance improvement). Similar market factors will likely affect other areas of commercial driving: taxis, buses, and other public transportation.

Potential economic and safety effects

It’s no secret that driving has always been unsafe. Those old enough to remember Ralph Nader’s book Unsafe at Any Speed have seen giant leaps in vehicle safety. But even today, U.S. auto crashes still kill about 32,000 people per year (88 per day), or nearly three times the number who die in firearm homicides.7 For ages 5 to 34, crashes are the leading cause of death.8

Automakers have improved vehicle safety, but not its weakest link: humans. Federal studies reflect that over 90 percent of crashes are caused, at least in part, by human error. A 1979 federal study found that “human errors and deficiencies” caused 90-93 percent of investigated crashes; a 2001 federal study found that “a driver behavioral error caused or contributed to” 99 percent of the crashes investigated; and a 2005 study found that in causing crashes, vehicle and environmental factors were dwarfed by “human factors.” As cars become safer (e.g., anti-lock brakes, electronic stability control, side airbags, adaptive cruise control, structural design), the percentage of crashes caused by faulty vehicles and environmental factors shrinks, and the percentage of human-caused crashes increases commensurately.

Even in its infancy, automated driving has been surprisingly safe. Some studies support the argument that automated driving is currently safer than human driving, while others disagree—and most studies acknowledge that available data is limited (e.g., unreported human-driven crashes, automated driving’s comparatively low miles driven). But one thing remains certain: The technology underpinning automated driving will improve, likely exponentially.

Computers do not share human drivers’ foibles: They cannot be inebriated, they don’t text, and they don’t fall asleep. Automated-driving systems can also have super-human qualities: 360-degree vision; 100 percent alert time; constant communication with the road, traffic lights, and other cars; “sight” through fog and darkness; and universal, system-wide routing for traffic-flow optimization. Computers react faster: Humans’ reaction time is approximately 1.5 seconds, while computers’ reaction times are measured in milliseconds (and, per Moore’s Law, improving exponentially).

Of course, manufacturers and software designers also have good reason to reduce the risk of crashes. Liability concerns incentivize designers to err toward caution, slower speeds, and preventive stops. For example, in 2015 an officer stopped a Google automated vehicle for driving 10 mph below the speed limit. And a popular video purports to show an autopiloted Tesla—traveling at 45 mph on a rainy night—automatically stopping to avoid a vehicle that veered in its path.

In addition, automated-driving systems learn collectively. Human drivers draw from individual experience, often making the same mistakes as thousands made before them. But automated vehicles can automatically incorporate data from errors and driving conditions across millions of miles driven by other automated vehicles. The NHTSA director noted that whenever an automated vehicle encounters an “edge case” for which it was unprepared, “that data can be taken, analyzed, and then the lessons can be shared with more than the rest of that vehicle fleet”—indeed, “all automated vehicles.” As such, automated driving could solve the problem of the 16-year-old human driver making the same mistake made by thousands before, instead harnessing the exponential power of crowdsourcing, big data, and Moore’s Law to collectively perfect auto safety.

The automotive and insurance industries have identified six obstacles that could slow the advancement of automated driving: (1) legal liability, (2) policymakers, (3) consumer acceptance, (4) cost, (5) infrastructure, and (6) technology. The first three are the most significant, and the first two are the primary (if not exclusive) province of lawyers, judges, and government officials. As such, our profession should consider the state of the technology, determining which policies and case-law standards (if any) should shift to address this fast-moving development.

Regulations and statutes

Of course, even the safest automated-driving systems will have crashes, caused by either machines or humans, and where there are damages, questions of liability quickly follow. As such, regulation is inevitable. But one scholar cautions against requiring even more of automated systems than our laws currently require of human drivers: “I’m concerned about computer drivers, but I’m terrified about human drivers.”

The question of which entities will primarily regulate automated driving is currently unanswered: Will the applicable statutes and regulations be driven mostly by states, the federal government, or a combination of the two?

Currently, auto insurance is largely state-regulated, with each jurisdiction enacting its own statutes and regulations (roughly split between tort states and no-fault states). But that element of “fault” is premised on the primary cause of crashes: humans. Because automated-driving systems will shift that analysis toward manufacturer liability—and because vehicles are products used across state lines—automated-driving regulation may well shift to the federal level. Manufacturers also may welcome federal regulation, which could avoid the costly necessity of complying with the laws of 51 jurisdictions. But the plaintiffs’ bar may well encourage state regulation, permitting injured clients to benefit from a more favorable state’s laws.

Does automated driving require enabling legislation?

While some states have enacted automated-driving legislation, an untested question is whether automated driving is legally permitted even without legislation. A 2011 New York Times op-ed voiced a common perception: “The driverless car is illegal in all 50 states.”11 But legal scholar and automated-driving expert Professor Bryant Walker Smith opines that absent regulation, automated driving may well be permitted under the legal principle that “everything which is not forbidden is allowed.”12 Professor Smith argues that three legal regimes (the 1949 Geneva Convention on Road Traffic, NHTSA regulations, and all states’ vehicle codes) lead to the conclusion that automated driving is likely permitted, even today.

1949 Geneva Convention on Road Traffic: The U.S. adheres to the 1949 Geneva Convention on Road Traffic, which requires signatory countries to enforce certain tenets, including Article 8, which requires that “drivers shall at all times be able to control their vehicles.” This standard is likely satisfied if the system permits human intervention—as under Levels 1–4. But fully autonomous Level 5 may well run afoul of the Geneva provision, unless manufacturers enable human override. An open question: Are fully autonomous Level 5 systems properly considered “drivers”—especially if they perform as well or better than humans?

DOT/NHTSA Regulation: Beyond the Geneva Convention, federal regulators have encouraged automated driving’s adoption. The DOT and NHTSA have released guidance on automated vehicles—most significantly the 2013 Preliminary Statement of Policy Concerning Automated Vehicles (2013 NHTSA Statement),13 which was updated in January 2016 by a new Statement of Policy Concerning Automated Vehicles (2016 NHTSA Policy).14 These regulatory statements are the primary indicators of the NHTSA’s nascent policy, which the agency has said will be nimble and responsive.

Even absent automated-driving regulation, the Safety Act and the DOT/NHTSA today permit manufacturers to demonstrate compliance with the Federal Motor Vehicle Safety Standards (FMVSS) through self-certification. This has been the process for prior safety advancements (e.g., airbags), but rapidly evolving automated-driving technology can make even basic assumptions quickly obsolete, requiring regulatory revision.

In February 2014, the federal agencies approved related vehicle-to-vehicle (V2V) communications systems, through which vehicles sense each other’s speed and location to help avoid crashes. Similar benefits come from vehicle-to-infrastructure (V2I) sensors, which communicate with smart roadways, traffic lights, and other infrastructure. The DOT estimates that V2V will prevent 76 percent of crashes. A provision of 2015’s Fixing America’s Surface Transportation (FAST) Act established a Federal Highway Administration fund to accelerate deployment of automated-driving technologies.15 Legislators are also considering privacy: The Autonomous Vehicle Privacy Protection Act of 2015 would have directed a study of the DOT’s readiness to address the challenges accompanying automated driving, including challenges to consumer privacy protections. But that bill failed.

In February 2016, the NHTSA responded to a letter from Google requesting interpretation of several Federal Motor Vehicle Safety Standards (FMVSS) provisions.16 One issue, for example, involved the requirement in current regulations that vehicles have a foot-controlled brake, which Google’s fully autonomous design lacks (since Google believes that human controls would actually hinder safety). The NHTSA responded that Google’s design would not comply with the existing regulation’s plain language; that in light of “changed circumstances,” the agency will consider rulemaking to modify the regulations; and that “Google may wish to consider petitioning the agency for an exemption from these provisions.” Industry innovates, and regulators respond.

In regulating the automated-driving space, the NHTSA has said it will move quickly. Responding to public suggestions that the agency may be moving too rapidly, agency head Mark Rosekind noted that the NHTSA has traditionally allowed automakers to introduce new safety improvements (e.g., airbags, electronic stability control, cruise control) quickly, and then after some time, the NHTSA implements new safety standards.17 But where manufacturers’ implementation of earlier safety technology (e.g., airbags) took years, Rosekind noted that improving automated-driving software can take mere minutes—through over-the-air updates. Rosekind, noting that automated driving has the potential to save many lives and that “wait[ing] for perfect” would mean “waiting for a very, very long time,” asked, “How many lives might we be losing while we wait?” Through a “nimble and flexible” approach, the NHTSA seeks to “keep pace with technological innovation” and “provide certainty to manufacturers and developers.”

On September 20, 2016, the NHTSA released its newly expanded Federal Automated Vehicles Policy, which serves as “guidance rather than… rulemaking.”18 The agency also promised to “speed the delivery of an initial regulatory framework.” Rosekind has stated that NHTSA regulations will be brief: “[W]e are writing the Declaration of Independence, not the Constitution.” The newly released policy, most of which became effective immediately, provides flexible measures that give the industry some latitude to deploy automated-driving systems. The policy includes several elements:

Model state policies, including recommendations to states for legislation.

Clarified existing rules and how they apply to automated driving.

New rules and authorities that the NHTSA may consider seeking in the future.

In that document, the federal agencies committed to streamlining their review process: issuing “simple” regulatory interpretations within 60 days and deciding exemption requests within six months. And the DOT will use feedback from the public and from stakeholders, as well as new data, to update the policy “within the next year.”

On the same day the policy was released, President Obama published an op-ed on the importance of automated driving in improving safety. Industry’s initial response to the federal policy has been largely favorable.

State regulations: While the federal government continues its regulatory process, some individual jurisdictions have passed statutes and regulations to define which automated-driving activities are permitted and which are prohibited. In 2011, Nevada became the first state to expressly permit automated driving. Since then, many states have considered similar legislation, but few bills have passed. Stanford University’s Center for Internet and Society tracks state legislative and regulatory action surrounding automated driving,19 and its map is provided here.

Most states have either enacted or considered laws on automated driving. Several jurisdictions (including California, the District of Columbia, Florida, Michigan, Nevada, North Dakota, Tennessee, and Utah) have passed legislation. Broadly, the existing state enactments define autonomous driving, sometimes expressly permitting it and sometimes authorizing studies. The states that have expressly regulated automated driving have permitted it (e.g., California’s statement that it “presently does not prohibit or specifically regulate the operation of autonomous vehicles”), and to date, no state has barred automation.

Manufacturers seeking to comply with a 51-jurisdiction patchwork may have some difficulty, given the potential to run afoul of the most-restrictive jurisdictions. For example, a New York driving statute (enacted in the 1970s) requires a driver to keep “at least one hand” on the steering wheel. (A 2016 bill sought to update the law to permit “driving technology… to perform the steering function,” but the bill died in committee.) And the District of Columbia prohibits distracted driving and mandates a driver’s “full time and attention.”20 Fully autonomous Level 5 vehicles—such as Google’s prototypes, which currently lack traditional steering wheels and are indifferent to driver attention—may not comply with those statutes’ plain language.

Federal policymakers have encouraged states to establish “a consistent national framework rather than a patchwork of incompatible laws.” The NHTSA and DOT seek to avoid a future where manufacturers would need to develop “50 different versions” of automated vehicles. To that end, federal regulators have provided states with a Model State Policy intended to guide state lawmakers.21

Minnesota and the Midwest

Minnesota legislators have introduced several bills related to automated driving, but all died in committee. In the 2013 session, the first such bill (HF 1580) sought to “[d]irect[] the commissioner of transportation to ‘evaluate policies and develop a proposal for legislation governing regulation of autonomous vehicles’ by January 31, 2014.”22 But that bill died after its introduction and first reading.

The second attempt came in the 2016 session, when companion bills (SF 2569, HF 3325) sought to create an “autonomous vehicles task force” to “serve mobility needs of people with disabilities.”23 In April 2016, committees in both houses recommended passage as amended, but both bills died before a vote.24

If past attempts are any indication, the Minnesota Legislature appears willing to consider studying automated driving through administrative policy, proposal, or task force, but Minnesota lawmakers do not appear to have an appetite to quickly regulate the issue.

In March 2016, North Dakota became the first neighboring state to act legislatively on automated driving by passing “an act to provide for a legislative management study of automated motor vehicles.” The law asks legislative management to “consider studying what, if any, current laws need to be changed to accommodate the introduction or testing of automated motor vehicles.”25 The direction relates to full automation (Level 5), and it also seeks to study potential effects on safety and wellbeing: “reduc[ing] traffic fatalities and crashes,” “reducing or eliminating driver error,” “reduc[ing] congestion,” and “improv[ing] fuel economy.”26 The bill seeks recommendations and any draft enabling legislation in time for the Legislature’s 2018 session.

In March 2013, Wisconsin legislators introduced a bill (SB 80) to create rules explicitly authorizing “the operation of autonomous vehicles on the highways” if they met particular requirements.27 For example, the bill would have required human presence and intervention capability, a licensed human driver, $5 million in liability-insurance coverage, and other technical requirements. The bill failed to pass the Senate.

In 2014, South Dakota legislators considered a bill to “authorize the testing of autonomous cars on the [state’s] highways.” Senate Bill 139 would have permitted manufacturers to test automated vehicles in the state.28 After three weeks, the bill was tabled and died in committee.

Conclusion (Part One)

Some commentators have suggested that the best approach may be for the NHTSA to create a comprehensive federal regulatory regime (prescribing basic safety criteria) and to permit states to fill any gaps in the regulatory regime through specific statutes and judicial decisions. That method would allow manufacturers to deploy potentially life-saving technology more quickly, while still permitting state legislatures to analyze locally specific use cases and the judiciary to develop the law for less-common edge cases.

Of course, legislation and regulation are only two factors in the legal equation. Courts and litigators must also determine whether and how automated driving will affect liability. As computer algorithms decide vehicle actions, will auto-crash cases shift from pure human negligence to product-liability analyses (such as manufacturer negligence, manufacturing defects, design defects, failure to warn, breach of warranty, and misrepresentation)? Will any liability be limited by end-user license agreements (EULAs)? How will insurers adapt to automated driving’s promised safety benefits? How will law enforcement adjust to automated vehicles’ driving-law adherence, which is likely to limit the use of pretextual stops? If automated vehicles log all routes, how will that affect privacy? And when manufacturers’ algorithms—created months and years in advance—can make driving decisions that will essentially define who lives and who dies in certain circumstances, how should manufacturers make those algorithm-creation choices?

DAMIEN A. RIEHL is a technology lawyer with a background in legal software design. After clerking for the chief judges of the Minnesota Court of Appeals and U.S. District Court in Minnesota, he litigated for a decade with Robins Kaplan. Damien practices in tech law, data privacy (CIPP/US), copyright, trademarks, business torts, breaches of contract, antitrust, financial litigation, and appeals.

Notes

1 This emerging technology has attracted several names: “autonomous vehicles,” “driverless cars,” and “self-driving cars” are a few. Because this article discusses the range of capabilities—from manual vehicles to partial assistance to full autonomy—it will follow the Stanford Center for Internet and Society’s convention by using the term “automated driving.”