“Move fast and break things.” This motto illustrates Silicon Valley’s approach to product launches: a company will typically move a “minimum viable product” (or MVP) to market, testing and adjusting as necessary along the way. It is emblematic of the tech community’s drive to improve every facet of human life. Now that drive is leading Silicon Valley’s software developers and thinkers to turn to artificial intelligence (AI).

This technology—which at this stage in its development is often called machine learning or predictive analytics—already touches the life of every consumer in today’s digital marketplace.

But Silicon Valley’s approach can be at odds with the interests of regulators and governments, who prefer to move slowly, test extensively, and ensure the products used by Canadians are safe and reliable.

The collision of the Valley’s product ethos and regulators can be seen in ride-sharing giant Uber’s rollout. Challenging the regulatory dominance of the taxi industry, Uber’s "guerrilla" approach to product launches has led to a regulatory patchwork and significant legal uncertainty.

But as artificial intelligence technologies increasingly move from cloud- or handheld-based apps into the physical world—typified by Uber’s drive to develop fully self-driving cars—the risk increases, as people can be hurt or killed by malfunctioning physical machines. This greater degree of risk demands that regulators confront the thorny legal and regulatory issues AI technology poses.

These risks are present across other applications of AI technology, but self-driving cars provide an instructive example.

Product vs Individual Liability

The present model of automobile liability is individual-focused, as cars remain largely under the control of individual drivers.

As self-driving car technology improves and cars become increasingly autonomous, liability is likely to shift from the individual driver to the manufacturer under a product liability model.

This shift obviously poses commercial risks to manufacturers, but it also raises distinct legal questions:

Manufacturer Duty of Care – Manufacturers of consumer products have a duty to the end-user of their products to ensure that the products are safe to use and that risks are clearly indicated, so that consumers are able to make a fully informed decision.

The manufacturer duty of care is well understood for common consumer goods, but self-driving cars pose challenges. They are both a manufactured product and a platform for software, some of which may be developed by third parties. Both of these providers are likely to be found to have a duty of care if and when self-driving car accident cases come before the courts.

Government Regulatory Bodies – Car safety is regulated by Transport Canada, and consumer product safety is regulated by Health Canada. Both of these organizations are well equipped to gauge the physical safety of self-driving automobiles. But in addition to these agencies, the government must invest in the technologically savvy talent necessary to gauge the safety of the machine-learning algorithms and data-processing techniques used to guide self-driving cars safely to their destination.

Data Privacy and Ownership

A self-driving car is a software platform in constant communication with central servers as well as the self-driving cars around it. Imagine a two-ton mobile version of your smartphone in constant communication with the world around it. In order to ensure safe passage to its destination, the car is equipped with multiple sensors, and this sensory data is sent back to central servers. Combined with data from other cars, this allows machine-learning algorithms to safely plot a course to the rider’s destination.
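To make the privacy stakes concrete, the kind of telemetry described above can be sketched as a simple record. The schema below is a purely illustrative assumption, not any manufacturer’s actual data format; every field name is hypothetical. Even so, it shows how a handful of routine samples reveals a rider’s movements.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TelemetryRecord:
    """Hypothetical sketch of one telemetry sample a self-driving car
    might upload to central servers. Not a real manufacturer schema."""
    vehicle_id: str       # persistent identifier linking every trip to one car
    timestamp: datetime   # when the sample was taken
    latitude: float       # GPS position of the vehicle
    longitude: float
    speed_kmh: float      # current speed, km/h
    destination: str      # rider's requested destination

# Two samples from a single morning trip (illustrative values only).
trip = [
    TelemetryRecord("VIN-123", datetime(2017, 5, 1, 8, 0), 45.42, -75.69, 0.0, "downtown office"),
    TelemetryRecord("VIN-123", datetime(2017, 5, 1, 8, 25), 45.41, -75.70, 40.0, "downtown office"),
]

# The first sample of a recurring weekday trip is likely the rider's
# home address -- one example of the "detailed picture" such data paints.
probable_home = (trip[0].latitude, trip[0].longitude)
```

Each individual field looks innocuous, but combined across trips and days, records like these identify home, workplace, and habits—which is why the consent questions below are hard to answer.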

This requires a tremendous amount of data, much of which may be sensitive or at least qualifies as “personal information” under existing privacy laws, and the totality of which paints a detailed picture of a passenger’s daily activities. There are not only serious privacy implications to the collection and retention of so much data, but also ownership and profitability questions. Much like web traffic data is used today, self-driving car usage history could be used to develop very detailed user profiles, which would be very useful to firms looking to target advertisements or sell individualized products.

Collection of web traffic data is legal under the principle of implied consent: by using a website, a user implicitly gives permission for their data to be collected and monetized for reasonable purposes disclosed by the website.

But when self-driving cars become common, courts and regulators will have to answer the question: how can a user meaningfully understand and consent to the collection and use of the extensive data generated by self-driving cars? And if a user wants to stop sharing travel data, manufacturers will have to answer whether that choice can be respected without interfering with the operation of the vehicle itself.

A Viable Product

Silicon Valley’s habit of sending MVPs to market allows its companies to innovate and create at a speed unmatched by any other industry.

But innovation must be matched with a regulatory apparatus that will enable this technology to be applied safely. Not only will this protect consumers, but it will also enhance Canada’s growing reputation as an incubator of new technology.

This technology is coming. Regulators must act to get ahead of it, anticipating what structures must be in place to ensure we are able to take advantage of the opportunities AI affords while minimizing the risks. To “move fast and break things” may work in cyberspace, but not on Canadian streets.