The EU Should Not Regulate Artificial Intelligence As A Separate Technology

A report from the recent Computers, Privacy and Data Protection conference suggested that the European Commission is “considering the possibility of legislating for Artificial Intelligence.” Karolina Mojzesowicz, deputy head of the Commission’s Data Protection Unit, said that the Commission is “assessing whether national and EU frameworks are fit for purpose for the new challenges.” The Commission is exploring, for instance, whether to specify “how big a margin of error is acceptable in automated decisions and machine learning.”
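
What an acceptable “margin of error” would mean in operational terms is itself an open question. One plausible reading, sketched below in Python under assumptions of my own (the 5% ceiling and the audit figures are hypothetical), is a statistical check that a system’s observed error rate, accounting for sampling uncertainty, stays below a regulator-set ceiling.

```python
# Sketch: does a system's error rate stay acceptably below a regulatory
# ceiling? Uses a normal-approximation 95% upper confidence bound, so the
# check accounts for sampling uncertainty in a finite audit. The 5%
# ceiling and the audit figures are hypothetical.

import math

def error_rate_upper_bound(errors, n, z=1.96):
    """95% upper confidence bound on the true error rate (normal approximation)."""
    p = errors / n
    return p + z * math.sqrt(p * (1 - p) / n)

CEILING = 0.05            # hypothetical regulator-set margin of error
errors, n = 31, 1000      # hypothetical audit: 31 mistakes in 1,000 decisions

upper = error_rate_upper_bound(errors, n)
print(f"observed {errors / n:.1%}, 95% upper bound {upper:.1%}")
print("within margin" if upper <= CEILING else "exceeds margin")
```

Even this simple framing forces the context question: a ceiling tolerable for song recommendations would be unthinkable for medical triage.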

The vehicle for this regulatory effort seems to be the draft Ethics Guidelines developed by the Commission’s High-Level Expert Group on Artificial Intelligence. The comment period on this draft closed on February 1, and a final report is due in March. This report will be enormously influential in setting the tone and direction for global artificial intelligence (AI) regulation.

The EU is not alone in aiming to establish guidelines for AI. The OECD is working on a similar project. Meetings of the G7 and G20 have featured proposals from Japan and others for an international code of conduct for the development and use of AI.

Industry has been involved in this effort as well. The trade associations SIIA and ITI have both released proposed guidelines for AI. Individual companies, including Google, Microsoft and Facebook, have developed their own public standards for their use of AI systems.

What is the proper role of these guidelines for AI? The only sensible approach is not to regulate AI as such, but to see where AI raises new issues in particular contexts that require new domain-specific rules.

The best way to think about general guidelines for AI is as assessment tools: checklists for developers and users to ensure that their systems stay within appropriate ethical guardrails. SIIA’s principles, for instance, call for companies to evaluate whether their data and data analytics practices are consistent with universal human rights, whether they tend to promote human welfare, and whether they help people develop and maintain virtuous character traits. Companies should also have policies and procedures in place to provide for transparency, explainability and fairness. In particular, where AI systems are used for consequential decisions that affect important aspects of a person’s life, companies should conduct disparate impact analyses to ensure that these uses do not have an unjustified, disproportionate adverse impact on vulnerable populations.
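
To make the checklist idea concrete, here is a minimal Python sketch of one common form of disparate impact analysis, the “four-fifths rule” drawn from U.S. employment law; the groups, decisions and threshold below are hypothetical, and real analyses involve far more statistical and legal care.

```python
# Minimal sketch of a disparate impact check using the "four-fifths rule":
# a selection rate for any group below 80% of the most-favored group's
# rate is commonly treated as evidence of adverse impact.
# All data here are hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Return per-group rates and groups whose rate ratio falls below threshold."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    flagged = {g: r / best for g, r in rates.items() if r / best < threshold}
    return rates, flagged

# Hypothetical loan decisions: (applicant group, approved?)
data = ([("A", True)] * 80 + [("A", False)] * 20
        + [("B", True)] * 55 + [("B", False)] * 45)
rates, flagged = disparate_impact(data)
print(rates)    # {'A': 0.8, 'B': 0.55}
print(flagged)  # {'B': 0.6875} -- below 0.8, so this use warrants review
```

A ratio below the threshold is not proof of unlawful discrimination, but it flags uses that call for exactly the kind of justification the principles demand.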

But these general principles don’t really tell companies how to behave in concrete situations. Take transparency, for instance. Revealing the source code of a program that evaluates public school teachers for a job or assesses crime scene evidence for indications of a DNA match might be needed to protect constitutional rights to due process. But revealing the IRS algorithm for detecting fraud in tax returns, or disclosing machine learning programs that turn intelligence material into actionable insights for national security officials, would just allow gaming of the system. A general rule requiring source code disclosure would be right in some cases and a disaster in others.

In a similar way, a general rule that machine learning programs must be explainable or go unused would be just the right thing in some cases and a serious misstep in others. It is probably a good idea for nurses to take preventive action based on the discovery that vital signs in premature babies become unusually stable twenty-four hours before the onset of a life-threatening fever, even though no one has a good understanding of the causal mechanism involved. But a correlation that emerges inexplicably from a machine learning program, suggesting that asthma patients with pneumonia are at lower risk of death than other patients, should not be used to make hospitalization decisions: in the well-known case, the lower risk reflected the aggressive care such patients already received on admission, not any safety in sending them home.

Sometimes it is not yet clear whether intelligible explanations should be required. For instance, U.S. regulations require credit card companies to provide the reasons for an adverse action. But this might not be possible for a new machine learning program incorporating thousands of non-traditional variables interacting in complex, inscrutable ways. If the new algorithm is vastly more accurate than the older ones and can detect creditworthy people who were completely missed by the less accurate scores, should this increase in the availability of credit offset the explanation requirement?
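
To see why the requirement is easy to meet for traditional scores and hard for the new ones, here is a hedged sketch of one common way adverse-action reasons are generated from a linear scorecard: report the features that pulled the applicant’s score furthest below a baseline profile. The feature names, weights and values are invented for illustration.

```python
# Sketch of adverse-action "reason codes" for a simple linear scorecard:
# rank the features that pulled this applicant's score furthest below a
# baseline profile. All feature names, weights and values are invented.

WEIGHTS = {                      # score points per unit of each feature
    "payment_history":    40.0,  # higher is better
    "utilization":       -35.0,  # higher utilization lowers the score
    "account_age_years":   2.0,
    "recent_inquiries":   -8.0,
}
BASELINE = {"payment_history": 0.9, "utilization": 0.3,
            "account_age_years": 8.0, "recent_inquiries": 1.0}

def top_reasons(applicant, n=2):
    """Features whose score contribution falls furthest below the baseline's."""
    deltas = {f: w * (applicant[f] - BASELINE[f]) for f, w in WEIGHTS.items()}
    negative = sorted((d, f) for f, d in deltas.items() if d < 0)
    return [f for _, f in negative[:n]]

applicant = {"payment_history": 0.7, "utilization": 0.8,
             "account_age_years": 2.0, "recent_inquiries": 4.0}
print(top_reasons(applicant))  # ['recent_inquiries', 'utilization']
```

For a linear scorecard this decomposition falls out for free; for a model with thousands of nonlinear, interacting variables it does not, and that gap is exactly the tension the explanation requirement creates.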

Regulators are already tackling these issues. The Consumer Financial Protection Bureau is assessing the regulatory challenges created by alternative data and alternative analytic techniques. The Food and Drug Administration is assessing what rules should apply to machine learning clinical decision software. The Defense Department’s current policy for the deployment of autonomous weapons systems requires that humans maintain “meaningful” control, and ethical discussions of fully autonomous systems focus on whether they could ever successfully mimic the battlefield decisions of a reasonable commander.

The key issues under discussion are all sector-specific.

Who should be liable for accidents and injuries involving autonomous cars? Should we follow the credit card model of assigning responsibility to some actors and protecting others, or should we allow market players to sort it out through contracts?

Should the anti-discrimination laws be reformed to clarify that targeted ads involving ethnic affiliation are not permitted in areas of eligibility determination like employment, housing, credit and insurance? Should these laws be adjusted to protect vulnerable groups in new areas of concern such as the delivery of search results?

The good news is that the EU’s High-Level Expert Group seems to recognize that “the specific context needs to be taken into account” in applying the guidelines. The group concludes:

“While the Guidelines’ scope covers AI applications in general, it should be borne in mind that different situations raise different challenges. AI systems recommending songs to citizens do not raise the same sensitivities as AI systems recommending a critical medical treatment…It is, therefore, explicitly acknowledged that a tailored approach is needed given AI’s context-specificity.”

As a result, the group intends to produce specific recommendations for four particular use cases: healthcare, autonomous driving, insurance premiums and profiling in law enforcement.

This focus on domain-specificity dovetails with the conclusion of the 2016 report of Stanford’s One Hundred Year Study on Artificial Intelligence:

“…attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains. Instead, policymakers should recognize that to varying degrees and over time, various industries will need distinct, appropriate, regulations that touch on software built using AI or incorporating AI in some way.”

This focus on context is the right direction, but decision-makers have yet to confirm it. As EU policymakers move toward AI regulation, they should make clear that general AI guidelines are elements to be weighed for appropriateness in each context, not requirements to be implemented uniformly across all contexts.