How Good Are Google's New AI Ethics Principles?

Today Google released a new set of AI ethics principles, which were prompted, at least in part, by the controversy over the company's work on the US military's Project Maven. This post contains some quick preliminary analysis on the strengths and weaknesses of those principles.

On many fronts, the principles are well thought-out and promising. With some caveats, and recognizing that the proof will be in their application by Google, we recommend that other tech companies consider adopting similar guidelines for their AI work. But we do also have some concerns that we recommend Google and other tech companies address:

One concern is that Google hasn't committed to the kind of independent, informed, and transparent review that would be ideal for ensuring the principles are consistently and correctly applied. Without that, the public will have to rely on the company's internal, secret processes to ensure that these guidelines are followed. That's a common (and generally unfortunate) pattern in corporate governance and social accountability, but there's an argument that AI ethics is so important, and the stakes so high, that there should be independent review as well, with at least some public accountability.

Another concern is that by relying on “widely accepted principles of international law and human rights” to define the purposes that Google will not pursue, the company is potentially sidestepping some harder questions. It is not at all settled — at least in terms of international agreements and similar law — how many key international law and human rights principles should be applied to various AI technologies and applications. This lack of clarity is one of the key reasons that we and others have called on companies like Google to think hard about their role in developing and deploying AI technologies, especially in military contexts. Google and other companies developing and deploying AI need not only to follow “widely accepted principles” but to take the lead in articulating where, how, and why their work is consistent with principles of international law and human rights.

On surveillance, however, we do have some specifics for Google and other companies to follow. Google has so far committed only to not assisting AI surveillance projects that violate internationally accepted norms. We want to hear clearly that those norms include the Necessary and Proportionate Principles, and not merely the prevailing practice of many countries spying on the citizens of almost every other country. In fact, in light of this practice, it would be better if Google avoided building AI-assisted surveillance systems altogether.

We hope Google will consider addressing these issues with their principles. There may be other issues that come to light with further analysis. But beyond that, we think this is a good first step by the company, one that, with some improvements on these fronts, could become an excellent model for AI ethics guidelines across the tech industry. And we're ready to hear from the rest of that industry that they too are stepping up.
