A short op-ed in the Courant from Bloomberg View, by Cathy O’Neil, describes the risks of the artificial intelligence algorithms used by the likes of Facebook and Google: Controlling A Pervasive Use Of Algorithms Critical <read>

Humans are gradually coming to recognize the vast influence that artificial intelligence will have on society. What we need to think about more, though, is how to hold it accountable to the people whose lives it will change…

In short, people are being kept in the dark about how widely artificial intelligence is used, the extent to which it actually affects them and the ways in which it may be flawed. That’s unacceptable. At the very least, some basic information should be made publicly available:

Scale: Whose data is collected, how, and why? How reliable are those data? What are the known flaws and omissions?

Impact: How does the algorithm process the data? How are the results of its decisions used?

Accuracy: How often does the algorithm make mistakes — say, by wrongly identifying people as criminals or failing to identify them as criminals? What is the breakdown of errors by race and gender?

Such accountability is particularly important for government entities that have the power to restrict our liberty. If their processes are opaque and unaccountable, we risk handing our rights to a flawed machine.

Our concerns with algorithms should extend beyond artificial intelligence. The same concerns apply to any algorithm (computer code or manual process), such as voting machines. We have no access to the code in our AccuVote-OS optical scanners. Yet we know from studies such as the California Top-To-Bottom Review, Hacking Democracy’s Hursti Hack, and studies by UConn that the system is vulnerable to attack. We do not know, and cannot know for sure, whether the software running on a particular AccuVote-OS and its memory card is correct and accurate.

The best defense is a comprehensive, sufficient post-election audit.

Yet Connecticut now audits using the UConn Audit Station, which runs undisclosed software. Even if we could inspect and test that software, there would still be no assurance that our tests had not missed something, that a hardware error had not occurred, or that the software had not been compromised. As we wrote in a recent op-ed in the CTMirror, and as covered in the most recent Citizen Audit Report, the only defense is a manual audit of that Audit Station, every time it is used.

Covering the items in the Courant Op-Ed:

Scale: Whose data is collected, how, and why? How reliable are those data? What are the known flaws and omissions?
Our voting data are collected. They are only as reliable as the optical scanner as deployed, and the system is vulnerable to attack in a variety of ways.

Impact: How does the algorithm process the data? How are the results of its decisions used?
It is supposed to be a straightforward interpretation of marks into vote counts, provided the scanner is properly configured and programmed. The results are used to determine who leads our democracy AND whether our votes actually determine that.

Accuracy: How often does the algorithm make mistakes — say, by wrongly identifying people as criminals or failing to identify them as criminals? What is the breakdown of errors by race and gender?
Here the question becomes: how often does it inaccurately count votes? Did it count them accurately enough in this election? If we had trustworthy audits of both the voting machines and the Audit Station, we could answer these questions.
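To make the accuracy question concrete, here is a minimal sketch of what a manual audit measures: machine tallies compared against hand counts of the same ballots, with the hand count treated as ground truth. The precinct names and vote totals below are entirely hypothetical, for illustration only.

```python
# Illustrative sketch (hypothetical data): estimating a scanner's error rate
# by comparing machine tallies to hand counts from a manual audit.

# Hypothetical per-precinct results for one candidate: (machine count, hand count).
audited_precincts = {
    "Precinct 1": (412, 412),
    "Precinct 2": (305, 307),   # discrepancy of 2 votes
    "Precinct 3": (198, 198),
    "Precinct 4": (521, 520),   # discrepancy of 1 vote
}

total_machine = sum(m for m, h in audited_precincts.values())
total_hand = sum(h for m, h in audited_precincts.values())
total_discrepancy = sum(abs(m - h) for m, h in audited_precincts.values())

# Error rate relative to the hand count, which the audit treats as ground truth.
error_rate = total_discrepancy / total_hand

print(f"Machine total: {total_machine}, hand total: {total_hand}")
print(f"Absolute discrepancy: {total_discrepancy} votes ({error_rate:.2%})")
```

Whether a measured discrepancy is "accurate enough" then depends on the margin of the race, which is exactly the judgment a trustworthy audit is supposed to inform.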