As government turns to AI to deliver services to residents more effectively, concerns are mounting about privacy and accountability.

This month’s issue of Government Technology focuses on government’s imperative to operate in a citizen-centric manner. Even back-of-house technical staff, whose work may be a few steps removed from front-line service delivery, are making sure systems can support and facilitate interactions between citizens and government.

In recent years, interest has grown in the potential of new technologies like artificial intelligence (AI) to help power more effective government. While it was a relative newcomer to public-sector conversations just a few years ago (at least by that name), chief information officers and their tech-dependent colleagues now largely agree on its promise.

A precise definition of AI (and of what constitutes its use in government) is somewhat elusive. But at a basic level, AI-powered technology can learn patterns, make assumptions based on what it learns, and act on those assumptions without human intervention.

But at its heart, someone is programming those core algorithms, and to many, that’s the scary part. There is, of course, the potential for a bad actor to corrupt that code for dark purposes: stealing identities en masse, committing fraud or outright theft. But as many critics of tools like predictive policing have argued, bias, even unintentional bias, can creep in and affect outcomes in ways that are at odds with societal values.

New York City Councilmember James Vacca introduced a piece of legislation last December to address some of these concerns. “As we advance into the 21st century, we must ensure our government is not ‘black boxed,’” he said. Vacca’s aim with the bill was “not to prevent city agencies from taking advantage of cutting-edge tools, but to ensure that when they do, they remain accountable to the public,” he said when introducing the bill to the Technology Committee.

New York City uses AI in a number of ways: to make bail determinations, to place students in public schools, and to locate public safety resources like firehouses and patrol officers. Enacted in January, Vacca’s measure provides for a task force, chosen by Mayor Bill de Blasio, with a broad cross-section of interests represented, including those affected by city policies reliant on AI. The group will evaluate the city’s use of the technology to ensure it is administered fairly and fully understood. Among the groups hoping to influence the panel’s membership is the New York branch of the American Civil Liberties Union, which lobbied for the bill.

On the other side of the table during legislative discussions were the New York Police Department, worried that overly cumbersome disclosure requirements would hamper its tactical position, and technology companies protective of the proprietary code in their software.

Rutgers law professor Ellen Goodman suggests that transparency need not mean revealing source code; instead, contracts could be written to ensure the public interest is served, and independent audits could be scheduled regularly.

“Proprietary interests should not tromp on public access,” she told Government Technology.

Opinions vary as to how valid industry’s concerns are. But a compromise that serves the public good and instills trust in tech-driven decision-making will help ensure the public sector can realize the benefits of artificial intelligence and other emerging technologies.

New York City’s handling of the issue could give other jurisdictions a model to emulate.