Building Ethical AI in Chicago and Beyond

Aug 2, 2018
| Adam J. Hecktman and Soren Spicknall

Fairness, accountability, transparency, and ethics—they’re up to us.

Artificial intelligence, like any major emerging technology, stands to transform how we approach many civic, social, and business challenges. While we’re excited about the opportunities that AI brings to people and its ability to help us achieve more in Chicago and beyond, it’s also important to us that we build upon an ethical foundation.

As representatives of one of the most respected computing companies in the world, it’s our job to instill ethical AI practices in the social impact communities we work with. Our Microsoft Cities Team in Chicago focuses on helping nonprofits, governments, and community activists understand new tech tools that could help advance their missions.

Other local projects are working toward this progress, like the Chicago Data Collaborative, a partnership of research and advocacy organizations formed to holistically understand policing data practices—including the use of AI—and to examine whether such practices are discriminatory. And at events like Re-Imagined Cities, Chi Hack Night, and the Chicago City Data Users Group, our city’s technologists regularly convene to discuss algorithmically aided decision-making and ethical use of public datasets, as well as ethical and equitable tech practices among governments and private entities more broadly.


Technical progress is exciting, but it brings with it unique issues that must be carefully considered and acted upon to ensure that the benefits of innovation are shared by all. When AI is applied without accounting for social biases and flawed input data, it can produce results that are themselves biased or otherwise flawed.

In our unique role as a convener and contributor in Chicago’s tech research community, we’re able to advise AI projects as domain experts, directly ensuring that Microsoft’s own published AI solutions are used in measured, responsible ways.

We can share not only our research and technology, but also the best practices that lead to innovative and ethical implementations of many aspects of AI. We also provide grants and frameworks to help apply those techniques to areas of social impact, including AI for Earth, focused on environmental issues, and AI for Accessibility, focused on using AI to build a more inclusive world.

And Microsoft is not alone here. There needs to be some consensus about the values and principles that should govern AI, followed by best practices to implement them. Early principles are emerging to ensure that the AI systems that impact our lives, our government, and our work are fair; reliable and safe; private and secure; inclusive; and transparent and accountable.

The Microsoft Research team FATE (Fairness, Accountability, Transparency, and Ethics in AI) is dedicated to developing AI techniques that are both innovative and ethical, while drawing on the deeper context surrounding these issues from sociology, history, and science and technology studies. Its researchers work on projects and publish papers that address the need for transparency, accountability, and fairness in AI and machine learning systems across a broad array of disciplines.

We’re also working together in a group that includes Amazon, Apple, Facebook, Google, IBM, and academic leaders called The Partnership on AI. As we “invent together,” we can address the important ethical questions more quickly and consistently.

We all must ask: How can we best use AI to assist users and offer enhanced insights, while avoiding exposing them to discrimination in health, housing, law enforcement, and employment? How can we balance the need for efficiency and exploration with fairness and sensitivity to users? And as we move toward relying on intelligent agents in our everyday lives, how do we ensure that individuals and communities can trust these systems?