Good Systems

Ethics, Values, and A.I.

“Technology is neither good nor bad; nor is it neutral.” This is the first law of technology, articulated by historian Melvin Kranzberg in 1985. It means that whether a technology is good or bad depends on the values we bring to judging it. At the same time, because the people who design technology inevitably value some things more than others, their values shape what they build.

We use that technology — and, increasingly, artificial intelligence — to entertain us, communicate, get places faster, make predictions, swipe left or right, protect our homes, and solve complex problems quickly and easily. In short, A.I. is changing the way we do everything because it’s everywhere — from dating apps to the most advanced military technology.

But because technology is never neutral, it has the capacity to harm us in ways we might not intend or predict. The difficulty for us, as scientists and engineers, is that A.I. is also genuinely helpful.

It can do many things faster, better, and more easily than humans can, and humans reap the rewards. But how will A.I. affect society, work, and the way we interact with one another? We need to answer these questions proactively rather than waiting for harm to happen and reacting after it’s already too late.

In the words of Michael Crichton’s “Jurassic Park” mathematician, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think about if they should.”

Can We Ensure That A.I. Protects Humanity Rather Than Destroying It?

That’s the question we have to ask now: Should we? How can we ensure that advances in A.I. are beneficial to humanity, not detrimental? How can we develop technology that makes life better for all of us, not just some? What unintended consequences are we overlooking or ignoring by developing technology that has the power to be manipulated and misused, from undermining elections to exacerbating racial inequality?

Our goal is to provide a way for prosocial values to drive the design of artificial intelligence in autonomous and semi-autonomous technologies so that those systems both protect and improve society.


YEAR ZERO

This marks our development year as a future UT grand challenge. Our focus during “year zero” of this eight-year research project is to develop what we’re calling the Good Systems Values Networks Method and to grow our network of colleagues, partners, and supporters.

Our proposed Values Networks Method connects Value Sensitive Design (VSD) on the microscale with Socio-Technical Interaction Networks (STINs) on the macroscale to forge a novel research approach that will:

- Build collaborations among humanists, social scientists, and technologists, who will combine conceptual, empirical, and technical investigations
- Connect collaborations into broader values networks that consider the diverse values that should (or should not) be built into A.I. systems
- Highlight values that individuals consider important in life, with an emphasis on prosocial values like democracy, fairness, transparency, and agency

Meet the Team

Research groups around the world are asking similar questions about A.I., but those groups have traditionally been rooted in computer science. Our grand challenge team is composed of computer scientists as well as natural and social scientists, technologists, ethicists, engineers, health and transportation experts, and more.



News & Events

Designing Good AI+Human Hybrid Systems to Curb Misinformation

At this hackathon we will tackle the problem: what novel systems might we design to help curb the rise of online misinformation and disinformation? We will also expand the scope from purely automated AI systems to hybrid AI+human systems, and consider what it means for such hybrid systems to be “good.” One common hybrid pattern is sketched below.
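As one illustration of what a hybrid AI+human system can look like, here is a minimal Python sketch of a common triage pattern: a model scores content, confident predictions are handled automatically, and uncertain items are routed to human reviewers. The Post class, triage function, thresholds, and data are all hypothetical, not the hackathon’s actual design.

```python
# A minimal sketch (not the hackathon's design) of one hybrid
# AI+human pattern: a classifier scores posts, and only
# low-confidence items are routed to human reviewers.
# All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    model_score: float  # P(misinformation) from some upstream classifier

def triage(posts, auto_flag=0.95, auto_pass=0.05):
    """Split posts into auto-handled queues and a human-review queue."""
    flagged, passed, review = [], [], []
    for p in posts:
        if p.model_score >= auto_flag:
            flagged.append(p)   # model is confident: flag automatically
        elif p.model_score <= auto_pass:
            passed.append(p)    # model is confident: leave alone
        else:
            review.append(p)    # uncertain: defer to a human reviewer
    return flagged, passed, review

posts = [
    Post("miracle cure, share now!", 0.97),
    Post("city council meets Tuesday", 0.02),
    Post("study claims X causes Y", 0.55),
]
flagged, passed, review = triage(posts)
print(len(flagged), len(passed), len(review))  # -> 1 1 1
```

The design choice the sketch encodes is where to draw the thresholds: widening the middle band sends more items to humans (slower, but more accountable), while narrowing it automates more decisions. Deciding what counts as “good” placement of that band is exactly the kind of values question the hackathon raises.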

The Ethical Operating System: How Not to Regret the Things You Build

Join Sam Woolley from the Institute for the Future at UT’s Digital Media Speaker Series this month.

The current wave of computational propaganda has taken the world by surprise. Technology firms, policymakers, journalists, and the general public are scrambling to respond to the societal threats posed by disinformation and politically motivated trolling. This talk outlines one method for responding to these issues: the Ethical Operating System (ethicalOS.org), a toolkit for anticipating future uses of technology.

Jane McGonigal and Samuel Woolley, with support from Omidyar Network, constructed this guide to help a wide variety of groups think about how to design technology with democracy and human rights in mind. The toolkit has been used by major companies in Silicon Valley, by legislators at the state and federal level, and by students in Stanford’s design school and introductory computer science courses. It’s time, however, to put it into the hands of the U.S. public so that they can help in the fight against disinformation and manipulative technology.

Can Trump’s New Initiative Make American AI Great Again?

“When developing policy guidelines and regulation, it is critically important to separate these various technologies and applications so as to deal with them individually. Any effort to consider all of AI as one unit when developing policies and initiatives would be very misguided.” — Peter Stone, Department of Computer Science professor and Good Systems founding researcher

Newsletter: Hacking Open Data to Improve Our Cities

In October, Good Systems team members hosted a Good Systems 311 Calls and 500 City Hackathon. UT students used A.I. and machine learning methods to analyze large-scale datasets of 311 calls, which log resident complaints, concerns, and non-emergency problems. This is valuable information that, when examined in aggregate, can help inform local decision-makers and city planners.
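To give a flavor of the kind of aggregate analysis described above, here is a minimal Python sketch (using pandas) that counts a toy table of 311 calls by neighborhood and complaint type. The column names and data are hypothetical, and the hackathon’s actual datasets and methods were far richer; this only shows the basic aggregation step.

```python
# A minimal sketch, not the hackathon's code: aggregating a toy
# 311-call table to surface which complaint types dominate in each
# neighborhood. Column names and data are hypothetical.

import pandas as pd

calls = pd.DataFrame({
    "neighborhood": ["East", "East", "East", "North", "North"],
    "complaint":    ["pothole", "pothole", "noise", "stray animal", "pothole"],
})

# Count calls per (neighborhood, complaint) pair, then rank within
# each neighborhood so planners can see the top issues at a glance.
counts = (
    calls.groupby(["neighborhood", "complaint"])
         .size()
         .rename("n_calls")
         .reset_index()
         .sort_values(["neighborhood", "n_calls"], ascending=[True, False])
)

print(counts)
```

Even a simple count like this hints at how aggregated 311 data can direct city resources; the machine learning methods the students applied build on the same idea at much larger scale.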

Newsletter: Good Systems Update and Hackathon

The Good Systems development year is off to a running start! Thank you to everyone who has made our first two events a success. You have shown drive and initiative in this new UT Grand Challenge from Bridging Barriers, and we hope to keep that enthusiasm going throughout the year.

New Bridging Barriers Themes in Development Announced

Vice President for Research Dan Jaffe introduces Planet Texas 2050 and new projects in development that could one day become grand challenges at The University of Texas at Austin.

Please Join Us on This Journey

2018 marks the beginning of our development year. This is the time when we grow our team, ask hard questions (then, even harder ones), and decide how to design our work over the next decade. We welcome and value your feedback, your thoughts, and your contributions.

Please follow us on social media, meet us at our events, and let us know why you think this is a grand challenge.

We need your support

We’re committed to harnessing the talent, passion, and expertise of researchers from all disciplines at UT to tackle some of humanity’s biggest challenges.