Josh Pachter ’18, from Lexington, Massachusetts, is graduating as a computer science major with a minor in philosophy. After an internship with Amazon, he was hired before returning to school and will begin work this September as a software development engineer. (University of Rochester photo / Steve Dow)

Making their mark: This is one in a series of profiles celebrating members of Rochester’s graduating class of 2018.

Josh Pachter ’18 knew he wanted to study engineering in college, but he also wanted the freedom to explore other fields.

“A lot of engineering schools pigeonhole you,” he says. “I didn’t feel that way when I came to Rochester because of its open curriculum. I became a computer science major with a philosophy minor, and I was able to integrate my fields of study.”

That was illustrated this year, when Pachter took part in the Senior Scholars Research Program. In a project involving self-driving cars, Pachter addressed some timely, practical—and philosophical—questions: Can we train machines to act ethically? And if so, how?

“It’s not a conventional research project done in a lab,” says the Lexington, Massachusetts, native, who graduates Sunday. “It’s a combination of philosophy and computer science—lots of literature review and philosophizing.”

Pachter’s advisor, Hayley Clatterbuck, an assistant professor of philosophy, says Pachter was able to apply his deep knowledge of computer science and philosophy to synthesize complex theories in both fields and generate “fascinating” results.

“His project truly embodied the promise and necessity of interdisciplinary work,” she says. “If we use machine learning to train autonomous vehicles, which machine-learning approaches should we use and on which data should we train them? Josh examined various cutting-edge machine-learning processes to determine which problems they are most apt to solve. Then, he considered what kind of problem morality is, a surprisingly complicated topic that raises many important questions.”

As Pachter found, machine learning systems can inherit bias from the humans who program them—humans who may hold differing moral frameworks. Examples of such bias include hugging the side of the road too closely or choosing to run over one group of people rather than another because of some arbitrary factor.

What’s important to one may be less important to another, and the results could be catastrophic.

“If our phone doesn’t work the way we want, we’re inconvenienced, but we’re probably not going to die,” Pachter says. “The stakes are far higher with autonomous vehicles.”

Pachter’s idea is to create ethical machines through a process similar to how we raise children.

“If we expect autonomous cars to drive better and safer than we do, we should provide the groundwork,” he says. “We can provide some fundamental moral truths and through a training process, the machine will ultimately learn to make good higher-level decisions without the need for bad input from its parents—humans—who are actually bad drivers.”

The best example? Don’t run into people.

At the end of the training, Pachter says simulations must be run to make sure the system makes decisions that are “socially permissible and can be accepted as morally legitimate by potential users.”

“Since we haven’t told the machine how to behave explicitly,” he adds, “this testing is crucial to ensure that the rules it taught itself align well with our values.”

Pachter’s research experience includes a summer at Harvard University’s Wyss Institute for Biologically Inspired Engineering, where he worked on a soft robotic glove aimed at restoring the ability to grip for stroke survivors and patients with muscular dystrophy. He also worked in the University of Rochester’s Human Computer Interaction Lab on a virtual conversation coach that provides live feedback on how we are speaking.

Last summer, Pachter landed an internship with Amazon at its Seattle headquarters. He was hired before returning to school and will begin work this September as a software development engineer.