If a driverless car is hurtling towards a pedestrian and has the option of swerving out of the way and killing the passenger, what should it do?

What if there are two passengers and only one pedestrian? What if the pedestrian is a child? It’s a twist on the Philosophy 101 trolley problem, but it’s a dilemma that driverless cars may one day encounter.

In an attempt to create a moral framework for these decisions, MIT researchers set up a site called the Moral Machine, where people could decide who lives or dies in hypothetical driverless car accident scenarios. In partnership with researchers at Carnegie Mellon University, the MIT team then used the resulting data to build an artificial intelligence that could learn from these collective judgments and make similar ethical decisions.
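
One common way to learn from pairwise "who should be spared" votes is a Bradley-Terry-style preference model: fit a weight for each feature of an outcome so that the model reproduces the crowd's choices, then score new scenarios with those weights. The sketch below is a minimal illustration of that general technique, not the researchers' actual system; the features and vote data are entirely hypothetical.

```python
import numpy as np

# Each outcome is described by hypothetical features, e.g.
# [passengers spared, pedestrians spared, children spared].
# Each vote is (features_A, features_B, chose_A).
votes = [
    (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), 0),  # voter spared the pedestrian
    (np.array([2.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), 1),  # two passengers over one pedestrian
    (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0]), 0),  # a child tips the scale
]

def fit_weights(votes, lr=0.1, epochs=500):
    """Fit w so that P(A preferred over B) = sigmoid(w . (xA - xB))."""
    w = np.zeros(len(votes[0][0]))
    for _ in range(epochs):
        for xa, xb, chose_a in votes:
            diff = xa - xb
            p = 1.0 / (1.0 + np.exp(-w @ diff))  # predicted P(choose A)
            w += lr * (chose_a - p) * diff       # gradient ascent on log-likelihood
    return w

def decide(w, outcome_a, outcome_b):
    """At decision time, pick the outcome the learned utility scores higher."""
    return "A" if w @ outcome_a > w @ outcome_b else "B"

w = fit_weights(votes)
print("learned feature weights:", w)
print("decision:", decide(w, np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0])))
```

In a model like this, the ethics live entirely in the learned weights, which is exactly why the question of whose votes train the system matters.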

But is crowdsourcing morality the best way to build ethical guidelines for driverless cars? Or is this an example of the tyranny of the majority? How should we code the morality of driverless cars?