As we begin to embed ethical principles into machines and software that can make decisions, we need to examine the use cases involved. The ethical codes needed in a powerful and dangerous machine that can easily cause life-threatening damage are quite different from those needed in a software agent that makes grocery-purchasing choices based on the environmental philosophy of its operator.

In computers, we use ROM (Read-Only Memory) firmware to store the critical bootstrap programs that get the system up and running and establish basic operating routines, similar to human autonomic functions like breathing and pumping blood. These routines are designed to be tamper-resistant, if not tamper-proof, and are not easy to change.

We’ll need to adapt the same kind of thinking to create different levels of guidelines or principles that underlie ethical analysis for different classes of scenarios. We consider moral codes to be the basic ethical principles that apply to nearly all scenarios and are perhaps inviolable. But even core morals like “don’t kill, don’t harm, don’t lie, don’t steal” can be challenged by extreme situations.

The need for different levels may become apparent first in autonomous vehicles. It’s obvious that they need basic codes to prevent accidents and minimize damage to both life and property. But decisions about how to balance fuel efficiency against urgency might be tweakable by the human passenger.
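The layered structure described above could be sketched in software as a two-tier policy: a frozen core of rules, analogous to ROM firmware, plus a mutable layer of operator preferences. This is only an illustrative sketch; all names (EthicsPolicy, CORE_RULES, the preference keys) are hypothetical, not drawn from any real system.

```python
from types import MappingProxyType

# Core rules are wrapped in a read-only view, so ordinary code
# cannot modify them after startup (like ROM firmware).
CORE_RULES = MappingProxyType({
    "avoid_harm_to_people": True,
    "avoid_property_damage": True,
})

class EthicsPolicy:
    """Two-tier policy: immutable core rules + tunable preferences."""

    def __init__(self, preferences=None):
        # Tunable, non-critical trade-offs (e.g. fuel vs. urgency)
        # that the operator may adjust.
        self.preferences = {"fuel_weight": 0.5, "urgency_weight": 0.5}
        if preferences:
            self.set_preferences(**preferences)

    def set_preferences(self, **kwargs):
        # Only existing preference keys may change; core rules
        # are not reachable through this interface at all.
        for key, value in kwargs.items():
            if key not in self.preferences:
                raise KeyError(f"{key!r} is not a tunable preference")
            self.preferences[key] = value

    def allows(self, action):
        # An action is a dict of predicted consequences; violating
        # any core rule vetoes it outright, regardless of preferences.
        if action.get("harms_person") and CORE_RULES["avoid_harm_to_people"]:
            return False
        if action.get("damages_property") and CORE_RULES["avoid_property_damage"]:
            return False
        return True

# A passenger may raise urgency_weight, but cannot unset a core rule.
policy = EthicsPolicy({"urgency_weight": 0.8})
print(policy.allows({"harms_person": True}))   # False: core rule vetoes
print(policy.allows({"harms_person": False}))  # True: preferences decide the rest
```

The key design choice is that the two tiers have different mutability: preferences flow through a checked setter, while the core rules live in a read-only mapping that raises an error on any attempted write.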