Ethics of Autonomous Technology

Are Machines Ethical?

Does the question of whether machines are ethical even make sense? Can a machine have ethics? From phones to cars to home-automation devices, as our technology becomes increasingly “smart” and capable of managing many of our daily tasks for us, these questions become more relevant.

Humans are clearly capable of ethical behavior: each of our actions is performed within the context of what we consider safe, fair, just, and ethical. But as new forms of technology take over these tasks, do they (or even can they) exhibit the same level of ethical awareness that we demonstrate ourselves?

The Trolley Problem

Several decades ago, philosophers proposed a thought experiment known as the trolley problem. It has become extremely relevant to computer scientists and legislators as practical self-driving vehicles have become a reality.

The Trolley Problem

You witness a runaway trolley (i.e., a streetcar) barreling down the street. In its path, you notice five people stuck on the tracks and unable to move out of the way before they are struck by the trolley.

Fortunately, you have time to pull a lever that will redirect the trolley onto a parallel track where there is only one person in its path.

You have two options:
  1. Do nothing and allow the five people to be killed.
  2. Pull the lever and allow the one person to be killed.
Which choice is the correct choice?

Most people tend to agree that pulling the lever is the better choice since it lessens the loss of life from five victims to only one. However, an alternate scenario that also results in only one loss of life is not quite so clear-cut. Instead of a lever and a parallel track, consider the following variation:

The Fat Man Variation

Again, there is a runaway trolley bearing down on five inevitable victims.

This time, however, you notice a very fat man standing next to the tracks. You quickly conclude that shoving him into the path of the speeding trolley will derail and stop it, killing the fat man but saving the five people farther down the track.

You have two options:
  1. Do nothing and allow the five people to be killed.
  2. Push the fat man onto the track, causing him to be killed.
Now which choice is the correct choice?

How is this scenario different from the first? And why do so many people say they could not sacrifice the fat man when they were perfectly willing to sacrifice the lone person on the parallel track? The dilemma shows just how difficult it can be to identify the right choice in a given scenario.

In these hypothetical thought experiments, it may be easy to dismiss the ethical challenges because the situations are not real and the victims are only imaginary. But with the advent of autonomous vehicles, the potential victims are very real, and the programmers who design the algorithms that dictate how a car chooses between options must confront these issues directly. The code they write might literally mean the difference between life and death.
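To make this concrete, consider a purely illustrative sketch of how an ethical stance can end up embedded in code. Everything here is invented for illustration (the `Maneuver` type, the casualty estimates, the selection rule); no real vehicle system is this simple, and real planners must also satisfy legal and safety constraints. A naive "minimize expected harm" rule might look like:

```python
# Hypothetical sketch only: a utilitarian "minimize expected harm" chooser.
# All names and numbers are invented; real planners are far more complex.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_casualties: float  # estimated loss of life if this maneuver is chosen

def choose_maneuver(options):
    """Pick the maneuver with the fewest expected casualties.

    This single rule silently commits the vehicle to one ethical stance
    (pure harm minimization), which the trolley problem shows is contested.
    """
    return min(options, key=lambda m: m.expected_casualties)

# The classic trolley scenario, expressed as two options:
options = [
    Maneuver("stay_on_course", expected_casualties=5.0),
    Maneuver("divert_to_side_track", expected_casualties=1.0),
]
print(choose_maneuver(options).name)  # chooses "divert_to_side_track"
```

Note how one `min` call encodes the entire ethical policy: the fat-man variation is precisely the case where many people reject the answer this code would give.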

Whose Fault Is an Accident?

Accidents happen. Nobody wants, expects, or intends them, but they happen nonetheless. In the case of human-driven vehicles, years of legal precedent allow us to assign liability to the driver(s) responsible. It is usually quite straightforward: if car A causes an accident with car B, then the driver of car A is clearly at fault.

However, with autonomous vehicles, this question becomes a bit more nebulous. If self-driving car A causes an accident, who is the driver of record? The car itself? The owner of the car that caused the accident? The passenger(s) of the car who instructed the car to drive them through that intersection? The carmaker that designed and sold the car as a safe vehicle? The team of programmers whose code made the decision that led to the accident? Or is the responsibility shared between some number of these parties? And if so, which parties and in which proportions?

These are not easy questions to answer, and laws regarding autonomous vehicles (and other autonomous devices) have not kept pace with the technology. Inevitably, the necessary laws will be written and many of these questions will be resolved, as they have been with other technological advances of the past. But for now, things are anything but clear.

But this issue does highlight a fascinating complication of any technological advancement: it affects the full spectrum of people involved in applying the new technology. At the coding and design end of the spectrum, programmers and engineers need to be well versed in ethics and the law, especially as they attempt to model such behavior in the decision-making algorithms of their devices. At the opposite end, lawmakers and legal authorities need a solid understanding of the technology, its capabilities, and its limitations in order to write sound and enforceable legislation.


As legislators draft new laws and automakers develop new autonomous vehicle technologies, both groups often rely upon a consensus of expert opinions with regard to the ethics and responsibilities of these new innovations. They use these agreed-upon sets of policies and standards to shape and guide their development of new technologies and the laws that apply to them.

Draft a set of policies for autonomous vehicles that address situations like those described in the “trolley problem” or the video above.

As a class, you will develop a set of policies that should be clear and detailed enough that lawmakers could use them as the basis for drafting new legislation and carmakers could use them as guides for developing new self-driving algorithms for their automobiles.

Initially, the entire class will discuss the issues related to autonomous vehicles and brainstorm ideas and goals for your policies. Next, you will divide into small groups, with each group developing one or two policy ideas in detail. Finally, the groups will share their results with the full class, which will then discuss, revise, and select the policy ideas that will make up the class's overall recommended autonomous vehicle policy.

As you develop and discuss your new policies, make sure to generalize your definitions so that they apply to a broad variety of scenarios. It is helpful to consider examples of similar scenarios to which your policy should apply and those to which it should not. Your policies should:

  • Use clear and precise language.
  • Clearly describe the conditions/scenarios to which the policy applies and does not apply.
  • Clearly state how difficult decisions, such as in the “trolley problem,” should be made according to your policy.
  • Focus on how to determine what the “correct” decision is and how that decision might be enforced, both in law and in code.
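To see what "enforced in code" could mean, here is a minimal, purely hypothetical sketch. The rule, function name, and numbers are invented placeholders, not any real standard; the point is only that a policy written precisely enough can be expressed as a machine-checkable rule that both a regulator and a test suite could apply.

```python
# Hypothetical sketch only: one possible class policy expressed as a
# machine-checkable rule. The rule and all names are invented placeholders.
def complies_with_policy(chosen_casualties, alternative_casualties):
    """Example rule: a chosen maneuver complies only if no available
    alternative had strictly fewer expected casualties."""
    return all(chosen_casualties <= alt for alt in alternative_casualties)

# The chosen maneuver (1 expected casualty) was the best available option:
assert complies_with_policy(1, [5, 1])
# Choosing 5 expected casualties when a 1-casualty option existed violates it:
assert not complies_with_policy(5, [1, 5])
```

A rule in this form could serve both audiences the policy targets: lawmakers could reference it when assigning liability, and carmakers could run it as an automated check against their planning algorithms.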