Can Autonomous Vehicles Make the Right “Decision”?

The famous “Trolley Problem” might not really be the problem automated-vehicle ethics has to solve.

Image: kentoh/Shutterstock

The impact of artificial intelligence (AI) on society is on the rise, and many are beginning to question the “ethics” of these systems, leading companies to jump into action.

Microsoft’s president met with Pope Francis to discuss industry ethics, Amazon is helping to fund federal research into “algorithm fairness,” and Salesforce has hired an “architect of ethical AI practice” and a “chief ethical and humane use officer.”

The need for introspection in ethical decision-making in the automated-vehicle (AV) space is just as critical. With every AV failure and fatality, the public questions how the vehicle arrives at the “decisions” it makes. Industry watchdogs call for greater transparency as design teams work to apply AI, machine learning and other tools to AV software in the most appropriate manner.

As we make decisions to employ AI, it’s important to think about ethics and the potential legal impact of using AI in design.

Can machines be moral?

As drivers, we regularly encounter moral dilemmas. When you slam on the brakes to avoid hitting a pedestrian who steps in front of your vehicle, you are making a moral decision, and we expect AVs to be able to make that same decision. Although these systems are designed with the goal of avoiding all collisions, and their sensors and technology aim to react far faster than any human driver, we are still drawn to the question: how is an AV programmed, and what set of priorities does it have?

One of the “Trolley Problem” AV ethics scenarios presented by MIT’s Moral Machine project. (Image: The Moral Machine Team)

A system, three laws safe!

Isaac Asimov, in his well-known 1942 short story “Runaround,” introduced the “Three Laws of Robotics,” which state:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Many have turned to Asimov’s laws as a framework for driverless-vehicle AI ethics. This approach follows a top-down ethical theory in which the designers establish an ethical policy by selecting abstract ethical principles, and an “Ethical Governor” enforces that policy. When functioning, the agent (i.e., the vehicle) chooses among the possible actions a situation presents by applying the selected rules. Technically, the vehicle makes the decision that is minimally unethical with respect to the ethical policy created by the designers. As an example, base priorities could include the following (a minimal sketch of this rule prioritization follows the list):

  • Rule 1: Do not harm people
  • Rule 2: Do not harm animals
  • Rule 3: Do not damage self
  • Rule 4: Do not damage property
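
To make the top-down approach concrete, here is a minimal sketch, in Python, of how a strictly prioritized rule set might drive a “minimally unethical” choice among candidate maneuvers. The rule names, priority order, and candidate actions below are purely illustrative assumptions for this article, not part of any real AV software stack.

```python
# Minimal sketch of a top-down, rule-prioritized "Ethical Governor".
# All rules, weights, and candidate maneuvers here are illustrative
# assumptions, not an actual AV decision system.

from dataclasses import dataclass, field

# Lower rank = higher priority, mirroring the ordered rules above.
RULE_PRIORITY = {
    "harm_people": 1,
    "harm_animals": 2,
    "damage_self": 3,
    "damage_property": 4,
}

@dataclass
class Action:
    name: str
    # Rules this action is predicted to violate.
    violations: set = field(default_factory=set)

def ethical_cost(action: Action) -> tuple:
    """Score an action so that violating a higher-priority rule always
    outweighs any number of lower-priority violations (lexicographic order)."""
    return tuple(
        1 if rule in action.violations else 0
        for rule in sorted(RULE_PRIORITY, key=RULE_PRIORITY.get)
    )

def choose_action(candidates: list[Action]) -> Action:
    """Pick the 'minimally unethical' action under the designers' policy."""
    return min(candidates, key=ethical_cost)

if __name__ == "__main__":
    options = [
        Action("stay_in_lane_and_brake", violations={"damage_self"}),
        Action("swerve_onto_sidewalk", violations={"harm_people", "damage_property"}),
        Action("swerve_into_parked_car", violations={"damage_self", "damage_property"}),
    ]
    print(choose_action(options).name)  # -> stay_in_lane_and_brake
```

Because the scores are compared lexicographically, avoiding any number of lower-priority violations can never justify breaking a higher-priority rule, which mirrors the strict ordering of the rules above.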

These rules could be prioritized differently, or additional rules could be added, to create a perceived groundwork for an AI system’s decisions. Rule sets like these often form the foundation for the most famous discussion in autonomous vehicles.

Ethics and the hypothetical trolley

The modern version of the AV world’s now-famous “trolley problem” was created by Philippa Foot in 1967, but the origins of the ethical thought experiment are far older.

In 1905, the University of Wisconsin asked undergraduates to decide whether to sacrifice one person (your own child) by pulling a lever to divert a runaway trolley, or to let it proceed and kill five people. MIT picked up this example and established the MIT Moral Machine, an online survey in which individuals decide what actions an AV should take when it encounters different people and situations. The experiment allowed participants to decide the survival of different genders, ages, positions (in the vehicle or on the road), degrees of law-abiding behavior and even species (animals or humans).

One recent study, published in Risk Analysis: An International Journal, found that survey respondents generally preferred that the vehicle remain in its lane and attempt an emergency stop, whether or not that was a feasible option. When asked to “stay or swerve,” a majority (often as many as 85%) chose to stay. This preference can conflict with the simple “preserve life” directive in Asimov’s laws and with many of the base assumptions that the designers of an Ethical Governor would likely choose.

Another area of interest in applying the trolley problem is the influence of cultural and geographic differences on decision-making. North American participants, for example, indicated a preference for inaction and remaining in the lane, while both Asian and South American respondents showed a higher preference for having the vehicle take action.

What happens when the AV fails?

Ultimately, the ethical decisions made by engineers and designers are felt most directly when things go wrong. When a fatality occurs, investigators and victims will ask why the vehicle made the decision it did and why it failed to take an alternative course of action. Eventually, this will likely find its way into the court system in the form of a product-liability lawsuit alleging that the vehicle has a design defect in the logic and programming behind the AI software.

Design defects in the automotive sector are often evaluated on whether the foreseeable risk of harm created by a product is greater than that of a product with a reasonable alternative design (RAD). The RAD is so critical a concept that a plaintiff generally cannot prevail, even when a product’s risk exceeds its utility, unless an alternative design exists (the exception being a design so questionable that no reasonable person would ever sell the product).

Engineers and designers in this sector are then left with a challenging question: what is a RAD in an AI system that makes ethical choices affecting life and death?

Is it a design that considers cultural expectations and biases in programming the logic of decision-making? Perhaps a system designed to appreciate the cultural uniqueness of the geographic region in which it operates is sufficient to be a reasonable alternative design.

Is it a system developed by a team that was aware of the nature of gender bias in algorithms and data sets? The decisions made by a diverse team spanning age, gender and other cultural backgrounds might demonstrate a reasonable, comprehensive design. Or is the true reasonable alternative design the human mind itself?

Perhaps we ourselves are the yardstick against which to compare ethical decisions.

Experience suggests we should seek to incorporate many types of diversity into our designs. We need to understand the ethical issues, even if the Trolley Problem is merely a diversion from seeking a design that makes the best decisions based on the information available at the time. And we need to be prepared to answer these questions for a general public that is hoping for an autonomous system that is at least “three laws safe.”

A self-described “recovering engineer” with 15 years of experience in automotive design and quality, Jennifer Dukarski is a Shareholder at Butzel Long, where she focuses her legal practice at the intersection of technology and communications, with an emphasis on emerging and disruptive issues that include cybersecurity and privacy, infotainment, vehicle safety and connected and autonomous vehicles.