Driverless vehicles, in the case of brake failure or an inevitable collision, might have to decide who lives and who dies. Can we be confident in how these robotic vehicles will behave in an emergency? Can we even agree on how they should behave?

The race to develop self-driving cars is on worldwide. In Ontario, cars with no driver behind the wheel have been approved for testing on public roads, and autonomous shuttle buses have been deployed in Quebec, Alberta, and British Columbia.

What follows explores the ethical implications of sharing the roadway with vehicles driven by algorithms instead of people.

The two dilemmas presented above are similar in that both ask you to sacrifice one person to save five, but they differ in how active a role you must take in the death of the person sacrificed. Even this small difference is enough to change what many people feel is the correct ethical choice.

Manufacturers of self-driving cars will have to teach their cars how to behave in life-threatening situations. Should these vehicles be programmed to take whatever actions save the most lives even if that means swerving to kill a person not initially in danger? Should they risk killing the driver of the car to save people on the road? Should people or property be prioritized?

The Massachusetts Institute of Technology (MIT) Moral Machine experiment tried to answer some of these questions. Beginning in 2016, researchers conducted an international survey via the web in which respondents were asked to resolve different trolley problem situations. Edmond Awad, project lead for the experiment, said the goal of the project was to "collect data about the human perception of moral decisions to be made by driverless cars" as well as to promote discussion.

According to Marc-Antoine Dilhac, associate professor of ethics and political philosophy at the University of Montreal, one of the limitations of this type of study is that there is no right answer to these moral questions. The answers to the survey reflect "the biases of the society in which the participants were raised.... There is no criteria for what is morally correct, independent of the game."

For Awad, it would be irresponsible to determine how self-driving cars should behave on the sole basis of a public survey like the Moral Machine experiment. Rather, he believes public preference should be only one factor for experts and policy-makers to consider in regulating this new technology.

If self-driving vehicles are so ethically fraught, why would we want them on the road at all?

According to a report on automated and connected vehicles by the Senate standing committee on transport and communications: "There is no doubt that automated and connected vehicles will save lives."

In 2016, there were 1,669 deaths and 116,735 injuries on Canada's roads. Around 94 per cent of traffic collisions were caused by human error.

Autonomous vehicles also offer the potential benefits of bolstering public transportation options and allowing more freedom to those who aren't able to drive, such as the sight-impaired and the elderly.

But the Senate committee report also says that mass deployment of autonomous vehicles could cause the loss of driving-based jobs and worsen urban sprawl by making longer commutes more tolerable. Vehicles whose software connects to the internet could also be vulnerable to hacking and be used to surveil drivers, among other privacy and cybersecurity concerns.

Mary Cummings, director of the Humans and Autonomy Lab at Duke University in North Carolina, believes that we are still far from needing to discuss the finer points of ethics when there are still so many concrete problems to solve. "We really fundamentally do not know how to test [driverless car algorithms] for any levels of predictability or certification."

According to Cummings, it's the quality of the computer code driving the vehicles we need to worry about. "The thing that keeps me up at night as a roboticist [is] the fragility behind computer vision systems. And that is the nexus of what is wrong with driverless cars."

Given that humans can't agree on basic ethical principles, and given the current state of the technology, some critics are alarmed at the speed with which autonomous vehicles are being deployed.

American ride-hailing company Uber is developing self-driving cars and is testing these vehicles on Ontario's roads. In an interview, Raquel Urtasun, the Toronto-based chief scientist of the Uber Advanced Technology Group, said that while ethical dilemmas like the trolley problem are interesting from an academic standpoint, "it's not the thing we're focussing on right now." Instead, she pointed to the figure for global car crash deaths, nearly 1.25 million each year according to World Health Organization statistics. "If we don't develop these things, it's not going to change," she said.

Self-driving car manufacturers are evasive when it comes to explaining how their cars would act in an emergency.

Ryan Robert Jenkins, assistant professor of philosophy at California Polytechnic State University, said his industry sources are quick to brush aside the trolley problem. He expects companies will program their cars to brake in a straight line even if there is a potential to save lives by taking other actions.

According to Jenkins, self-driving car manufacturers are likely to be sued in any accident involving their vehicles, and they would prefer not to confront a lawsuit where people were injured who otherwise would not have been. "I suspect you'll see companies programming a car to just brake in a straight line because that gives them a great amount of plausible deniability."

As the previous examples show, not making a choice is itself a choice with potentially fatal consequences. Even if inevitable collisions are rare, programmers will still need to apportion the risk of an accident among different parties, decide how aggressively cars should drive, and determine when the vehicle should brake or swerve.
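To make the brake-or-swerve decision concrete, here is a minimal, purely hypothetical sketch in Python of the kind of default policy Jenkins describes. The function, its inputs and its thresholds are invented for illustration; a real planning system would weigh far more factors, such as sensor confidence, occupants, road conditions and the law.

from dataclasses import dataclass

@dataclass
class Obstacle:
    distance_m: float       # distance ahead of the vehicle, in metres
    in_current_lane: bool   # whether the obstacle lies in the vehicle's path

def emergency_response(speed_mps: float, obstacles: list[Obstacle],
                       adjacent_lane_clear: bool,
                       max_decel_mps2: float = 7.0) -> str:
    """Choose a manoeuvre for an imminent-collision scenario (toy example)."""
    threats = [o for o in obstacles if o.in_current_lane]
    if not threats:
        return "continue"
    nearest = min(o.distance_m for o in threats)
    # Distance needed to stop from the current speed: v^2 / (2a)
    stopping_distance = speed_mps ** 2 / (2 * max_decel_mps2)
    if stopping_distance <= nearest:
        return "brake_straight"   # stopping in-lane avoids the collision
    if adjacent_lane_clear:
        return "swerve"           # evade only if no one else is put at risk
    return "brake_straight"       # default: brake in a straight line

# Example: 72 km/h (20 m/s), obstacle 25 m ahead, adjacent lane occupied
print(emergency_response(20.0, [Obstacle(25.0, True)], adjacent_lane_clear=False))

Even in this toy rule, someone had to decide that the car swerves only when the adjacent lane is clear and brakes in its own lane otherwise, which echoes the "plausible deniability" default Jenkins expects from manufacturers.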

Currently, many of these decisions are made by autonomous vehicle manufacturers themselves. In Canada, approval of autonomous vehicles is a joint responsibility of the federal and provincial governments; provinces are in charge of licensing vehicles, approving tests and ensuring their safe operation.

Provincial governments have not demanded to see the source code underpinning vehicles' driving software prior to allowing participation in self-driving pilot programs.

According to Transport Canada, autonomous vehicle trials have only occurred in Ontario, Quebec, Alberta and British Columbia.

In 2016, Ontario launched a 10-year pilot program allowing the public testing of autonomous vehicles. Participants include companies and academic institutions such as Uber and the University of Waterloo. Recently announced changes to the program will allow cars with no driver behind the wheel to drive on public roads.

Ontario's Ministry of Transportation declined to make a representative available to answer questions by phone or in person. In an email, a representative said the goal of the program is to "optimize the transportation system, promote economic growth and innovation, and align with other jurisdictions where possible, while protecting road safety."

In Quebec, Keolis Canada operates a 15-passenger electric shuttle bus provided by French company Navya on a two-kilometre stretch of public road in the city of Candiac. Stéphane Martinez, director of safety policy at Transport Quebec, admits there's a risk with anything that moves but says that risk is a calculated one. According to Martinez, the risk-control measures for autonomous shuttles are significantly higher than for other vehicles on the road. "We don't take Quebec's population as hostages or as guinea pigs."

Ontario and Quebec have recently approved the public use of cars whose computer systems can take over control of driving, as long as a vigilant driver is present to retake control when needed. These are classified as having Level 3 autonomy on a scale produced by engineering standards body SAE International (formerly the Society of Automotive Engineers). Level 3 vehicles are not yet available for sale in Canada.

However, an eye-tracking study published by the Institute for Transport Studies of the University of Leeds in the United Kingdom shows drivers require as long as 40 seconds to fully regain "adequate and stable control of driving from automation." In addition, a safety report published by Waymo, a subsidiary of Google's parent company and an industry leader in autonomous cars, said that human drivers "were not monitoring the roadway carefully enough to be able to safely take control when needed." The company is focusing on fully autonomous vehicles (SAE levels 4 and 5) to avoid what it calls the "handoff problem."

In March 2018, a vehicle belonging to American ride-hailing company Uber killed pedestrian Elaine Herzberg, 49, as she crossed a road in Tempe, Ariz. An initial National Transportation Safety Board report indicated the vehicle was configured to require driver intervention for emergency braking. However, the driver failed to take control from the automated systems in time to avoid the collision. Uber spokesperson Sarah Abboud confirmed that in the aftermath of the crash the company overhauled its operating practices to be less reliant on the driver. Uber, which participates in Ontario's self-driving car pilot program, suspended road testing in North America as a result of the crash but has since resumed limited testing.

Electric-car manufacturer Tesla's Autopilot system has also been involved in multiple fatal car crashes worldwide.

With interest in autonomous vehicles rising as the technology advances, Canadian governments are faced with their own moral dilemma: How much risk is tolerable today to potentially save lives in the future?
