Who's At Fault? The Intricate Web of Liabilities In Self-Driving Cars
Sat, April 17, 2021

A self-driving vehicle operated by Uber struck and killed Elaine Herzberg during real-world testing with a human emergency driver in the vehicle on March 18, 2018. / Photo by: Sean Leonard via Shutterstock

 

While the concept of autonomous vehicles has moved from science fiction to reality, the technology does not always meet our expectations when put to the test, wrote Ron Schmelzer of business news outlet Forbes. Still, autonomous vehicles have the potential to enhance safety and revolutionize the way cities transport people, said John Maddox in a report by financial and information news website CNN Business. 

Even if the future looks bright for self-driving cars, private and public stakeholders, including public transport operators, rideshare firms, and local, state, and federal governments, must collaborate to formulate consistent industry regulations. These groups also need to streamline the testing and deployment of self-driving cars to finally bring autonomous vehicles to reality. 

The First-Ever Recorded Autonomous Vehicle Fatality

A self-driving vehicle operated by Uber struck and killed Elaine Herzberg in Tempe, Arizona, on March 18, 2018, during real-world testing with a human emergency driver in the vehicle. The accident occurred around 10 p.m., when Herzberg stepped into the road while walking a bicycle outside of a crosswalk. 

Unfortunately, neither the Uber vehicle nor the driver noticed Herzberg until it was too late, and the car struck her. Uber suspended its self-driving operations in response to the accident, and the ridesharing company investigated the incident before resuming real-world testing. 

Is It Uber’s Fault?

Unsurprisingly, the fatality shook the self-driving industry. Why didn’t the vehicle notice Herzberg and stop? Is the human or the technology behind it liable? These are the complex questions that continue to plague the industry. Even before the accident, Uber’s early tests of self-driving vehicles had generated troubling reports and incidents. Arizona has attracted autonomous vehicle testing because its regulations and reporting requirements for self-driving cars are less strict. There, a number of Uber’s vehicles had reportedly been involved in minor traffic incidents that demonstrate the “immature state of the technology.” 

Did Uber know that its technology was not fully developed? Did the company take a step back to assess previous traffic incidents involving its autonomous vehicles? If not, then Uber is at fault here. Uber spokeswoman Sarah Abboud said the company regretted the crash and “has adopted critical program improvements to further prioritize safety,” as quoted by Faiz Siddiqui of major American daily newspaper The Washington Post. 

While the failure has been described as a programming oversight, many autonomous vehicle industry insiders said they were surprised that Uber failed to account for “such a basic expectation.”  

 

A number of Uber’s vehicles have been involved in minor traffic incidents that demonstrate the “immature state of the technology.” / Photo by: MOZCO Mateusz Szymanski via Shutterstock

 

Are Humans and Technology At Fault, Too?

We first have to determine whether the individuals involved in the incident are at fault. The pedestrian crossed the road while walking a bike outside of a crosswalk, possibly confusing the autonomous vehicle’s internal system responsible for detecting potential hazards. So, is it the pedestrian’s fault? Maybe. However, a reasonable human driver who is paying attention to the road would likely notice the pedestrian and swerve or brake to avoid a “last minute collision.” 

Current laws in the United States require a person to be in a moving car and able to take control of the wheel. In this case, a person was present, but the vehicle was in autonomous mode. That mode was not a fully autonomous Level 5 operation, in which the car handles every driving task without human intervention. Rather, the vehicle had limited autonomy and relied on a human driver as a backup. 

In this context, the driver failed to “provide that backup,” so perhaps the human driver is liable. But is it reasonable to expect that a person who has been relying on the autonomous vehicle to do everything, and who may be largely unaware of their surroundings, can step in and prevent a life-and-death situation in a matter of seconds?

Now let’s set the pedestrian and the driver aside. The Uber crash garnered attention largely because it raised questions about whether an AI-powered car is prepared to deal with the real world. AI technology for cars is still under development, so unexpected failures remain possible. We can hypothesize that the sensors were not capable of reliably detecting a pedestrian walking a bike across the road at night, or that glitches, or even smudges on a lens, degraded the visual or sensor technology itself.
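
To make that hypothesis concrete, here is a minimal, purely hypothetical sketch in Python. None of it describes Uber’s actual software; the labels, thresholds, and the should_brake function are invented for illustration. It shows how a planner that waits for a stable, high-confidence classification before braking can run out of road when an unusual object, such as a person walking a bicycle at night, keeps being reclassified at low confidence.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # classifier output, e.g. "unknown", "bicycle", "pedestrian"
    confidence: float   # detection confidence between 0.0 and 1.0
    distance_m: float   # distance ahead of the vehicle in meters

# Hypothetical tuning values, invented for illustration only.
BRAKE_CONFIDENCE = 0.8   # confidence required before the planner reacts
BRAKE_DISTANCE_M = 30.0  # distance at which a hazard should trigger braking

def should_brake(frames):
    """Brake only if recent frames agree on one hazard with high confidence."""
    if not frames:
        return False
    stable = len({d.label for d in frames}) == 1   # flickering labels mean "not stable"
    confident = all(d.confidence >= BRAKE_CONFIDENCE for d in frames)
    close = frames[-1].distance_m <= BRAKE_DISTANCE_M
    return stable and confident and close

# A pedestrian walking a bicycle at night might look like this to the system:
frames = [
    Detection("unknown", 0.55, 40.0),
    Detection("bicycle", 0.62, 33.0),
    Detection("pedestrian", 0.70, 25.0),
]
print(should_brake(frames))   # False: the system never commits to braking

In a design like this sketch, the safer choice is to slow down on uncertainty rather than wait for certainty, which is exactly the kind of scenario planning the experts below call for.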

Improvements Must be Made

It’s not a good idea to pin the blame on a single circumstance, as the fault lies with multiple parties. But there are ways to minimize accidents. More focus groups and more diverse groups of experts should map out failure scenarios, suggested Katina Michael, a professor in the School for the Future of Innovation in Society and the School of Computing, Informatics and Decision Systems Engineering at Arizona State University. 

Moreover, there should be more safety and mechanical engineers working on the code along with software engineers. The code should also be peer-reviewed by multiple experts, she added. Michael stated, “When we don’t do all this scenario planning and don’t do an exhaustive [job] at the front end, this is what happens.” 

Autonomous vehicles are not perfect; the technology is still in its infancy. Regulations should be crafted to ensure the safety of pedestrians, and firms should not haphazardly roll out self-driving cars without taking responsibility for them. As self-driving cars slowly become the norm, the risk of accidents will continue to loom over us.