The legal dilemma surrounding self-driving car accidents

CONTRIBUTED BY TANELI LAHTINEN VIA UNSPLASH

ONCE CONSIDERED a technology of the distant future, self-driving cars are steadily becoming reality. Experts predict that the autonomous car industry’s market size will reach $11.03 billion by 2028[1], a growth driven by definite advantages such as more convenient operation, greater accessibility for disabled drivers, and the promise of improved traffic safety. Despite these benefits, however, a pressing question remains: what standards should legislatures across the globe set to determine culpability in driverless car accidents? 

 

Complication in legal proceedings

   Since policymakers have yet to establish a universal standard for attributing legal responsibility in self-driving car accidents, authorities in different countries have tackled the problem in contrasting ways. In 2018, an Uber vehicle operating on autopilot in the United States failed to detect a pedestrian and struck and killed her. Prosecutors charged the driver with negligent homicide, while Uber itself bore no legal responsibility[2]. A year later, a similar case occurred in the United States when a Tesla failed to register a red light, slammed into another car, and killed its occupants. Even though authorities confirmed that the car was on autopilot, the driver was still charged with vehicular manslaughter. Experts from various U.S. institutions insisted that the driver was at fault, stressing that drivers should always stay alert on the road even when the car is on autopilot[3]. On the other hand, a 2022 proposal in the United Kingdom states that car manufacturers should be held accountable for such accidents. The Law Commission of England and Wales believes that if a crash occurs due to a malfunction of the autopilot feature, the automaker is at fault for selling a defective product[4]. In an interview with The Yonsei Annals, Bock Gene (Prof., UIC, Science, Tech. & Policy) suggested that this discrepancy in legal approaches may be attributed to congressional lobbying by automakers and insurance companies. “Traditionally, lobbying by interest groups in the United States is much more effective in the legislative process than in the United Kingdom, which is known to be a typical politicization of science,” said Professor Bock. 

   The main reason it is so challenging to set a legal guideline for such cases is that driverless car technology is simply too complex. Currently, manufacturers follow a six-level autonomous vehicle rating system to classify a car’s self-driving capacity: starting from Level 0 for no automation, the scale goes up to Level 5 for full automation. This system may perfectly capture the differences between automation levels from an engineering perspective, but legislators may find it difficult to navigate the intricacies of the technology[5]. For the average consumer with less knowledge of cars, such nuances can be even more obscure. In the end, what counts as an “autonomous” car is imprecise, since each level of automation requires a different degree of human assistance. The current complication in court proceedings can be alleviated only after the notion of a self-driving car is comprehensively defined. 

 

Acknowledging safety hazards

   Adding to the discussion of legal accountability in driverless car accidents, we must also consider the safety concerns surrounding the cars’ autonomous technology and their legal implications. After all, no matter how advanced the technology may be, all software is prone to occasional failures. Automakers claim that their self-driving cars are completely safe, but the facts say otherwise. In 2020, the American Automobile Association (AAA) found that drivers of autonomous cars experienced system irregularities, citing instances in which they had to resume control of the car on short notice whenever an error occurred[6]. Past incidents confirm the organization’s findings. For example, in a 2016 accident involving a Tesla, the vehicle failed to detect a truck and crashed into it at full speed, leaving the driver with multiple injuries[7]. According to Professor Bock, contrary to how it is marketed, the autopilot system cannot be perfect because the environments and situations a self-driving car encounters are “limitless and under constant change.”

   Aside from internal system failures, automobile software is also vulnerable to external interference in the form of cyber-attacks. In 2015, hackers gained access to a non-autonomous Jeep’s brakes and accelerator through its entertainment system and crashed it into a ditch[8]. Something similar happening to self-driving cars is not implausible: according to the European Union Agency for Cybersecurity (ENISA), self-driving technology’s dependence on communication links makes it especially susceptible to cyber-attacks. Although there are no accounts of high-level security breaches just yet, the rising prominence of the autonomous car industry makes them a looming danger. Ultimately, automakers’ misleading marketing of supposedly foolproof technology compounds the risks of already imperfect automation systems. With insufficient knowledge of the autopilot feature’s potential vulnerabilities, people may buy a self-driving car with unrealistically high expectations, failing to realize that they must remain attentive to driving conditions to prevent tragic outcomes. 

 

What are the possible solutions?

   Introducing new policies that can accommodate a rapidly evolving industry is a challenge. Nevertheless, it is precisely that rapid growth that makes resolving the legal dilemma urgent. What makes the issue so difficult to settle, according to Professor Bock, is that “the responsibility cannot be clearly split between the car manufacturer and the driver in a binary manner” due to the multitude of variables involved in an accident. He stressed that to fairly allocate liability for a self-driving car accident, “there needs to be a clearer delineation of which exact part within the automobile malfunctioned, who was involved to what extent, and the overall surrounding situation.” As such, each case should be analyzed with extreme caution and precision. 

   The aforementioned stance of U.S. experts, which blames passive drivers for self-driving car accidents, can come off as unfair to drivers who simply trusted the technology to do its job. Yet it is also reasonable, since no fully autonomous car proven to be completely safe exists yet. All current self-driving cars require a certain level of human assistance, and drivers should be aware of this. Given the current state of the industry, both sides should handle their duties responsibly: automakers should continue research on improving autonomous driving systems, while drivers should not over-rely on technology that is still far from perfect. 

 

*                 *                 *

 

   As things stand, setting a legal standard for self-driving car accidents appears ambitious, but that does not mean pursuing one is impossible. Doing so, however, would require a unique approach that integrates not only the expertise of policymakers and engineers but also drivers’ perspectives on how the proposed solutions could affect their livelihoods. 

 

[1] Fortune Business Insights

[2] The New York Times

[3] NBC News

[4] BBC

[5] Perforce

[6] AAA Newsroom

[7] The Guardian

[8] The Guardian

Copyright © The Yonsei Annals. Unauthorized reproduction and redistribution prohibited.