How Safe Is Waymo? A Closer Look at Their Accident Rates and Safety Record


Self-driving cars were once the stuff of science fiction, but companies like Waymo are making them a reality. Whether you’ve spotted a Waymo vehicle cruising down your neighborhood streets or read about them in the headlines, you might wonder: just how safe are these autonomous cars?

At Morgan & Morgan, we’re passionate about keeping you informed and protected. While self-driving technology holds exciting possibilities, it’s important to understand its risks. Let’s take a closer look at Waymo’s safety record, accident rates, and what this means for the future of transportation—and your safety on the road.


What Is Waymo?

Waymo is a self-driving car company that began as Google’s self-driving car project and now operates as a subsidiary of Google’s parent company, Alphabet. It launched its first public service in Phoenix, Arizona, offering fully autonomous rides with no driver behind the wheel. Using a combination of sensors, cameras, radar, and advanced software, Waymo vehicles are designed to navigate roads and respond to traffic conditions without human input.

Waymo claims safety is its top priority. The company frequently touts its state-of-the-art technology, rigorous testing protocols, and robust data collection to refine its vehicles’ performance. Every trip taken in a Waymo car is monitored and analyzed to improve the system's ability to predict and respond to real-world driving scenarios.

This level of precision aims to exceed human drivers’ capabilities, reducing the risks associated with distractions, fatigue, and poor decision-making. However, as with any new technology, there are areas for improvement.


Waymo’s Accident Record: The Numbers Don’t Lie

Waymo vehicles are designed to minimize accidents, but they aren’t entirely exempt from them. The company regularly publishes a safety report detailing its accident data, offering a glimpse into how these cars perform.

Between 2019 and 2022, Waymo cars were involved in dozens of incidents in their service area. In some cases, Waymo’s self-driving system was not at fault, but there were also instances where the system did not perform as expected.

For example, in a 2021 collision in Arizona, a Waymo vehicle unexpectedly slowed down, leading to a rear-end crash. The company has explained that such situations can result from interactions with human drivers, particularly in cases of tailgating or abrupt braking. This highlights the challenges of integrating autonomous vehicles into shared roads.

Another notable aspect of Waymo’s accident data is the prevalence of “near misses,” situations where a crash was narrowly avoided. Although not classified as official accidents, these situations provide valuable insights into the complexities of navigating real-world traffic.


Are Self-Driving Cars Safer Than Humans?

Proponents of self-driving technology often point out that human error is a factor in an estimated 94% of traffic accidents. Waymo, like many companies in the industry, claims that its technology can reduce these mistakes. However, comparing safety between humans and autonomous vehicles involves multiple factors.

Waymo cars excel at tasks like following speed limits, maintaining consistent spacing, and responding to predictable hazards like stop signs and traffic lights. At the same time, they are continuously improving their ability to interpret nuanced situations—such as pedestrians jaywalking or cyclists weaving between lanes—where human drivers may have an advantage due to intuition and experience.


What if a Waymo Vehicle Causes an Accident?

One of the most pressing questions surrounding Waymo and other self-driving companies is liability. Who’s responsible when an autonomous vehicle causes or is involved in a crash?

In traditional car accidents, the at-fault driver is typically held accountable. But with self-driving cars, the lines blur. If the vehicle’s software malfunctions or fails to respond appropriately, does the blame fall on the car’s owner, the manufacturer, or the software developer?

These questions are central to ongoing legal discussions, and at Morgan & Morgan, we’re closely monitoring the evolving landscape.

Self-driving cars operate with a combination of software, sensors, and hardware working together to make real-time decisions. Determining liability in the event of an accident requires a thorough examination of the contributing factors, which may include:

  • The role of the human operator (if any)
  • The performance of the vehicle's software and hardware
  • The manufacturer's responsibility in ensuring safe and reliable technology

Manufacturer Liability

If an accident is caused by a malfunction in the car’s software, sensors, or other autonomous features, the manufacturer may be held liable. This type of liability falls under product liability law, which holds companies accountable for selling defective or dangerous products.

For example, if the vehicle's software fails to detect a pedestrian in a crosswalk or misinterprets a stop sign, the company that designed the system could be deemed responsible.


Operator Liability

Many self-driving cars still require a human operator to supervise and intervene when necessary. If the operator fails to take control in time to prevent an accident, they could share liability. This raises important questions:

  • Was the operator properly trained?
  • Did the operator have adequate time to react?
  • Were they paying attention when the system asked them to take over?

Shared Liability

In some cases, multiple parties could share liability for an accident involving a self-driving car. For example:

  • The car’s owner or operator might bear some responsibility if they failed to maintain the vehicle or ignored warnings from the system.
  • The manufacturer could also be at fault if there was a defect in the software or hardware.
  • A third party, such as a pedestrian or another driver, might be partially liable if their actions directly caused the accident.

Careful analysis of the circumstances, often with input from technical experts, helps determine how liability is assigned in such cases.


Insurance Challenges

The rise of self-driving cars is transforming insurance policies. Traditional auto insurance focuses on human drivers, but autonomous vehicles often require specialized coverage. As this industry evolves, understanding how policies account for manufacturer liability and autonomous features is key to navigating claims.


The Role of Federal and State Laws

Liability in self-driving car accidents is further influenced by a mix of federal and state regulations. Some states require a licensed driver to be present in autonomous vehicles, while others allow for fully driverless operation. These laws can shape how fault is assigned.

For example, states like California require detailed reporting for any autonomous vehicle accidents, helping to inform public safety policies. Meanwhile, states with fewer regulations may encounter unique challenges in liability cases.


Morgan & Morgan: Justice With a Human Touch

At Morgan & Morgan, we’re committed to ensuring safety for everyone on the road. As self-driving technology continues to evolve, we’re here to guide you through the legal landscape and provide support when needed.

If you or a loved one has been involved in an accident with a self-driving vehicle, our experienced attorneys are ready to help you seek justice and understand your rights. Contact us for a free case evaluation today.

Self-driving cars represent an exciting future, but we remain focused on protecting your rights and safety every step of the way.

Disclaimer
The information on this website is intended for general informational purposes only and does not constitute legal advice.
