
Driverless Car Accidents - Who's to Blame?


Driverless cars really are a thing. Google's grand experiments in the field have fuelled a frenzy of interest in the concept, and almost all of the automotive industry's biggest players are now delving into the market.

Still, a number of concerns hang over the progression of the technology, most notably the question of liability. As with any technology, things will eventually go wrong. So, when a driverless car is involved in a crash, where does the finger of blame point?


Volvo Does the Right Thing

It's one of the greatest concerns for driverless car sceptics, and one that regulators in the US are yet to tackle. Thankfully, the manufacturers have been the first to break cover, with a number of high-profile players in both tech and automobiles outlining their intention to take liability for accidents involving driverless cars. Leading the pack are Volvo, who recently said they will hold their hands up and accept full responsibility for accidents involving driverless cars manufactured by the Swedish firm. Joining them are Google and Mercedes, and the trio all hope that progress can now continue with a major issue set aside.

The question remains, however: are they right to accept responsibility?


Google have been testing their driverless automobiles on the streets of California since 2012, clocking up over 1 million road miles in the process. Most recently, their bug-styled self-driving cars have been cruising around Mountain View (albeit limited to 25mph). Over those million-plus miles, Google's cars have been involved in 16 incidents of varying severity, the majority of which have been described as minor. In every single reported case, however, Google claim it wasn't the driverless car that was at fault, but the fleshy, conscience-laden human driving the other car.

If it is indeed the case that the hands of driverless cars remain clean, why are manufacturers stepping up to take responsibility for something that isn't their fault? In a recent speech in Washington DC, the President of Volvo Cars, Hakan Samuelsson, explained what he believes is holding back the technology:

“The U.S. risks losing its leading position due to the lack of federal guidelines for the testing and certification of autonomous vehicles… Europe has suffered to some extent by having a patchwork of rules and regulations. It would be a shame if the U.S. took a similar path.”

Essentially, then, those developing driverless tech are hoping to break the deadlock by blinking first in the ongoing standoff over liability. In theory, Volvo's move should make it easier to create legislation on the subject, allowing developers to work within whatever guidelines emerge. Legislation on the use and testing of driverless cars in the US has been held back by concerns over what happens when they crash; in the absence of federal guidelines, only four states currently give the green light to their use. For Volvo and the other companies involved, this makes it impossible to roll the technology out nationwide. With the industry shackled by that uncertainty, the decision to take liability for accidents is aimed at removing a significant hurdle.

And quite right too, some will say. After all, how can a customer be held accountable for an accident they likely played no active part in? Ultimately, developers are certain that autonomous vehicles will reduce the number of accidents on our roads; a confidence they've expressed by taking responsibility for any incidents their cars are involved in, regardless of whose fault it is.

Will the removal of this sizeable obstacle silence the autonomous naysayers? Unlikely. There will always be a body of cynics who doubt the ability of driverless and human-driven cars to coexist on the roads. One issue consistently raised by those developing the tech has been 'teaching' driverless cars how to operate alongside humans.


Autonomous All-Clear?

Autonomous cars are programmed to stick stringently to the laws of the road. As someone who uses his car every day, I don't think it's too outrageous to suggest that a number of motorists don't stick entirely to the law. Flying through an amber light as it flicks to red; nipping through a zebra crossing while someone is waiting. Just the little judgement calls that only a risk-taking human brain would make.

This can leave goody-two-shoes autonomous cars at a disadvantage. Take one incident Google reported in September. A Google self-driving car approached a pedestrian crossing (or crosswalk, as they call them on the other side of the pond). It slowed to allow a pedestrian to cross, as it should. Unfortunately, a human-controlled sedan wasn't so alert to the pedestrian and careered into the back of Google's car.

The point being, humans are occasionally inclined to bend the laws of the road, and may expect other road users to do the same. Here, for example, the driver who crashed into the back of Google's autonomous vehicle may well have expected the car in front to nip through the crossing before the pedestrian arrived. And as anyone who has driven abroad will tell you, driving culture varies from country to country.

The challenge for developers, then, is not to teach cars how to stick to road laws, but how to integrate them seamlessly with the fleshy, mistake-prone motorists we call human beings.

