Update: A GM spokesperson later told CarAdvice that the liability space is very complex and that the company has not made any official comment on, or taken an official position regarding, liability.

General Motors says it will take full responsibility if its vehicles crash during an autonomous driving trip, alleviating fears that owners would be held responsible for crashes caused by their vehicle’s artificial intelligence system.

Speaking to CarAdvice in Detroit today, GM’s head of innovation, Warwick Stirling, said that if the driver has handed off full driving responsibility to the car, then the liability shifts to the car’s manufacturer entirely.

“[As for] the question of liability, if the driver is not driving, the driver is not liable. The car is driving,” Stirling told CarAdvice.

GM’s ‘Super Cruise’ system currently offers level-two automation (although GM says it verges on level three), allowing the vehicle to take full control in situations such as highway driving.

However, the system still requires the full attention of the driver in case of an event that it cannot respond to appropriately. In these instances, Stirling says, the driver is still responsible.

“In a Super Cruise situation, because the driver is still in the driver’s seat, and they are supposed to be driving and the car is helping them, the driver is still liable.”

As for levels three, four and five autonomous driving, Stirling believes that a complete takeover of the driving functions by the vehicle’s AI will mean no liability for the driver.

“In level four, there’s likely to be no steering wheel, no pedals; you’re not driving, so you’re not liable,” he said.

But who is liable?

“[It will be] a combination of the fleet owner, OEM and the service provider [that] will cover the insurance. It’s going to be a capital liability, it’s going to be a complex space.”

The issue facing car companies isn’t just the code written to drive the vehicle’s AI; it’s how that AI then teaches itself to drive through machine learning, based on the data it gathers, and whether the system itself, or the system’s designer, is at fault in an incident.

“The issue is going to be how black box is your black box, so you will need to look inside the black box and see what decision it was making [before the crash] and who programmed it? Often if you have a machine learning AI system, it teaches itself to do something, so how do you [assign blame]?”

For now, the biggest liability challenge for autonomous vehicles remains legislation: in most jurisdictions worldwide, the driver is held entirely liable for any accident the vehicle causes, regardless of whether it is running in an autonomous mode.

“Insurance is going to have to change, a lot of the rules are going to have to change, we are in natural discussion with a lot of the governments on this,” he said.

Stirling believes a fully autonomous vehicle is still more than a decade away, but sees progression to higher levels of autonomy in the very near future, particularly in commercial and fleet applications.