Driverless cars could be programmed to make moral decisions about who to collide with in an accident, say leading lawyers.

In situations where a crash is inevitable, autonomous vehicles may have to be given instructions about how to choose the lesser of two evils – potentially deciding to risk killing a single pedestrian on the pavement rather than several people in the road.

Cars in the future may even need to weigh up the value of individual lives, a report says.

In a technological version of the famous “trolley problem”, the report warns: “Persons generally are entitled to expect that a self-driving vehicle will not collide with and injure them. However, in reality, the situation is much more nuanced.”

With the UK Government claiming driverless cars will be in use on Britain’s roads within two years, potentially including driverless shuttle buses on the Forth Road Bridge, the Faculty of Advocates says there is an urgent need to consider the implications of the arrival of autonomous vehicles.

The faculty’s comments come in response to a consultation on the implications of driverless vehicles by the Scottish Law Commission (SLC) and the Law Commission in London. 

The SLC says the technology raises a host of legal and ethical questions which the public have yet to engage with. 

The trolley problem famously asks whether a person who can divert a runaway trolley-car heading for five people on the tracks should pull the lever to switch it on to a track where only one person will be hit. Variations ask whether it would make a difference if the single person was Albert Einstein and the five people were housebreakers.
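Stripped of its ethics, the crude version of the dilemma is a bare optimisation. The Python sketch below is purely illustrative – the options and casualty figures are invented, and nothing here comes from the Faculty’s report – but it shows how naturally a machine could encode the head-counting rule, and why the Einstein variation is harder: a simple head count has no way to weigh one life against another.

```python
# A minimal sketch of the head-counting version of the trolley problem.
# The options and casualty figures are hypothetical illustrations.

def choose_path(options):
    """Pick whichever option is expected to injure the fewest people."""
    return min(options, key=lambda option: option["expected_casualties"])

options = [
    {"action": "stay on course", "expected_casualties": 5},
    {"action": "divert to side track", "expected_casualties": 1},
]

print(choose_path(options)["action"])  # prints "divert to side track"
```

Weighing Einstein against the housebreakers would mean replacing the casualty count with a score attached to each person – which is exactly the step the Faculty goes on to reject.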

The Faculty of Advocates says the question could arise with self-driving cars, and even the suggestion that judgments could be made about the value of individual lives is not outlandish.

While an autonomous car might not have enough information to make a choice between a leading scientist and criminals, it says that could change in a country like China, where the government is currently establishing a ‘social credit’ score for its citizens.

“It would be technically possible to implant a chip in individuals which would transmit... their social credit score which could then be detectable by an automated driving system, thereby making it technically possible ... to exercise a choice between Einstein and the housebreakers,” the report explains.

It adds: “we cannot conceive of any circumstances whatever where such a system could be regarded as acceptable in a free, open and democratic society.”

However, the Faculty envisages a future in which owners may even be able to choose the ‘morality’ of their car. “The purchaser might be able to specify the ethical system with which the car is programmed... as well as specifying the paint colour and interior trim,” the report says.
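Taken literally, the report’s image of ethics chosen alongside paint colour translates almost directly into a product configuration. The sketch below is a hypothetical illustration only; the policy names and fields are assumptions, not options any manufacturer offers.

```python
# Hypothetical sketch of "specifying the ethical system ... as well as the
# paint colour and interior trim" as a purchase order. All names invented.

from dataclasses import dataclass
from enum import Enum

class EthicsPolicy(Enum):
    MINIMISE_TOTAL_CASUALTIES = "utilitarian"    # fewest people harmed overall
    PROTECT_OCCUPANTS_FIRST = "self-protective"  # prioritise those in the car
    NEVER_SWERVE = "stay-in-lane"                # never actively redirect harm

@dataclass
class VehicleOrder:
    paint_colour: str
    interior_trim: str
    ethics_policy: EthicsPolicy

order = VehicleOrder(
    paint_colour="midnight blue",
    interior_trim="leather",
    ethics_policy=EthicsPolicy.PROTECT_OCCUPANTS_FIRST,
)
print(order.ethics_policy.value)  # prints "self-protective"
```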

The legal implications of driverless technologies are considerable, the report claims.

It warns that, because control of driverless vehicles may be distributed among equipment in the vehicle, software in the ‘cloud’, other vehicles and roadside equipment, establishing fault in an accident could be impossible.

While autonomous vehicles could be governed by automated programs based on predictable algorithms, artificial intelligence experts are developing neural networks – systems which make their own decisions.

“If the operation of the system causes an accident, it might be perfectly possible to determine the cause through examination of the source code of a conventional system ... but where a neural network is involved, it may be literally impossible to determine what produced the behaviour which caused the accident,” the report says.
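The contrast the report draws can be made concrete. In the sketch below, both functions and all the numbers are invented for illustration: the first braking decision can be traced to a single readable rule, while the second emerges from learned weights that no investigator can read as a reason.

```python
import numpy as np

# Conventional system: the braking rule is written out in source code, so
# an investigator can point to the exact line that produced the behaviour.
def should_brake_rule_based(distance_m, speed_mps):
    return distance_m < speed_mps * 2.0  # fixed two-second headway rule

# Neural network: the same decision emerges from learned weights (random
# stand-ins here); no line of source code explains a given output.
rng = np.random.default_rng(0)
w1 = rng.standard_normal((2, 16))
w2 = rng.standard_normal(16)

def should_brake_neural(distance_m, speed_mps):
    hidden = np.maximum(0.0, np.array([distance_m, speed_mps]) @ w1)
    return float(hidden @ w2) > 0.0

print(should_brake_rule_based(20.0, 15.0))  # True: 20m < 30m stopping margin
print(should_brake_neural(20.0, 15.0))      # depends entirely on the weights
```

Examining `should_brake_rule_based` after an accident yields a cause; examining `w1` and `w2` yields only numbers.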

The Faculty experts conclude that new offences are likely to be necessary to cover the systems set up by companies to control driverless vehicles – and to hold them to account in the case of errors, malfunctions and accidents. 

Vehicles may fail to interact properly with traffic signs, other vehicles or roadside infrastructure, the report says, adding: “consideration will have to be given to how such breaches, which are only possible by automated vehicles, and are not generally covered by offences applicable to human drivers, are regulated, detected, investigated and (if appropriate) prosecuted.”

Many driverless vehicles are likely to require a human driver to be on board and able to take control in an emergency. But thorny questions remain, the Faculty says, including who is responsible if a human does take over and the car is subsequently involved in an accident.

New offences may also be necessary for anyone who interferes with vehicles, roads or traffic equipment in a way that leads to death or serious injury, or deliberately erases data following an accident or incident. 

Meanwhile, driverless systems and the companies behind them will need to hold vast amounts of data about journeys, potentially for decades, in case it is later needed in criminal cases or in claims for damages for personal injury.

The Faculty’s submission is one of the most thorough of hundreds submitted to a joint consultation by the Scottish Law Commission and the Law Commission of England and Wales as part of a three-year review to prepare laws for self-driving vehicles.

Caroline Drummond, a commissioner at the SLC, said the consultation, which closed on Tuesday, had attracted responses from all walks of life, including insurers, police, transport planners and car manufacturers.

“The advent of driverless cars raises an enormous range of aspects, such as jay-walking which we do fairly freely here but isn’t allowed in other countries. It certainly won’t work with autonomous vehicles.

“From an engineering perspective, from an AI perspective and from a social perspective there are a lot of areas here that really need to be explored,” she said.