Self-Driving Cars News
We talked a bit about the new frontier of self-driving cars in a recent post, and the news on that front has not stopped rolling in. Self-driving cars may be years away from roaming freely on our roadways, but interest in the developing technology is intense. We want to share two items of particular interest in today’s update.
Self-Driving Cars and Liability
As this technology progresses, one very pressing question is weighing on the minds of car manufacturers and insurance companies alike: if a self-driving car is involved in an accident and is found to be at fault, who is liable for the damages?
Volvo recently stepped up with one possible answer: they announced that they will accept full liability for accidents that occur while their cars are in fully autonomous mode. They may be willing to take such a bold step because they are confident that, once the technology is perfected, there will be very few “at fault” accidents involving autonomous vehicles.
They also hope that this announcement will speed up regulation and legislation regarding self-driving cars, which will clearly require new rules of the road. It is in the best interest of their business to see returns on their considerable research and development investments sooner rather than later. If Volvo’s technology proves as safe and effective as they believe, then accepting liability for all of their autonomous vehicles may be a safe bet. Of course, they could quickly end up in hot water if they’ve misjudged it.
None of the other major players in the self-driving car arena have made such a bold commitment, but this is an issue that will have to be settled as the technology comes closer to reality. Meanwhile, insurance companies worry that arrangements like Volvo’s could blur the line between manufacturer and insurer. We will watch with interest as all the parties involved navigate these tricky waters.
Robots and Ethical Quandaries
Another growing concern regarding self-driving car technology is just how these vehicles will be programmed for accident scenarios. The dilemma presented is similar to the well-known “trolley problem,” but it is much more likely to play out in the real world.
The trolley problem presents a scenario in which you are the driver of a trolley whose brakes have failed. Five people are on the track ahead, and you cannot stop before you hit them. You could, however, divert the trolley onto a side track where only one person stands. Do you divert the trolley or not? Do you kill one person to save five?
This scenario is merely a philosophical exercise. It has no right or wrong answer, but every answer has moral implications. For autonomous vehicles, however, the scenario could be all too real. Should a self-driving car act to save the greater number of people (a group of pedestrians, perhaps) even if it means killing its own occupant?
What the car would “choose” in this scenario comes down entirely to how it is programmed. Some would argue that the greater good, i.e., saving more lives, is the ethical choice. Others would counter that the vehicle should always act in the interest of its occupants. After all, would you want to ride in a car that you know might choose to kill you? The sketch below shows how starkly these two policies can diverge.
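To make the programming question concrete, here is a minimal, purely hypothetical sketch in Python. The Scenario class, the policy functions, and the single casualty counts are all invented for illustration; no real vehicle reduces ethics to a one-line comparison like this. The point is only that two reasonable-sounding policies can reach opposite decisions in the exact same situation.

```python
from dataclasses import dataclass


@dataclass
class Scenario:
    """A hypothetical accident scenario, reduced to estimated casualties per choice."""
    stay_casualties: int    # pedestrians harmed if the car stays on course
    swerve_casualties: int  # occupants harmed if the car swerves away


def utilitarian_policy(s: Scenario) -> str:
    """Minimize total casualties, even at the occupants' expense."""
    return "swerve" if s.swerve_casualties < s.stay_casualties else "stay"


def occupant_first_policy(s: Scenario) -> str:
    """Always protect the occupants, regardless of total harm."""
    return "stay"


# Five pedestrians ahead; swerving would harm the car's single occupant.
dilemma = Scenario(stay_casualties=5, swerve_casualties=1)

print(utilitarian_policy(dilemma))     # -> "swerve" (one casualty instead of five)
print(occupant_first_policy(dilemma))  # -> "stay"   (occupant is never sacrificed)
```

The entire ethical debate is compressed into the difference between those two functions, and someone, whether a programmer, a manufacturer, or a regulator, ultimately has to pick one.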
I’m sure these questions are keeping some poor computer programmer up at night. The legal and moral issues surrounding self-driving cars are complicated and won’t be solved overnight. Even so, the technology seems inevitable, and we look forward to finding out the answers to these and other questions.