Self-Driving Cars Programmed to Sacrifice As They Hit the Road: “Someone Is Going to Die”

By Mac Slavo

Self-driving cars are poised to take over U.S. roads and destroy American jobs … and they will also kill people, even if by accident.

Right now, their makers are in the process of convincing Congress that they can handle their own regulations – even as they continue working out the kinks.

The U.S. Senate subcommittee for Commerce, Science and Transportation heard testimony from Duke University roboticist Missy Cummings, who admitted that fatalities and accidents are inevitable as self-driving cars attempt to integrate with a busy and complex society.

The London Guardian reports:

The robot car revolution hit a speed bump on Tuesday as senators and tech experts sounded stern warnings about the potentially fatal risks of self-driving cars. “There is no question that someone is going to die in this technology,” said Duke University roboticist Missy Cummings in testimony before the US Senate committee on commerce, science and transportation. “The question is when and what can we do to minimize that.”

Automotive executives and lawmakers sniped at each other over whether universal standards were necessary for self-driving cars….

Senators Ed Markey and Richard Blumenthal, who have cosponsored legislation proposing minimum testing standards for automated drivers, warned: “The credibility of this technology is exceedingly fragile if people can’t trust standards – not necessarily for you, but for all the other actors that may come into this space at this point.”

These “standards” reflect the programming that will make sometimes fatal choices in the mix of situations that may involve innocent bystanders and no-win situations.

In these cases, is there a “moral” gradient that computers and people can see eye-to-eye on?

If the self-driving car is designed to avoid children at all costs, does that mean it could be programmed to kill (or sacrifice) you if/when you are caught inside a car headed for disaster, or on the opposite side of the road from the child? There are no clear answers.

The standards are already becoming morally complex. Google X’s Chris Urmson, the company’s director of self-driving cars, said the company was trying to work through some difficult problems. Where to turn – toward the child playing in the road or over the side of the overpass?

Google has come up with its own Laws of Robotics for cars: “We try to say, ‘Let’s try hardest to avoid vulnerable road users, and beyond that try hardest to avoid other vehicles, and then beyond that try to avoid things that don’t move in the world,’ and then to be transparent with the user that that’s the way it works,” Urmson said.
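The priority ordering Urmson describes can be pictured, very roughly, as a ranking over obstacle classes. The sketch below is purely illustrative – the class names, severity values, and function are assumptions for the sake of explanation, not Google’s actual code – but it shows the kind of logic implied: given several possible maneuvers, prefer the one whose worst potential collision involves the least protected class of obstacle.

```python
# Illustrative sketch only: hypothetical names and values,
# not Google's implementation.

# Higher severity = avoid harder (vulnerable road users first).
SEVERITY = {
    "vulnerable_road_user": 3,  # pedestrians, cyclists
    "vehicle": 2,               # other cars
    "static_object": 1,         # things that don't move
}

def pick_maneuver(options):
    """Pick the maneuver whose worst-case collision is least severe.

    `options` maps a maneuver name to the list of obstacle classes
    that maneuver risks hitting (an empty list means a clear path).
    """
    def worst_case(obstacles):
        return max((SEVERITY[o] for o in obstacles), default=0)
    return min(options, key=lambda name: worst_case(options[name]))

options = {
    "swerve_left": ["vulnerable_road_user"],
    "brake_hard": ["vehicle"],
    "swerve_right": ["static_object"],
}
print(pick_maneuver(options))  # prefers hitting the static object
```

Even this toy version makes the article’s point: someone has to choose those numbers, and every choice ranks some road users above others.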

But the “morality” built into the computer’s decision-making, and the inevitability of chaos for at least some individuals, are only part of the story.

Autonomous vehicles will, ironically, also be quite vulnerable to hacking – the Internet-connected devices in a car can be manipulated to take over the commands and data of practically any of the newer “smart” cars on the road. The problem will only grow as self-driving cars become a larger part of daily life.

“We know that many of the sensors on self-driving cars are not reliable in good weather, in urban canyons, or places where the map databases are out of date,” said Cummings. “We know gesture recognition is a serious problem, especially in real world settings. We know humans will get in the back seat while they think their cars are on ‘autopilot’. We know people will try to hack into these systems.”

“[W]e know that people, including bicyclists, pedestrians and other drivers, could and will attempt to game self-driving cars, in effect trying to elicit or prevent various behaviors in attempts to get ahead of the cars or simply to have fun,” she said.

Back in 2013, a couple of white hat hackers demonstrated how vulnerable a number of newer cars are to hacking. The possibilities are downright frightening – everything from the stereo and windshield wipers to the brakes can be hacked and remotely controlled – or shut off when you need them most. Just imagine what is possible in 2016.


What happens when these self-driving cars of the future disagree with the human passenger about the priorities, or about what is allowed in a critical situation – like escaping a carjacking or evading a cop in pursuit?

It isn’t hard to see how trusting technology on the roads is going to complicate the future, and restrict our human ability to make critical decisions behind the wheel. Let’s just hope somebody programs these intelligent machines with some common sense. Classic cars, at least, don’t have a problem with being hacked or manipulated.


You can read more from Mac Slavo at his site SHTFplan.com
