Someday, Your Car Might Kill You to Save Others

By Lily Dane

Someday, in addition to transporting you all over town without much input from you, your car may decide if you should live or die.

We aren’t talking murderous, Stephen King novel-style cars like Christine.

We are talking self-driving cars, and concerns that they could be programmed to kill you if necessary.

The cars can assess dangerous situations and react accordingly – including sacrificing your life to save others.

Self-driving cars (also known as “driverless” or “robotic” cars) are expected to be available to the public in just a few years. Google is already lobbying to make them legal to operate on public roads.

So far, Nevada, California, Florida, Michigan, and Washington, D.C. have passed legislation allowing self-driving cars to be tested and otherwise operated on public roads.

About two weeks ago, Google released the details of accidents that involved the company’s self-driving cars. Since 2009 – when Google’s self-driving project began – the cars have been involved in a dozen wrecks:

“In the six years of our project, we’ve been involved in 12 minor accidents during more than 1.8 million miles of autonomous and manual driving combined. Not once was the self-driving car the cause of the accident,” Google said.

There’s no way to really know how safe the cars are until they are more widely used, but there’s another issue to consider: how will they handle impending accidents?

Google’s cars can already manage common driving hazards, but how will the vehicles handle no-win situations in which they must choose between swerving into oncoming traffic and steering directly into a barrier, wall, or building?

The computers will be fast enough to make split-second decisions, but…should they?

Matt Windsor of UAB News presented the following dilemma:

They would have time to scan the cars ahead and identify the one most likely to survive a collision, for example, or the one with the most other humans inside. But should they be programmed to make the decision that is best for their owners? Or the choice that does the least harm — even if that means choosing to slam into a retaining wall to avoid hitting an oncoming school bus? Who will make that call, and how will they decide?

Windsor discussed the issue with bioethicist Ameen Barghi, a recent University of Alabama at Birmingham graduate, who said there are two philosophical approaches to this type of question:

“Ultimately, this problem devolves into a choice between utilitarianism and deontology.”

“Utilitarianism tells us that we should always do what will produce the greatest happiness for the greatest number of people,” he explained. In other words, if it comes down to a choice between sending you into a concrete wall and swerving into the path of an oncoming bus, your car should be programmed to do the former.

Deontology, on the other hand, argues that “some values are simply categorically always true,” Barghi continued. “For example, murder is always wrong, and we should never do it.” Returning to the classic trolley problem (in which a runaway trolley can be diverted so that it kills one person instead of five), “even if shifting the trolley will save five lives, we shouldn’t do it because we would be actively killing one,” Barghi said. By that logic, a self-driving car shouldn’t be programmed to sacrifice its driver to keep others out of harm’s way.
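
To make the contrast concrete, here is a deliberately toy sketch of how the two rules would pick differently between the same two crash options. Everything in it is invented for illustration (the maneuver names, the casualty estimates, and the “actively redirects harm” flag are assumptions for the sake of the example, not anything Google has published):

    # Toy illustration only: invented maneuvers and numbers,
    # not a real crash-planning algorithm.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        expected_occupant_deaths: float  # expected harm to the car's own passengers
        expected_other_deaths: float     # expected harm to everyone else
        actively_redirects_harm: bool    # does the car steer harm onto someone?

    def total_deaths(m: Maneuver) -> float:
        return m.expected_occupant_deaths + m.expected_other_deaths

    def choose_utilitarian(options):
        """Minimize total expected deaths, no matter whose they are."""
        return min(options, key=total_deaths)

    def choose_deontological(options):
        """Refuse any option that actively redirects harm onto someone;
        among what remains, still prefer the least deadly choice."""
        permissible = [m for m in options if not m.actively_redirects_harm]
        return min(permissible or options, key=total_deaths)

    options = [
        Maneuver("brake hard but hit the school bus", 0.1, 5.0, False),
        Maneuver("swerve into the retaining wall",    0.9, 0.0, True),
    ]

    print(choose_utilitarian(options).name)    # swerve into the retaining wall
    print(choose_deontological(options).name)  # brake hard but hit the school bus

Both functions agree whenever one option is better on every count; the dispute in the article is only over cases like this one, where minimizing total harm requires the car to actively sacrifice its own occupant.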

But can a computer weigh these ethical quandaries properly?

Gregory Pence, Ph.D., chair of the UAB College of Arts and Sciences Department of Philosophy, says no:

A computer cannot be programmed to handle them all. We know this by considering the history of ethics. Casuistry, or applied Christian ethics based on St. Thomas, tried to give an answer in advance for every problem in medicine. It failed miserably, both because many cases have unique circumstances and because medicine constantly changes.

While self-driving cars aren’t going to go out on their own, drive around town, and kill gang members like King’s Christine, do we really want cars making life-or-death decisions for us?

Lily Dane is a staff writer for The Daily Sheeple, where this article first appeared. Her goal is to help people to “Wake the Flock Up!”


9 Comments on "Someday, Your Car Might Kill You to Save Others"

  1. I’ll be driving a Google car on the 12th of Never.

  2. Can’t wait to see one on the road so I can see just how it reacts to an emergency situation myself.

  3. This is part of a bigger agenda of creating a “post-human world”. The plan is to replace humans with robots and computers. It is happening all around you. The Globalists have plans to exterminate 6 billion people, but first they need to replace us as much as possible.

  4. Good news! These cars will cost so much flippin’ money that only the wealthy will be able to afford them. I’ll stick with my ’86 Olds, ’02 Caddy, and ’67 ’Stang.

  5. Not only will these cars be killing the driver; just think what a hacker could do…

  6. More hype than reality, by people who think of themselves as “thinkers of the future.” I refer to this sort of stuff as BS. Nothing more.

  7. If you think these cars are not coming, think again. If you think they will be too expensive, think again. These cars are coming – they are already here. They will be heavily subsidized, and insurance will be much cheaper than now. Imagine a car that never violates any traffic laws: will not speed, will not make illegal turns, will not operate with a broken taillight, will not run without proper plates and documentation, cannot be operated by anyone but the registered driver, etc., etc., etc. Imagine the money saved on traffic enforcement. Imagine the money saved on insurance claims. This is a wet dream for authoritarians.

    I predict that only the very rich would be able to drive their own cars by themselves. Perhaps people who still want to drive themselves will be able to afford to drive a motorcycle.

  8. It should be OK for a computer to drive us around. I mean, if we ever have flying cars, computers will fly them for us; it definitely won’t be manual piloting. http://www.darcylee.com

  9. And this type of issue is why there will never be humans and computers driving on the same roads. A computer cannot compensate for the “human factor”. Although a self-driven car might “decide to do less harm” by hitting this vs. that, the car cannot compensate for the reactions of other human drivers. I do see a day when, just like the HOV lanes, we’ll have A.I. lanes for cars that are self-driven. I am sure some cities will eventually ban human drivers altogether, so you’d drive to a car park outside of town and simply be a passenger in one of many vehicles driving themselves around. And there will still be assholes on bicycles who think they own the road. Hopefully the A.I. system will decide they are all retarded and run them all over on day one.
