Even as robotics experts, universities, and tech luminaries sound the alarm about a future filled with killer robots, the technology has evidently already arrived … minus the stringent ethics.
There can be no debating that killer robots are already in military use. We are in the middle of an admitted global drone arms race, one producing mass civilian casualties under a specious U.S. legal framework that has thus far not even excluded American citizens from being targeted for assassination. A host of ground- and sea-based systems is equally lethal and continues to be tested in massive joint military exercises.
Time after time we have seen that military developments – normally first employed “over there” – eventually trickle down into the everyday lives of citizens at home. Sometimes the technology is beneficial, most often it is not. When weapons of war are handed down to local police, for example, we can’t be shocked when we are soon having debates about “militarized police” in the United States.
Artificial intelligence is another area that has ushered in dual-use technologies: drone surveillance and warfare, robot security guards and police, and self-driving vehicles for both military and civilian use. In all of these cases we are now seeing disturbing misapplications as well as outright system failures. In fact, we are seeing many “firsts” that threaten to become a trend if not quickly addressed and reined in.
Just a few weeks ago we witnessed the first reported human death from a self-driving vehicle, when Tesla’s Autopilot sensors failed to detect a tractor trailer crossing the car’s path, killing the driver. There were ominous signs of this potential earlier, when Google’s self-driving cars were first involved in accidents in which they were hit, and later actually caused a collision with a bus.
Aside from the technical challenges, questions have been raised about the ethics and morality that will be required in certain fatal situations. That area, too, is raising eyebrows – is it right to sacrifice the lives of some to save others?
The standards are already becoming morally complex. Google X’s Chris Urmson, the company’s director of self-driving cars, said the company was trying to work through some difficult problems. Where to turn – toward the child playing in the road or over the side of the overpass?
Google has come up with its own Laws of Robotics for cars: “We try to say, ‘Let’s try hardest to avoid vulnerable road users, and beyond that try hardest to avoid other vehicles, and then beyond that try to avoid things that don’t move in the world,’ and then to be transparent with the user that that’s the way it works,” Urmson said. (Source)
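The hierarchy Urmson describes can be sketched as a simple priority ordering. Everything below is an illustrative assumption for the sake of the argument — the class names, the scoring, and the maneuver structure are invented here, not taken from Google’s actual planner:

```python
# Illustrative sketch of the avoidance hierarchy Urmson describes:
# protect vulnerable road users first, then other vehicles, then
# things that don't move. All names and numbers are assumptions.

# Lower number = more protected class = worse to risk hitting.
PRIORITY = {"vulnerable_road_user": 0, "vehicle": 1, "static_object": 2}

def safety_score(risked_classes):
    """Score a maneuver by the most protected class it risks hitting.
    Higher score = safer choice under the hierarchy."""
    if not risked_classes:
        return 3  # risks nothing: best possible outcome
    return min(PRIORITY[c] for c in risked_classes)

def choose(maneuvers):
    """Pick the maneuver whose worst risked obstacle is least protected."""
    return max(maneuvers, key=lambda m: safety_score(m["risks"]))

options = [
    {"name": "swerve", "risks": ["vulnerable_road_user"]},  # score 0
    {"name": "brake",  "risks": ["static_object"]},         # score 2
]
print(choose(options)["name"])  # brake
```

The point of the sketch is how little it settles: the ordering only tells the car *which class* to avoid hardest, not what to do when every available maneuver risks the same class — which is exactly the overpass dilemma above.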
These incidents and dilemmas have thus far occurred during training and testing, which might mitigate some of the seriousness, but they nonetheless point to genuine flaws that should preclude these vehicles from being widely deployed.
As a quick aside, it’s essential to keep in mind that Isaac Asimov offered the world his “Three Laws of Robotics” in 1942; although they appeared in fiction, they have since been widely acknowledged within mainstream robotics, and in their original form they are of such simple perfection that there is really no excuse for the errors we are seeing:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
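What makes the Three Laws notable as engineering rather than fiction is that they form a strict priority ordering, with each law yielding to the ones above it. A toy encoding makes the structure explicit — the predicate names here are hypothetical, and the hard part in practice is that no deployed system can actually evaluate “harm to a human” this cleanly:

```python
# Toy encoding of Asimov's Three Laws as an ordered rule check.
# The boolean predicates are hypothetical placeholders; real robots
# cannot evaluate "harms a human" this cleanly, which is exactly
# where the systems described in this article are falling short.

def permitted(action):
    """Return True only if `action` passes the Three Laws in order."""
    # First Law: never injure a human. Overrides everything below.
    if action["harms_human"]:
        return False
    # Second Law: obey human orders, *except* where obeying would
    # conflict with the First Law (disobedience is then allowed).
    if action["disobeys_human_order"] and not action["order_would_harm_human"]:
        return False
    # Third Law: self-preservation, subordinate to the first two.
    if action["destroys_self"]:
        return False
    return True

# Refusing an order is permitted when the order itself would harm a human.
print(permitted({"harms_human": False,
                 "disobeys_human_order": True,
                 "order_would_harm_human": True,
                 "destroys_self": False}))  # True
```

Even this trivial filter would have vetoed the incidents that follow — a 300-pound robot that keeps rolling over a toddler, or a bomb-disposal robot repurposed to kill — which is the force of the “no excuse” point above.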
Nevertheless, artificial intelligence has already gained a foothold in all types of real-world service industries, as well as in security, even though it is no stretch to imagine it facing the very same challenges and dangers that have appeared with self-driving vehicles and other autonomous systems.
A marketing robot made the news for having “got lost,” leaving its staging area and wandering into a busy traffic intersection in Moscow, Russia. The robot’s owner downplayed the episode as “a little adventure,” but the fact is that an autonomous robot failed to keep its bearings and wound up posing a genuine threat to drivers.
The truth is that researchers are still in the process of developing foolproof sensor systems for robots to recognize their surroundings, yet they continue to be deployed into the real world. Case in point: the security robot.
Security robots have become a special area of interest for developers. Since at least 2013 there has been a worldwide initiative into robotic security focusing on prisons, care facilities, schools, and malls. However, a recent run-in with robotic mall security didn’t go well for one toddler at Stanford Shopping Center in California.
The parents of a young boy who got knocked down and run over by a security robot at Stanford Shopping Center want to get the word out to prevent others from getting hurt.
They said the machine is dangerous and fear another child will get hurt.
Stanford Shopping Center’s security robot stands 5′ tall and weighs 300 pounds.
It amuses shoppers of all ages, but last Thursday, 16-month-old Harwin Cheng had a frightening collision with the robot. “The robot hit my son’s head and he fell down facing down on the floor and the robot did not stop and it kept moving forward,” Harwin’s mom Tiffany Teng said.
“Maybe they have to work out the sensors more. Maybe it stopped detecting or it could be buggy or something,” shopper Ankur Sharma said.
Harwin’s parents say what’s even more worrisome is that a security guard told them another child was hurt from the same robot days before.
(Source) [emphasis added]
Here is that security robot in action:
But this still pales in comparison with what happens when security robots are given weapons … which is exactly what happened in the recent Dallas shooting.
As Claire Bernish reports in her article “Decision to Blow up US Citizen With Robot Was Improvised in Less Than 20 Minutes,” a robot previously used for the benign task of reaching out to a suicidal man was modified into an instrument of death, equipped with a pound of C-4 explosive:
Manufactured by the military-industrial complex’s darling, Northrop Grumman, this tactical robot “is driven by a human via remote control, weighs 790 pounds and has a top speed of 3.5 mph,” as the Washington Post described. “It carries a camera with a 26x optical zoom and 12x digital zoom. When its arm is fully extended, it can lift a 60-pound weight. The ‘hand’ at the end of the arm can apply a grip of about 50 pounds of force.”
Interestingly enough, the $151,000 tactical robot provided a far more life-affirming service just one year ago.
According to Metro UK, the same model once assisted the California Highway Patrol when negotiations with a man threatening to kill himself by jumping from a San Jose overpass failed—by delivering a pizza.
In just one year, a pizza-delivering robot with the potential to save human life during bomb threats or similar situations became a casually-deployed, due process-stripping weapon of war against a U.S. citizen.
We are now left to wonder what more is on the horizon as artificial intelligence advances to the point where humans feel compelled to take the leash off this technology entirely, even in war and domestic security.
China is already prepared to unleash its robot army as a means of combating terrorism. And the United Nations now warns that terrorists could very well establish their own killer robot armies … which will presumably require us to counter with even more lethal robotics in yet another arms race.
It appears we may be coming full circle in this march toward technology-provided security: at the very root of the drive will always be a human component. Until we solve the issues surrounding our own misapplication of morals and ethics (our real military, police, and security forces injure and kill other humans, after all), we may be destined only to produce a more lethal upgrade of ourselves.