Ban on Terminator Robots Postponed at United Nations Convention

Update on the move to ban killer robots.

Image: NASA

Nicholas West
Activist Post

Ethics is very often the final concern of science, especially where military endeavors are involved.

Drones and robots are finally becoming front-page news after a series of warnings from prominent scientists and researchers who are beginning to see the darker side of what is being unleashed upon humanity.

These warnings led the U.S. military itself to seek out more information about creating moral, ethical robots that could thwart any potential for runaway assassin robots within its ranks.

The uptick in concern culminated with the United Nations recently holding a four-day Convention to air the issue and gather comments from those in favor of autonomous robots as well as those opposed. At the end of the Convention, countries were able to vote on a pre-emptive ban of this technology.

Weaponized drones are proliferating across the planet at a rapid pace, which has led military researchers to conclude that all countries will have armed drones within 10 years. Coupled with this are advancements in robotics and artificial intelligence that aim to give autonomy, and something approaching life, to our robotic creations. There is a movement afoot in artificial intelligence that is even introducing survival of the fittest to robots in an effort to create a rival to nature, as sketched below.
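
For readers unfamiliar with how “survival of the fittest” gets applied to machines, the sketch below shows the bare skeleton of an evolutionary algorithm in Python: a population of candidate robot controllers is scored, the fittest survive, and mutated copies of the survivors fill the next generation. Everything in it (the fitness function, the parameters, the names) is a hypothetical toy for illustration, not drawn from any specific research program.

    import random

    def evolve(fitness, genome_len=8, pop_size=20, generations=50):
        """Minimal evolutionary loop: score, select, mutate, repeat."""
        # Start with a random population of candidate "genomes"
        # (think: tunable parameters for a simulated robot controller).
        population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # Score every candidate and keep only the fittest half.
            population.sort(key=fitness, reverse=True)
            survivors = population[:pop_size // 2]
            # Refill the population with mutated copies of the survivors.
            children = [[gene + random.gauss(0, 0.1)
                         for gene in random.choice(survivors)]
                        for _ in range(pop_size - len(survivors))]
            population = survivors + children
        return max(population, key=fitness)

    # Toy objective: evolve a genome whose values sum close to 4.0.
    best = evolve(lambda genome: -abs(sum(genome) - 4.0))
    print(sum(best))  # approaches 4.0 over the generations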

What are the rules in a robot/human society? 

Human rights organizations, non-profit groups, and even some universities like Cambridge have been vocal for some time about the threat of “terminator robots.” They have largely been shouted down by the corporate-military complex as Luddites who just can’t comprehend the wonders of science and the vast potential of cooperating with and/or merging with machines. Futurists such as Ray Kurzweil, a director of engineering at Google, only see an inevitable transcendental age of Spiritual Machines where the next stage of human evolution increasingly incorporates a mechanized component to strengthen resilience and perhaps even provide immortality.

This wave of new technology has already arrived in the medical field with DNA nanobots, the creation of synthetic organisms, and other genies waiting to escape the bottle. These developments represent a fundamental transformation in our relationship to the natural world and must be addressed with the utmost application of the precautionary principle.

So far that has not happened, but prominent scientists such as Stephen Hawking, along with those who work in the field of artificial intelligence, are beginning to speak out about another side of these advancements that could usher in “unintended consequences.”

The military was forced to respond. 

The US Department of Defense, working with top computer scientists, philosophers, and roboticists from a number of US universities, has finally begun a project that will tackle the tricky topic of moral and ethical robots. This multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — the ability to choose right from wrong. As we move steadily towards a military force that is populated by autonomous robots — mules, foot soldiers, drones — it is becoming increasingly important that we give these machines — these artificial intelligences — the ability to make the right decision. Yes, the US DoD is trying to get out in front of Skynet before it takes over the world. How very sensible. 

This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military R&D. 

[…]

Eventually, of course, this moralistic AI framework will also have to deal with tricky topics like murder. Is it OK for a robot soldier to shoot at the enemy? What if the enemy is a child? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans, or will they be held to a higher standard?

[…]

The commencement of this ONR project means that we will very soon have to decide whether it’s okay for a robot to take the life of a human…

(Source)
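
The 90-percent question in that excerpt is, at bottom, a problem of decision-making under uncertainty, and a few lines of arithmetic show where the real choices hide. The sketch below is purely illustrative: the cost weights are numbers invented for this example, not anything the ONR project has published.

    def should_engage(p_hostile, cost_miss=1.0, cost_innocent=100.0):
        """Naive expected-cost rule for the scenario in the excerpt.

        p_hostile     -- estimated probability the target is hostile
        cost_miss     -- assumed cost of letting a real threat go
        cost_innocent -- assumed cost of harming innocents (invented weight)
        """
        expected_cost_of_engaging = (1 - p_hostile) * cost_innocent
        expected_cost_of_holding = p_hostile * cost_miss
        return expected_cost_of_engaging < expected_cost_of_holding

    # The excerpt's scenario: 90% sure hostile, 10% chance of villagers.
    # Weighting innocent life 100x, even 90% certainty says hold fire:
    print(should_engage(0.9))                     # False (10 > 0.9)
    # Weight innocent life only 5x and the very same rule flips:
    print(should_engage(0.9, cost_innocent=5.0))  # True (0.5 < 0.9)

Note what such a rule exposes: the “moral” outcome is decided entirely by cost weights a human chose in advance. Whoever sets those weights sets the ethics, which is precisely the concern raised below.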

Problem-Reaction-Solution?

One could argue that assigning the military to be the arbiter of morality might be the ultimate oxymoron. Moreover, this has all the trappings of the drone “problem,” where unchecked proliferation is now being “solved” by the very entities that see the only solution as increased proliferation, but with a bit more discretion.

So far, the military-industrial complex has spent countless millions to create an ever-increasing catalog of humanoid robots and the artificial intelligence to equip them with decision-making capability, not to mention a fleet of drones that could begin to swarm on its own. It’s highly unlikely that this trend will be reversed.

Furthermore, we need transparency about who the ethicists are that will be helping to give guidance about morality. Calling oneself an ethicist, or holding a title from a major university, does not rule out psychopathy. For just one example, please read this article about a university ethicist who believes in life extension only as a means to offer eternal torment to those deemed by the justice system to be the very worst criminals. Imagine handing robots full power to make that decision.

Nevertheless, the discussion is finally on the table out in the open. The subject of killer robots made its way to the world stage at the United Nations in Geneva from May 13-16. Most notably, two experts in the field of robotics, Prof Ronald Arkin and Prof Noel Sharkey, debated the concept of autonomous – and potentially killer – robots. Eighty-seven countries were present at the UN Convention on Certain Conventional Weapons, the body that addresses the implementation of international controls on weapons that “cause unnecessary or unjustifiable suffering to combatants or… affect civilians indiscriminately.”

Proponents tend to see robots used in relief efforts, put into highly dangerous situations such as defusing explosives, or sent to conduct surveillance in the most hostile areas. This is already happening, and as the technology has advanced, robots working among soldiers on the battlefield have become common. So much so that one study concluded that military members can develop emotional bonds with robots. This brings up a range of issues, but at its core is the question of how far we are going to permit robotic intelligence to flourish. Will robots always need approval from a human to conduct their missions? Apparently Russia is one country at the forefront of giving robots the chance to act independently of human control:

Last month, Russia announced a new mobile robot guard designed to gun down trespassers at ballistic missile bases. The twist? They don’t need human permission. The mobile robot, which resembles a gun-mounted tank, can be set to patrol an area and fire on anything it identifies as a target. The announcement came shortly after Russia’s deputy prime minister had called on the military to develop robots with artificial intelligence that can “strike on their own.” (Source)
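
The difference between a system like that and a conventionally supervised weapon comes down to a single step in the control loop: whether a fire decision must pass through a human. The sketch below is a hypothetical illustration of that one distinction and describes no real weapon’s software.

    from dataclasses import dataclass

    @dataclass
    class Target:
        identified_hostile: bool  # output of some onboard classifier

    def autonomous_fire(target: Target) -> bool:
        """Fully autonomous mode: engages anything classified hostile."""
        return target.identified_hostile

    def human_in_loop_fire(target: Target, ask_operator) -> bool:
        """Supervised mode: the classifier only nominates a target;
        a human operator must confirm before any engagement."""
        return target.identified_hostile and ask_operator(target)

    # The entire policy debate lives in that one extra call:
    target = Target(identified_hostile=True)
    print(autonomous_fire(target))                      # True: no human gate
    print(human_in_loop_fire(target, lambda t: False))  # False: operator refused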

These types of developments are exactly what was supposed to be addressed at the Convention. Incredibly, of the 87 countries attending the gathering, only 5 were in favor of applying the precautionary principle and pre-emptively banning the technology until more information could be gathered and more debate could be undertaken. Instead, the can was kicked down the road to a future meeting of 117 nations in November. That might sound like a quick turnaround, but this technology is advancing exponentially as we speak.

The five countries favoring a ban – Cuba, Ecuador, Egypt, the Vatican, and Pakistan – were adamant about the need for an immediate halt. Others, including France, Germany, and the UK, urged stronger caution but would not immediately call for a pre-emptive ban. Predictably, the countries most responsible for the introduction and proliferation of autonomous weapons systems – the United States, Russia, Israel, and China – were either silent on the issue or downplayed the potential risk.

And the global arms race continues.

Naturally, there should be concern that U.N. involvement could be a convenient way to internationalize robotics efforts in the same way that drone treaties have been proposed, which would only serve to put the U.S. in the lead to dictate to all countries. Nevertheless, amid all of the other contrived “global efforts,” runaway technology presents a very real threat and could certainly spill across borders, intentionally or not. Now that the issue has appeared on the global stage, at least one positive step has been taken toward mass awareness. More debate is certainly called for … and quickly.

Let’s hope it is not already too late. Now is probably the last chance to learn as much as possible about what is being established, to share this with family and friends, and to become engaged. It is not hyperbole to suggest that this is humanity’s final opportunity to remain fully human … and in charge.

Updated 5/17/2014 
