Saturday, May 17, 2014

Ban on Terminator Robots Postponed at United Nations Convention

Update on the move to ban killer robots.

image: NASA
Nicholas West
Activist Post

Ethics is very often the last concern of science, especially in military endeavors.

Drones and robots are finally becoming front-page news after a series of warnings from prominent scientists and researchers who are beginning to see the darker side of what is being unleashed upon humanity.

These warnings led the U.S. military itself to seek out more information about creating moral, ethical robots that could thwart any potential for runaway assassin robots within its ranks.

The uptick in concern culminated with the United Nations recently holding a four-day Convention to air the issue further and gather comments from those in favor of autonomous robots as well as from those opposed. At the end of the Convention, countries were able to vote on a pre-emptive ban of this technology.

Weaponized drones are proliferating across the planet at a rapid pace, which has led military researchers to conclude that all countries will have armed drones within 10 years. Coupled with this are advancements in robotics and artificial intelligence that aim to give life and autonomy to our robotic creations. There is even a movement afoot in artificial intelligence to introduce survival of the fittest to robots, in an effort to create a rival to nature.

What are the rules in a robot/human society? 

Human rights organizations, non-profit groups, and even some universities like Cambridge have been vocal for some time about the threat of "terminator robots." They have largely been shouted down by the corporate-military complex as Luddites who just can't comprehend the wonders of science and the vast potential of cooperating with and/or merging with machines. Futurists such as Ray Kurzweil, a director of engineering at Google, only see an inevitable transcendental age of Spiritual Machines where the next stage of human evolution increasingly incorporates a mechanized component to strengthen resilience and perhaps even provide immortality.



This wave of new technology has already arrived in the medical field with DNA nanobots, the creation of synthetic organisms, and other genies waiting to escape the bottle. These developments represent a fundamental transformation in our relationship to the natural world and must be addressed with the utmost application of the precautionary principle.

So far that has not happened, but prominent scientists such as Stephen Hawking and those who work in the field of artificial intelligence are beginning to speak out about another side to these advancements that could usher in "unintended consequences." 

The military was forced to respond. 
The US Department of Defense, working with top computer scientists, philosophers, and roboticists from a number of US universities, has finally begun a project that will tackle the tricky topic of moral and ethical robots. This multidisciplinary project will first try to pin down exactly what human morality is, and then try to devise computer algorithms that will imbue autonomous robots with moral competence — the ability to choose right from wrong. As we move steadily towards a military force that is populated by autonomous robots — mules, foot soldiers, drones — it is becoming increasingly important that we give these machines — these artificial intelligences — the ability to make the right decision. Yes, the US DoD is trying to get out in front of Skynet before it takes over the world. How very sensible. 
This project is being carried out by researchers from Tufts, Brown, and the Rensselaer Polytechnic Institute (RPI), with funding from the Office of Naval Research (ONR). ONR, like DARPA, is a wing of the Department of Defense that mainly deals with military R&D. 
[...]
Eventually, of course, this moralistic AI framework will also have to deal with tricky topics like murder. Is it OK for a robot soldier to shoot at the enemy? What if the enemy is a child? Should an autonomous UAV blow up a bunch of terrorists? What if it’s only 90% sure that they’re terrorists, with a 10% chance that they’re just innocent villagers? What would a human UAV pilot do in such a case — and will robots only have to match the moral and ethical competence of humans, or will they be held to a higher standard?
[...]
The commencement of this ONR project means that we will very soon have to decide whether it’s okay for a robot to take the life of a human...
(Source)
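
To make the quoted questions concrete, here is a minimal sketch, in Python, of the kind of confidence-threshold rule such an algorithm would have to encode. The function name, the 99% threshold, and the zero-tolerance harm limit are hypothetical choices made purely for illustration - they are not drawn from the ONR project - and picking those numbers is precisely the moral judgment in dispute.

```python
# Toy sketch only: none of these names or numbers come from the ONR project.
def authorize_strike(p_hostile: float,
                     expected_civilian_harm: float,
                     confidence_threshold: float = 0.99,
                     max_civilian_harm: float = 0.0) -> bool:
    """Permit an engagement only if identification confidence clears the
    threshold AND expected harm to non-combatants stays within the limit."""
    if p_hostile < confidence_threshold:
        return False
    if expected_civilian_harm > max_civilian_harm:
        return False
    return True

# The quoted example: 90% sure they are terrorists, 10% chance they are villagers.
print(authorize_strike(p_hostile=0.90, expected_civilian_harm=0.10))  # False under these settings
```

Whether a human pilot would be held to the same threshold, or a lower one, is exactly the question the excerpt raises.
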
Problem-Reaction-Solution?

One could argue that assigning the military to be the arbiter of what morality is might be the ultimate oxymoron. Moreover, this has all of the trappings of the drone "problem" where unchecked proliferation is now being "solved" by the very same entities who see the only solution as increased proliferation, but with a bit more discretion.

So far, the military-industrial complex has spent countless millions to create an ever-increasing catalog of humanoid robots and the artificial intelligence to equip them with decision-making capability, not to mention a fleet of drones that could begin to swarm on its own. It's highly unlikely that this trend will be reversed.

Furthermore, we need transparency about who the ethicists are who will be giving guidance about morality. The fact that someone calls themselves an ethicist, or holds a title from a major university, does not rule out psychopathy. For just one example, please read this article about a university ethicist who believes in life extension only as a means to offer eternal torment to those deemed by the justice system to be the very worst criminals. Imagine handing robots full power to make that kind of decision.

Nevertheless, the discussion is finally out in the open. The subject of killer robots made its way to the world stage at the United Nations in Geneva from May 13-16. Most notably, two experts in the field of robotics, Prof. Ronald Arkin and Prof. Noel Sharkey, debated the concept of autonomous - and potentially killer - robots. Eighty-seven countries were present at the UN Convention on Certain Conventional Weapons, the body that addresses the implementation of international controls on weapons that "cause unnecessary or unjustifiable suffering to combatants or... affect civilians indiscriminately."

Proponents tend to envision robots being used in relief efforts, put into highly dangerous situations such as defusing explosives, or sent to conduct surveillance in the most hostile areas. This is already happening, and as the technology has advanced, their presence on the battlefield working alongside soldiers has become common - so much so that one study concluded that military members can develop emotional bonds with robots. This raises a range of issues, but at its core is the question of how far we are going to permit robotic intelligence to flourish. Will robots always need approval from a human to conduct their missions? Apparently Russia is one country at the forefront of letting robots act independently of human control:
Last month, Russia announced a new mobile robot guard designed to gun down trespassers at ballistic missile bases. The twist? They don’t need human permission. The mobile robot, which resembles a gun-mounted tank, can be set to patrol an area and fire on anything it identifies as a target. The announcement came shortly after Russia’s deputy prime minister had called on the military to develop robots with artificial intelligence that can "strike on their own." (Source)
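
For contrast, here is a minimal sketch of the policy distinction at stake in that announcement: a human-in-the-loop gate versus a fully autonomous engagement mode. The names below are invented for illustration and are not taken from any real weapon system.

```python
from enum import Enum

class EngagementPolicy(Enum):
    HUMAN_IN_THE_LOOP = 1   # a person must approve every engagement
    AUTONOMOUS = 2          # fires on anything it identifies as a target

def request_operator_approval(target_id: str) -> bool:
    # Placeholder for a human operator console; denies by default here.
    print(f"Approval requested for target {target_id}")
    return False

def may_engage(target_id: str, identified_as_target: bool,
               policy: EngagementPolicy) -> bool:
    if not identified_as_target:
        return False
    if policy is EngagementPolicy.HUMAN_IN_THE_LOOP:
        return request_operator_approval(target_id)
    return True  # the autonomous setting removes the human from the decision

print(may_engage("T-01", True, EngagementPolicy.HUMAN_IN_THE_LOOP))  # False without approval
print(may_engage("T-01", True, EngagementPolicy.AUTONOMOUS))         # True
```
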
These are exactly the types of developments that the Convention was supposed to address. Incredibly, of the 87 countries attending, only five were in favor of applying the precautionary principle and pre-emptively banning the technology until more information could be gathered and more debate undertaken. Instead, the can was kicked down the road to a future meeting of 117 nations in November. That might sound like a quick turnaround, but this technology is advancing exponentially as we speak.

The five countries calling for a ban - Cuba, Ecuador, Egypt, the Vatican, and Pakistan - were adamant about the need for an immediate prohibition. Others, including France, Germany, and the UK, urged stronger caution but would not immediately call for a pre-emptive ban. Predictably, the countries most responsible for the introduction and proliferation of autonomous weapons systems - the United States, Russia, Israel, and China - were either silent on the issue or downplayed the potential risk.

And the global arms race continues.

Naturally, there should be concern that U.N. involvement could be a convenient way to internationalize robotics efforts, much as drone treaties have been proposed - treaties that would only serve to put the U.S. in a position to dictate to all other countries. Nevertheless, amid all of the other contrived "global efforts," runaway technology presents a very real threat and could certainly spill across borders, intentionally or not. Now that the subject has appeared on the global stage, there is at least one positive step toward mass awareness. More debate is certainly called for ... and quickly.

Let's hope it is not already too late. Now is probably the last chance to learn as much as possible about what is being established, to share it with family and friends, and to become engaged. It is not hyperbole to suggest that this is humanity's final opportunity to remain fully human ... and in charge.

Updated 5/17/2014 


18 comments:

Anonymous said...

So wait, they're going to try to find out what human morality is and then try to apply it? What if we disagree with the "experts" on what human morality is? Why are we going to let "experts" again dictate what is and isn't moral? Any expert that is given power to determine what is and isn't human or morally acceptable is a threat in my opinion because it will be used to eventually dictate to the general public what is and isn't moral.... They should already know, we have codes, laws and charters and in the US the Constitution to tell what was and wasn't acceptable (our rights and freedoms), but they have been twisting it over time, eroding all those protections and riding roughshod over everyone and now they have to figure out what morality is in terms of robots?!?!? If they can't abide by rights and freedoms for humans (they keep coming up with excuses to take them away by force) they have no moral compass to begin with! Don't give them further power over this mess!

Anonymous said...

This is like asking a wolf whether it's okay to eat meat. We have never had any safeguards over psychopaths like Kurzweil who, by virtue of their absolute lack of a moral compass, declare themselves the arbiters of the future of humanity, and criticize anyone with the temerity to question them. Instead we let them run amok with their futuristic visions as if we had no say so in the matter, so we don't. It's out of our hands, just like the banking cartels, nuclear weapons, and all the other horrors inflicted on the world by psychopaths because sane and moral people cannot comprehend the depth of their insanity.

Aude Sapere said...

Start with this: Isaac Asimov's "Three Laws of Robotics"

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
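
A minimal sketch of the strict precedence these three laws describe - an action is judged against the First Law first, then the Second, then the Third. The Action fields below are invented purely for illustration; this shows the ordering, not a workable safety system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    harms_human: bool            # injures a human, or lets one come to harm through inaction
    disobeys_human_order: bool   # fails to obey an order given by a human
    endangers_self: bool         # fails to protect the robot's own existence

def first_violated_law(action: Action) -> Optional[int]:
    """Return the highest-priority law the action violates, or None if it violates none."""
    if action.harms_human:
        return 1
    if action.disobeys_human_order:
        return 2
    if action.endangers_self:
        return 3
    return None

# Refusing an order that would endanger the robot itself: Law 2 outranks Law 3.
print(first_violated_law(Action(harms_human=False, disobeys_human_order=True, endangers_self=True)))  # 2
```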

Anonymous said...

An attempt to bring ethics to the global corporatocracy was started in the early 1990s, and went nowhere because mega corporations shut it down. Now we have the US government spending billions, again, with their favorite puppets, those for sale at America's colleges and "thinktanks". Gee, this is just another charade for transferring even more money with no potential for a humanistic solution. If these power mongers would include Oxfam, Mothers Against Drunk Driving, and various human rights organizations in their scheming, one might be willing to imagine that the project had a serious purpose. No, this is all just one more giant cover-up for yet another expensive project that is meant to make the 99% poorer. The only thing that will ever stop the rampant violence is when 10 other countries wield the same power and the US is forced to sign a treaty. Gee, now wasn't that a logical outcome.

Anonymous said...

That field of research is called Friendly AI Research. If you people are interested in it, I wrote an introduction to it here: http://www.reddit.com/r/Futurology/comments/19zaa6/introduction_to_friendly_ai_research_summary_of/
It's a summary of a paper by the Machine Intelligence Research Institute, which is probably the largest group currently working on the problem of how to ensure future advanced AI is beneficial and safe for humanity instead of bringing about our eventual extinction.

https://en.wikipedia.org/wiki/Friendly_artificial_intelligence
https://www.youtube.com/watch?v=CK5w3wh4G-M

@Aude Sapere: here's why that won't work: http://io9.com/why-asimovs-three-laws-of-robotics-cant-protect-us-1553665410

Anonymous said...

those who think you can control ANY ASPECT of a smarter entity than humans (which A. I. will achieve by approx 2030 according to kurzweil) are insane. there is no way. just think about the rate of human intelligence increase over last hundred years, and compare to ai intelligence trajectory. humans are arrogant enough to think they can control this. After the elites of the world de-populate the masses for their agenda 21 - 500 million world population goal, this will be THEIR extinction. its inevitable.

Anonymous said...

Didn't you see the Will Smith movie 'I, Robot'? Wrong.

Adam Evenson said...

Humans are not in control of much, anyway. Never have been. Human control is really a delusion. Human control consists of a mass of accidental anomalies, many of which find their way to the fore. If any human being is controlling anything, it should be me, as I have all the qualifications and then some, but alas, I am just barely in control of my bowels, thank God. All these "great, powerful men and women" who are continually bandied about in the press are little different than I am. The main differences 'twixt us lie in the claims, not in the realities. Individuals live lives of accidental happenings that juxtapose with other accidental happenings in which results come out, good or bad, that nobody really planned. Then somebody always steps up to take credit for the accomplishment, whatever it be.

Name said...

Present time armies and government have no morals - they lie and kill, that is enough proof. If the pentagram or any other state machine wishes to study ethics or morality with respect to robots you can be sure it is within their framework of evil. IE they want more control of any possible artificial intelligence - they wouldn't want a killer drone or robot disobeying orders so they will want to know how to take care of such circumstances and preferably in advance, within the software used by the drone. God is opposed to all violence and so should you be. - "All day long they injure my cause, all their thoughts are against me for evil."

Anonymous said...

What happens when the robots realize we humans are unethical pieces of crap?

"The Cylons were created by man. They Rebelled. They Evolved. They look and feel Human. Some are programmed to think they are human. There are many copies. And they have a plan."

Anonymous said...

Come on! We can't even create laptops that don't crash! I don't think we need to worry about Skynet's robot overlords...

Anonymous said...

Technology is the weapon of choice for virtuous, noble white people against the satanic hordes of non-white human scum.....always has been...from the invention of superior metals to allow skilled swordmen a razors edge, more than a match for any two or even three violent subhuman attackers to the (especially) the invention of the modern day firearm. One day it will disintegrator beams and personal protection hovering death orbs. We understand the tech. They cannot. A permanent advantage of the civilized over the monsters of the world that only increases with time.

Terminator robots? They are on our side.

Anonymous said...

I must assume that the above comment is drenched in sarcasm. "Noble White People," along with their advanced technology, may end up invoking earth's final days.

Anonymous said...

You guys are really going to have to warn us about your comic content. I am sitting here eating and reading and I almost spit up my food in laughter when I read the comment about the military creating something 'moral'. Those who are the possessors of abject moral squalor are not equipped for such things.

And if Stephen Hawking thinks that the consequences would be unintended, either he is ONE OF THEM, or a complete blithering idiot. I think he is BOTH.

Anonymous said...

“Experts Urge Ban on Building Terminator Robots Before It’s Too Late”
http://investmentwatchblog.com/experts-urge-ban-on-building-terminator-robots-before-its-too-late/

Look, the 1% (International Central Bankers, and their allies – corporatists, globalists) have two stated primary goals: 1) Global debt-enslavement for every human being and country on the entire planet, and 2) Global depopulation, to usher in their new “Golden Age”, in which the very few (.01%) will rule over a vastly reduced human population of slaves on their global plantation. There is NOTHING – apparently – that is going to stop these psychopaths from doing whatever they think will accomplish their goal.

Anonymous said...

of course they banned the robots. they need to be converted to the trafficking in white slavery model.

Anonymous said...

Robots have been taking away jobs and lives for years already.
The next step would be for robots to try and replace God.
UFOs are the perfect example of what happens to those who play God and end up in hiding.

Ellipseer

Anonymous said...

I must agree with anonymous 4:31 --There is NO-WAY you can create a superior being to humans WITHOUT the certainty of COMPLETE destruction for ALL humans. I can guarantee you that A.I. will find ALL humans non evolved and that their destruction is MANDATORY! It is inevitable!
And as far as the military deciding what constitutes MORAL CONDUCT? This will be one of the main reasons why A.I. will determine humans unfit for living! It is because it is an oxymoron, which flies in the face of LOGIC -- something A.I. WILL NOT accept OR tolerate! But by the time these humans realize this it will be TOO LATE!
