One could debate the overall merits or failings of robotic systems, but an area that has clearly become a point of concern on all sides is the emergence of “killer robots.” According to robotics pioneer David Hanson, we are on a collision course with exponential growth in computing and technology that might give us only a “few years” to counter this scenario.
Even mainstream tech luminaries like Elon Musk and Stephen Hawking have proclaimed the magnitude of the threat following a reading of Nick Bostrom’s book Superintelligence. Musk flatly stated that robots infused with advanced artificial intelligence are “potentially more dangerous than nukes.”
Robot-controlled missiles are being developed in Norway, and could easily be a path toward that ultimate danger.
Cambridge University’s Centre for the Study of Existential Risk has identified “terminators” as one of the greatest threats to mankind. Human rights groups and concerned citizens have echoed this with a Campaign to Stop Killer Robots. This campaign has even garnered support from one Canadian robot manufacturer, who penned an open letter to his colleagues urging them not to delve into the dark side of robotics. Meanwhile, in the background, the United Nations and the U.S. military have been forced into the debate, but are currently stutter-stepping their way through the process.
As the clock ticks, there is a global drone and artificial intelligence arms race as nations seek to catch up to those who have taken the lead. Perhaps as one sign of how fast and far this proliferation extends, seemingly peaceful and neutral Norway is making the news, as a debate rages there over plans to have artificial intelligence take over missile systems on its fighter jets. In reality, Norway is a large exporter of weapons, which makes the resolution of this debate an important issue for everyone.
There is a general move to augment traditional weapons systems with artificial intelligence. A recent post from the U.S. Naval Institute News stated that “A.I. is going to be huge” by 2030. A.I. is foreseen “as a decision aid to the pilot in a way similar in concept to how advanced sensor fusion onboard jets like the F-22 and Lockheed Martin F-35 work now.”
This is the first step toward removing human decision making. According to the Norwegian press, the government states this explicitly:
The partially autonomously controlled missiles, or so-called “killer robots”, will be used for airborne strikes by Norway’s new fighter jets and have the ability to identify targets and make decisions to kill without human interference. (emphasis added)
Interference?! What, like compassion and having a conscience? Make no mistake: at the highest levels, erasing those human qualities is exactly what is intended. This was highlighted in a recent report that certainly has military elites across the planet concerned. It seems that the distance they have attempted to create by putting humans in cubicles thousands of miles from the actual battlefield still has not alleviated the emotional impact of killing. Apparently it’s not a video game after all.
Although drone operators may be far from the battlefield, they can still develop symptoms of post-traumatic stress disorder (PTSD), a new study shows.
About 1,000 United States Air Force drone operators took part in the study, and researchers found that 4.3 percent of them experienced moderate to severe PTSD.
That percentage might seem low, but the consequences can be profound. Listen to what one former drone operator has to say about his role in 1,600 killings from afar.
The military system sees this reaction as a defect.
“I would say that, even though the percentage is small, it is still a very important number, and something that we would want to take seriously so that we make sure that the folks that are performing their job are effectively screened for this condition and get the help that they [may] need,” said study author Wayne Chappelle, a clinical psychologist who consults for the USAF School of Aerospace Medicine at Wright Patterson Air Force Base in Dayton, Ohio.
Consequently, the U.S. military has been pursuing “moral robots” that can supposedly discern right from wrong, probably to alleviate the growing pushback from those wondering what happens – legally and ethically – to a society that transfers responsibility from humans to machines. Some research indicates that robotics and A.I. are not yet up to even the most basic ethical tasks, yet their role in weapons systems continues to grow.
The situation in Norway is very similar to that in the U.S.: peace activists are alarmed, the military-industrial complex wants to push ahead, and local government and U.N. oversight dither:
The Norwegian Peace League (Norges Fredslag), for one, believes the technology may violate international law and wants a parliamentary debate about the move.
Alexander Harang of the Norwegian Peace League demands discussion. He is also a member of the international Campaign to Stop Killer Robots.
Harang believes that a discussion is highly relevant before the final development of the Joint Strike Missile (JSM) made by Norway’s Kongsberg Gruppen. These missiles will be part of the weaponry of the Norwegian Armed Forces’ new fighter jet, the Joint Strike Fighter.
Harang said he contacted all the political parties this spring in order to get a debate in Parliament on the potential consequences, under international law, of developing more autonomously controlled weapons. This was after the government decided Kongsberg Gruppen would get 2.2 billion kroner ($330 million) more to develop the missile.
Such a debate never took place.
Christof Heyns, a UN special rapporteur, is also concerned about such weapons of the future.
Heyns said: “We have seen during the last decade that the distance between the soldier and the target has increased. But what we see now is that the weapon becomes the warrior.”
Ronny Lie (you just can’t make up the accuracy of the name – Ed.), communications director at Kongsberg Gruppen, wrote in an email to NTB: “Remotely controlled solutions for demanding civilian and military tasks have become increasingly important in recent years. The Norwegian high-tech industry needs to join this development.”
Lie stated that his company follows the rules and regulations set by the relevant authorities and that it is not within its mandate to consider any challenges related to international law.
Norway’s Minister of Defence, Ine Eriksen Søreide, believes it would serve no purpose to introduce a temporary ban on developing deadly robots.
MP Kåre Simensen challenged Søreide as to what Norway should do when new weapon technology challenges international law.
Søreide replied: “No technology has currently been developed that would fall under such a description.”
Søreide agreed that a greater degree of military robotization would raise complex questions. However, she rejected the suggestion from Christof Heyns to introduce a temporary prohibition on the development of deadly robots until new rules have been established.
The Minister of Defence assured her critics that Norway adheres to the rules in place under the UN Convention on Certain Conventional Weapons (CCW).
So, what do we do when militaries across the world are unlikely to constrain their own profits and power, and international oversight agencies continue to show that they are ineffectual and/or complicit?
Unlike many of the threats we are given each day to digest as the next possible extinction-level event (ISIS and Ebola come to mind), this one is much more likely to be the real deal.
What do you think?
Recently by Nicholas West: