What Could Go Wrong? U.S. Army Planning to Deploy Autonomous Killer Robots on Battlefield by 2028

By Jay Syrmopoulos

Washington, D.C. – United States Army Secretary Mark Esper recently revealed that the military has a strategic vision of utilizing autonomous and semi-autonomous unmanned vehicles on the battlefield by 2028.

“I think robotics has the potential of fundamentally changing the character of warfare. And I think whoever gets there first will have a unique advantage on the modern battlefield,” Esper said during a Brookings Institution event.

“My ambition is by 2028, to begin fielding autonomous and certainly semi-autonomous vehicles that can fight on the battlefield,” he added. “Fight, sustain us, provide those things we need and we’ll continue to evolve from there.”

In a preview of the U.S. Army’s strategic vision, released on June 6, Esper said the integration of these systems would become a critical strategic component, and quoted from the document:

The Army of 2028 will be able to deploy, fight, and win decisively against any adversary, anytime, and anywhere … through the employment of modern manned and unmanned ground combat systems, aircraft, sustainment systems, and weapons.

When Esper was reportedly asked about concerns regarding autonomous robots being a threat to humanity, he replied in jest, “Well, we’re not doing a T-3000 yet,” referencing the Terminator movie series about self-aware AI threatening the existence of humanity.

Of course, while Esper jokes about the threat of autonomous killer robots, polymath inventor Elon Musk clearly takes such a threat much more seriously, as evidenced by his comments at the South by Southwest (SXSW) conference and festival on March 11, where he said that “AI is far more dangerous than nukes.”

“I’m very close to the cutting edge in AI and it scares the hell out of me,” Musk told the SXSW crowd. “Narrow AI is not a species-level risk. It will result in dislocation… lost jobs… better weaponry and that sort of thing. It is not a fundamental, species-level risk, but digital super-intelligence is.”

“I think the danger of AI is much bigger than the danger of nuclear warheads by a lot. Nobody would suggest we allow the world to just build nuclear warheads if they want, that would be insane. And mark my words: AI is far more dangerous than nukes,” Musk added.

As The Free Thought Project reported last month, the Pentagon reportedly plans to spend more than $1 billion over the next few years developing advanced robots for military applications that are expected to complement soldiers on the battlefield, and potentially even replace some of them.

While the Army’s development of this tech sounds like a move toward the “better weaponry” Musk described, rather than a digital super-intelligence, the creation of fully autonomous unmanned weapons systems clearly has implications should some type of “digital super-intelligence” ever emerge.

Esper attempted to allay fears by noting that the Army’s unmanned vehicle program would be akin to the Air Force’s use of Predator drones, and clarified that the idea would be to protect soldiers by removing them from direct combat. In turn, he said, this would enhance tactical ability and mobility, and pave the way for cheaper tanks, since a vehicle without a crew inside needs far less protection.

However, due to the complexity of the modern battlefield, a human element would remain part of the process.

“In my vision, at least, there will be a soldier in the loop. There needs to be. The battlefield is too complex as is,” Esper said.

The qualifier in Esper’s statement, “In my vision, at least…,” leaves plenty of ambiguity; it implies competing visions that almost certainly include the use of autonomous systems without a “soldier in the loop.”

During his SXSW commentary, Musk noted that rapid advancements in artificial intelligence are far outpacing regulation of the burgeoning technology, thus creating a dangerous paradigm. He explained that while he is usually against governmental regulation and oversight, the potentially catastrophic implications for humanity create a need for regulation.

“I’m not normally an advocate of regulation and oversight,” Musk said. “There needs to be a public body that has insight and oversight to confirm that everyone is developing AI safely.”

While some experts in the field have attempted to dismiss the threat posed to humanity by the development of AI, Musk said these “experts” are victims of their own delusions of intellectual superiority over machines, calling their thought process “fundamentally flawed.”

“The biggest issue I have with AI experts… is that they think they’re smarter than they are. This tends to plague smart people,” Musk said. “They’re defining themselves by their intelligence… and they don’t like the idea that a machine could be smarter than them, so they discount the idea. And that’s fundamentally flawed.”

The billionaire inventor pointed to Google’s AlphaGo, an AI program that plays the ancient Chinese board game Go, as evidence of machines’ exponential learning capacity. Go is reputedly the world’s most demanding strategy game, yet in 2017 AlphaGo clinched a decisive victory over the top Go player in the world.

While current semi-autonomous systems keep humans at least marginally in the loop, the advent of fully autonomous systems that operate without any human input raises serious ethical questions about using killer robots to slaughter human combatants on the battlefield.

Although Esper’s stated preference for keeping soldiers in the loop is noble, the larger U.S. war machine will undoubtedly be tempted to eliminate the human component altogether in the name of making killing on the battlefield even more “efficient.”

We are clearly on an extremely slippery slope when it comes to killer robots and AI. Intellectual giants like Elon Musk and Stephen Hawking have repeatedly sounded the civilizational alarm about the extreme dangers inherent in AI.

As an article in the Guardian on Monday pointed out, killer robots are only a threat if we are stupid enough to create them. Now, the only question is: Will anyone heed all these warnings?

Jay Syrmopoulos is a geopolitical analyst, freethinker, and ardent opponent of authoritarianism. He is currently a graduate student at the University of Denver pursuing a master’s in Global Affairs and holds a BA in International Relations. Jay’s writing has been featured on both mainstream and independent media – and has been viewed tens of millions of times. You can follow him on Twitter @SirMetropolis and on Facebook at SirMetropolis. This article first appeared at The Free Thought Project.



1 Comment

  1. Who do these robots fight against?
    If it is other robots, and their “battlefield” is in a limited location, there should be no worries at all. However, why must the US or any other nation send robots to do their killing of foreigners whose only crime is not having been born in (or being a citizen of) the US?
    Was Esper actually ever in the army himself?
    Perhaps if the politicians who decide on and fund these programs were to spend time on the front line (not in the rear areas, where they might be safe, but right up in the trenches) themselves, there might be less warfare in the world altogether!
