A.I. Lie Detectors Being Tested To Replace Human Border Guards

By Nicholas West

The future of travel is clearly one where automation will play an increasingly significant role. The biometric rollout is well underway and is set to expand into every aspect of the travel experience.

According to a new report from Defense One, border security questioning might also shift from human agents to robotic artificial intelligence as the security apparatus seeks supposedly more effective and efficient ways to screen travelers for potential threats.

While the original push for these systems began with the U.S., it is Europe that is seeking to finally crack the code for a program that will be foolproof enough to employ widely.

Next time you go to Latvia you might get hassled by a border guard who isn’t even human. The European Union is funding a new pilot project to deploy artificially intelligent border guards at travel checkpoints in three countries to determine whether passengers are telling the truth about their identities and activities.

The system, dubbed iBorderCtrl, works like this: an avatar asks the traveler a series of simple questions like name, dates of travel, etc. The AI software looks for subtle symptoms of stress as the interviewee answers. If enough indicators are present, the system will refer the traveler to a human border guard for secondary screening.
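To make that decision flow concrete, here is a minimal sketch in Python of the threshold-based referral logic described above. It is purely illustrative and is not the actual iBorderCtrl software; the stress scores, threshold, and function names are assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class Answer:
    question: str
    stress_score: float  # 0.0 (calm) to 1.0 (high stress); assumed output of a video-analysis model

def should_refer_to_human(answers: List[Answer],
                          stress_threshold: float = 0.6,
                          min_flagged: int = 3) -> bool:
    # Count answers whose stress indicators exceed the (assumed) threshold,
    # and refer the traveler to a human guard if enough are flagged.
    flagged = sum(1 for a in answers if a.stress_score >= stress_threshold)
    return flagged >= min_flagged

# Example interview with made-up scores
interview = [
    Answer("What is your name?", 0.2),
    Answer("What are your dates of travel?", 0.7),
    Answer("What is the purpose of your visit?", 0.8),
    Answer("Where will you be staying?", 0.65),
]

if should_refer_to_human(interview):
    print("Refer traveler to a human border guard for secondary screening.")
else:
    print("Clear traveler to proceed.")

However the real system scores stress, the basic shape is the same: a fixed questionnaire, a per-answer score, and a cutoff that routes the traveler either to automated clearance or to secondary screening.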

It is important to note that this system will not be employed only at airports: “The European Union will test the system at train, pedestrian, and vehicle border crossings in Greece, Hungary, and Latvia.”

The idea that computer algorithms can decipher emotional intent has been controversial since testing and limited rollouts began in the early 2000s. As recently as 2016, I reported on a program at the University of Iowa called “The Creepy Study,” which sought to decode emotions for political and advertising purposes. Despite a claimed success rate of only 35% at the time, many people still don’t know that facial recognition billboards have existed to varying degrees in multiple countries, including the U.S., dating back to at least 2012, when they were first introduced in Mexico. The technique is called neuropolitics.

A Russian company, NTechLab, later made headlines for its implementation of FindFace, software that was applied to Russia’s social media site VKontakte and its nearly 300 million users. The software claimed a 70% success rate in matching any photo to a social media profile, allowing strangers to identify one another instantaneously. FindFace was an immediate hit, signing up half a million users in its first two months. The same company then enhanced the software and applied it to CCTV surveillance cameras. The ultimate plan was to incorporate emotional identifiers into Moscow’s estimated 150,000 public-space cameras. The company claimed a 94% success rate in detecting markers that indicate stress, anger, or anxiety.

For now, the science seems dubious at best, and we should certainly hope that independent researchers confirm or refute any accuracy claims for these systems before they are introduced into every aspect of our lives. The U.S., unfortunately, has a history of rolling out technology like this without proper testing, as noted by Defense One:

In 2006, DHS experimented with a program called Screening of Passengers by Observation Techniques, or SPOT, which sought to teach border patrol agents to spot suspicious microexpressions.

When a SPOT-trained agent wound up harassing King Downing, a coordinator for the ACLU, Downing sued. The Government Accountability Office later determined that the government had deployed the program before it had sufficient data to determine whether it would work.

Nicholas West writes for Activist Post. Support us at Patreon for as little as $1 per month. Follow us on Minds, Steemit, SoMee, BitChute, Facebook and Twitter. Ready for solutions? Subscribe to our premium newsletter Counter Markets.

Also Read: iBorderCtrl Fail: The EU’s Border Control ‘Lie Detector’ AI Is Hogwash

