Amazon’s Facial Recognition Technology Can Now Detect Fear in People: Claim

By Jessica Corbett

Privacy advocates are responding with alarm to Amazon’s claim this week that the controversial cloud-based facial recognition system the company markets to law enforcement agencies can now detect “fear” in the people it targets.

“Amazon is going to get someone killed by recklessly marketing this dangerous and invasive surveillance technology to governments,” warned Evan Greer, deputy director of the digital rights group Fight for the Future, in a statement Wednesday.

Amazon Web Services detailed new updates to its system—called Rekognition—in an announcement Monday:

With this release, we have further improved the accuracy of gender identification. In addition, we have improved accuracy for emotion detection (for all 7 emotions: ‘Happy’, ‘Sad’, ‘Angry’, ‘Surprised’, ‘Disgusted’, ‘Calm’, and ‘Confused’) and added a new emotion: ‘Fear’. Lastly, we have improved age range estimation accuracy; you also get narrower age ranges across most age groups.
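For context, these attributes are surfaced to developers through Rekognition's DetectFaces API. Below is a minimal sketch, not Amazon's own example, of how a developer might read the emotion labels (including the new "Fear" label) using the boto3 SDK; the image file name and AWS region here are illustrative assumptions, and valid AWS credentials are assumed to be configured.

```python
# Minimal sketch: reading Rekognition's emotion labels via boto3.
# Assumes configured AWS credentials; "face.jpg" and the region are illustrative.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("face.jpg", "rb") as f:
    image_bytes = f.read()

# Attributes=["ALL"] asks Rekognition to return all facial attributes,
# including the emotion labels, estimated age range, and gender.
response = client.detect_faces(Image={"Bytes": image_bytes}, Attributes=["ALL"])

for face in response["FaceDetails"]:
    # Each detected face carries a list of emotion labels (e.g. HAPPY, SAD, FEAR),
    # each with a confidence score rather than a definitive judgment.
    for emotion in face["Emotions"]:
        print(emotion["Type"], round(emotion["Confidence"], 1))
```

As the critics quoted below note, these confidence scores reflect patterns in facial expressions, not a person's actual internal state.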

Pointing to research on the technology conducted by the ACLU and others, Fight for the Future’s Greer said that “facial recognition already automates and exacerbates police abuse, profiling, and discrimination.”

“Now Amazon is setting us on a path where armed government agents could make split second judgements based on a flawed algorithm’s cold testimony. Innocent people could be detained, deported, or falsely imprisoned because a computer decided they looked afraid when being questioned by authorities,” she warned. “The dystopian surveillance state of our nightmares is being built in plain sight—by a profit-hungry corporation eager to cozy up to governments around the world.”

VICE reported that “despite Amazon’s bold claims, the efficacy of emotion recognition is in dispute. A recent study reviewing over 1,000 academic papers on emotion recognition found that the technique is deeply flawed—there just isn’t a strong enough correlation between facial expressions and actual human emotions, and common methods for training algorithms to spot emotions present a host of other problems.”

Amid mounting concerns over how police and other agencies may use and abuse facial recognition tools, Fight for the Future launched a national #BanFacialRecognition campaign last month. Highlighting that there are currently no nationwide standards for how agencies and officials can use the emerging technology, the group calls on federal lawmakers to ban the government from using it at all.

Fight for the Future reiterated its demand Wednesday in response to Amazon's latest claims. Although there are not yet any federal regulations for the technology, city councils—from San Francisco to Somerville, Massachusetts—have recently taken steps to outlaw government use of such systems.

Activists are especially concerned about the technology in the hands of federal agencies such as U.S. Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP), whose implementation of the Trump administration's immigration policies has spurred condemnation from human rights advocates the world over.


Civil and human rights advocates have strongly urged Amazon—as well as other developers including Google and Microsoft—to refuse to sell facial recognition technology to governments in the United States and around the world, emphasizing concerns about safety, civil liberties, and public trust.

However, documents obtained last year by the Project on Government Oversight revealed that in the summer of 2018, Amazon pitched its Rekognition system to the Department of Homeland Security—which oversees ICE and CBP—over the objections of Amazon employees. More recently, protesters of the Trump administration's immigration agenda have targeted the corporation over Amazon Web Services' cloud contracts with ICE.

In a July report on Amazon’s role in the administration’s immigration policies, Al Jazeera explained that “U.S. authorities manage their immigration caseload with Palantir software that facilitates tracking down would-be deportees. Amazon Web Services hosts these databases, while Palantir provides the computer program to organize the data.”

“Amazon provides the technological backbone for the brutal deportation and detention machine that is already terrorizing immigrant communities,” Audrey Sasson, executive director of Jews For Racial and Economic Justice, told VICE Tuesday. “[A]nd now Amazon is giving ICE tools to use the terror the agency already inflicts to help agents round people up and put them in concentration camps.”

“Just as IBM collaborated with the Nazis, Amazon and Palantir are collaborating with ICE today,” added Sasson. “They’ve chosen which side of history they want to be on.”



By Jessica Corbett | CommonDreams.org

This article was sourced from The Mind Unleashed.
