New Facial Recognition Research Reveals More “really stark” Racial Disparities; Professor Recommends Pausing Its Use

By B.N. Frank

Facial recognition technology can be biased, error-prone, and privacy-invasive (see 1, 2, 3, 4, 5).  Opposition to its use continues to grow worldwide and now includes an assistant professor in Maryland.

From GovTech:


University of Maryland Prof Urges Pause on Facial Recognition

Lauren Rhue, an assistant professor of information systems at the University of Maryland, says human intervention is necessary to mitigate bias in technologies such as Amazon Rekognition, Face++ and Microsoft's facial recognition services.

Maya Lora, Baltimore Sun

(TNS) — Lauren Rhue researches the fast-paced world of artificial intelligence and machine-learning technology. But she wants everyone in it to slow down.

Rhue, an assistant professor of information systems at the University of Maryland Robert H. Smith School of Business, recently audited emotion-recognition technology within three facial recognition services: Amazon Rekognition, Face++ and Microsoft. Her research revealed what Rhue called “really stark” racial disparities.

Amazon Rekognition is offered as a service to other companies. Face++ is used for identity verification. Microsoft plans to retire its facial recognition technology this year, including the emotion-recognition tools.

Rhue collected photos of Black and white NBA players from the 2016 season, controlling for the degree to which they were smiling. She then ran those photos through the facial recognition software.

In general, the models assigned more negative emotions to Black players, Rhue found. Additionally, when players had ambiguous facial expressions, the models were more likely to read Black players as having a negative expression, while white players were more likely to be “given the benefit of the doubt.”
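For readers curious what such an audit involves mechanically, here is a minimal sketch of requesting emotion scores from Amazon Rekognition, one of the three services Rhue examined. It assumes the boto3 Python SDK and configured AWS credentials; the helper function, file name, and comparison step are illustrative, since the article does not describe Rhue's actual code or dataset.

```python
# Minimal, hypothetical sketch (not Rhue's actual pipeline) of querying
# Amazon Rekognition for per-face emotion scores via the boto3 SDK.
import boto3

rekognition = boto3.client("rekognition")  # assumes AWS credentials are configured


def emotion_scores(image_path):
    """Return Rekognition's emotion confidence scores for the first detected face."""
    with open(image_path, "rb") as f:
        response = rekognition.detect_faces(
            Image={"Bytes": f.read()},
            Attributes=["ALL"],  # "ALL" requests emotion estimates, not just bounding boxes
        )
    faces = response["FaceDetails"]
    if not faces:
        return {}
    # Each emotion entry looks like {"Type": "HAPPY", "Confidence": 93.2}.
    return {e["Type"]: e["Confidence"] for e in faces[0]["Emotions"]}


# An audit in the spirit of Rhue's would gather these scores for matched
# photos and compare negative-emotion confidences (e.g., ANGRY, SAD)
# across groups of players:
# scores = emotion_scores("player_photo.jpg")  # hypothetical file name
```

A study like the one described would repeat this call over the full photo set, then compare the distributions of negative-emotion scores assigned to Black and white players whose expressions were matched in advance.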

“I think that we should all take a step back, and think, do we need to analyze faces in this way?” Rhue said.

Rhue, 39, is not the first to explore racial disparity in AI systems. For example, MIT grad student Joy Buolamwini has given TED Talks on her experience with facial analysis software that couldn't detect her face because the underlying algorithms hadn't been trained on a broad enough range of skin tones and facial structures.

Read full article


Activist Post reports regularly about facial recognition and other privacy-invasive and unsafe technology.  For more information, visit our archives.

Image: Pixabay
