European Data Protection Supervisor and Board Demand Total Ban on Automatic Recognition via A.I. Technology

By B.N. Frank

It’s overwhelming to think about how much we are all being watched and tracked.  This happens often without our knowledge or consent.  A few years ago, the term “Surveillance Capitalism” was introduced.  It’s the perfect name for a perfectly creepy practice.

Despite opposition, “Surveillance Capitalism” continues to grow as more businesses and other entities insist on tracking us for targeted advertising and other purposes (see 1, 2, 3, 4, 5, 6, 7). Artificial Intelligence (A.I.) and other technologies are being used for this.

Complaints about A.I. inaccuracies and misuse are reported frequently (see 1, 2, 3, 4) – there’s even a “Hall of Shame”.  In Europe, data protection experts and agencies are endorsing bold steps to better protect citizens.

From Wired:


Europe Makes the Case to Ban Biometric Surveillance

Companies are racing to track everything about you. It could be a convenient way to reduce fraud—or seriously creepy and discriminatory.

Your body is a data goldmine. From the way you look to how you think and feel, firms working in the burgeoning biometrics industry are developing new and alarming ways to track everything we do. And, in many cases, you may not even know you’re being tracked.


This story originally appeared on WIRED UK.

But the biometrics business is on a collision course with Europe’s leading data protection experts. Both the European Data Protection Supervisor, which acts as the EU’s independent data body, and the European Data Protection Board, which helps countries implement GDPR consistently, have called for a total ban on using AI to automatically recognize people.

“Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places,” the heads of the two bodies, Andrea Jelinek and Wojciech Wiewiórowski, wrote in a joint statement at the end of June. AI shouldn’t be used in public spaces for facial recognition, gait recognition, fingerprints, DNA, voice, keystrokes, and other types of biometrics, they said. There should also be a ban on trying to predict people’s ethnicity, gender, and political or sexual orientation with AI.

But such calls fly in the face of the EU’s proposed regulations for AI. The rules, which were unveiled in April, say “remote biometric identification” is high-risk—meaning it is allowed but faces stricter controls than other uses of AI. Politicians across the EU will spend years debating the AI rules, and biometric surveillance has already become one of the most contentious issues. When passed, the regulations will define how hundreds of millions of people are surveilled for decades to come. And the debate starts now.

Facial recognition has been controversial for years, but the real biometrics boom is taking aim at other parts of your body. Across the EU’s 27 member states, a number of companies have been developing and deploying biometric technologies that, in some cases, aim to predict people’s gender and ethnicity and recognize their emotions. In many cases the technology is already being used in the real world. However, using AI to make these classifications can be scientifically and ethically dubious. Such technologies risk invading people’s privacy or automatically discriminating against people.

Take Herta Security and VisionLabs, for example. Both firms develop facial-recognition technology for a variety of uses and say it could be deployed by law enforcement and by the retail and transport industries. Documents from Herta Security, which is based in Barcelona, claim that its clients include police forces in Germany, Spain, Uruguay, and Colombia, as well as airports, casinos, sports stadiums, shopping centers, and hotel chains such as Marriott and Holiday Inn.

Critics point out that both Herta Security and VisionLabs claim parts of their systems can be used to track sensitive attributes. “A lot of the systems, even the ones that are being used to identify people, are relying on these potentially very harmful classifications and categorizations as the underlying logic,” says Ella Jakubowska, a policy adviser looking at biometrics at advocacy group European Digital Rights. The group is campaigning for a ban on biometric surveillance across Europe.

BioMarketing, Herta Security’s face-analysis tool, is billed as a way for shops and advertisers to learn about their customers. It can “extract” everything from a person’s age and gender to whether they wear glasses, and can even track their facial expressions. Herta Security says the technology is “ideal” for developing targeted advertising or helping companies understand who their customers are. The tool, Herta Security claims, can also classify people by “ethnicity.”

Under GDPR, personal data that reveals “racial or ethnic origin” is considered sensitive, with strict controls on how it can be used. Jakubowska says she challenged Herta Security’s CEO on the use of ethnicity last year and that the company has since removed the claim from its marketing material, though it remains unclear whether the feature has been removed from the tool itself. Company documents hosted by third parties still list ethnicity as one of the characteristics that can be detected using BioMarketing, and company documents from 2013 referred to the tool detecting “race” before the wording was updated to “ethnicity.” Herta Security, which has received more than 500,000 euros in EU funding and holds an EU seal of excellence, did not respond to requests for comment.

Read the full article at WIRED.


Activist Post reports regularly about unsafe technology.  For more information, visit our archives.

