Facial Recognition Tech Tested By UK Police Was Wrong 96% Of The Time According To Big Brother Watch

By Aaron Kesel

Facial recognition is highly flawed. Activist Post has consistently reported on studies finding that the technology’s accuracy isn’t all it’s marketed to be. Now a watchdog monitoring UK Metropolitan Police trials says the technology flagged members of the public as potential criminals in as many as 96 percent of its matches, according to a Big Brother Watch press release; those misidentified included a 14-year-old black child in a school uniform who was stopped and fingerprinted by police.

In eight trials in London between 2016 and 2018, the technology produced “false positives,” wrongly flagging people as crime suspects as they passed through areas covered by facial recognition cameras.
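To make the statistic concrete, the rate Big Brother Watch describes is the share of the system’s matches that turned out to be wrong. A minimal sketch, using hypothetical counts (not Big Brother Watch’s actual figures):

```python
def false_positive_rate(flagged_matches: int, confirmed_matches: int) -> float:
    """Share of flagged matches that were misidentifications."""
    false_positives = flagged_matches - confirmed_matches
    return false_positives / flagged_matches

# Hypothetical deployment: 100 people flagged, only 4 confirmed as genuine matches.
rate = false_positive_rate(flagged_matches=100, confirmed_matches=4)
print(f"{rate:.0%} of matches were false positives")  # prints "96% of matches were false positives"
```

Note that this rate is measured against the matches the system made, not against every face scanned, which is why a system can flag relatively few people yet still be wrong 96% of the time.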

Big Brother Watch, the watchdog organization that received the data through a freedom of information request, demanded police drop using the technology. Big Brother Watch further warned of the Orwellian consequences of using it, arguing that it “breaches fundamental human rights protecting privacy and freedom of expression.”

“This is a turning point for civil liberties in the UK. If police push ahead with facial recognition surveillance, members of the public could be tracked across Britain’s colossal CCTV networks,” Director Silkie Carlo said. “For a nation that opposed ID cards and rejected the national DNA database, the notion of live facial recognition turning citizens into walking ID cards is chilling.”

Further, according to Big Brother Watch, police scored a 100% misidentification rate in two separate deployments at the Westfield shopping centre in Stratford, London. It is a horrifying thought that this technology is now being used to harass citizens as they shop.

Of course, we know that facial recognition technology is also being tested in UK supermarkets for the first time, to verify the age of customers buying alcohol and cigarettes at special self-checkout machines, as Activist Post reported.

The company responsible for the supermarket devices, according to the Telegraph, is U.S. firm NCR, which makes self-checkout machines for Asda, Tesco, and other UK supermarkets.

NCR has announced the integration of facial recognition technology from Yoti with its “FastLane” tills within supermarkets.

FastLane tills are currently used by UK retailers Tesco, Sainsbury’s, Marks & Spencer, Boots, and WHSmith. While not all of these retailers will be part of the pilot program, it’s important to note how widespread this could become.

Meanwhile, hundreds of retail stores, and soon thousands, are considering another biometric facial recognition software, FaceFirst, to build a database of shoplifters as an anti-theft measure, Activist Post reported.

FaceFirst is designed to scan faces from as far as 50 to 100 feet away. As customers walk through a store entrance, a video camera captures repeated images of each shopper and selects the clearest one to store.

The software then analyzes that image and compares it to a database of “bad customers” that the retailer has compiled; if there is a match, the software sends an alert to store employees that a “high risk” customer has entered the door.

The future of shopping seems to have biometric scanners written all over it, a worrying prospect for privacy advocates.

Several privacy advocacy groups, attorneys, and, recently, even Microsoft, which markets its own facial recognition system, have raised concerns over the technology, pointing to issues of consent, racial profiling, and the potential for law enforcement to use images gathered through facial recognition cameras as evidence of criminal guilt.

“We don’t want to live in a world where government bureaucrats can enter in your name into a database and get a record of where you’ve been and what your financial, political, sexual, and medical associations and activities are,” Jay Stanley, an attorney with ACLU, told BuzzFeed News about the use of facial recognition cameras in retail stores. “And we don’t want a world in which people are being stopped and hassled by authorities because they bear resemblance to some scary character.”

The technology currently has a lot of problems. Activist Post recently reported how Amazon’s own facial “Rekognition” software erroneously, and hilariously, identified 28 members of Congress as people who had been arrested for crimes, according to the ACLU. Maybe the technology was trying to tell us something? But then it should have labeled more than just African American members of Congress as criminals; either the technology has a racial bias, or this is simply more evidence of how inaccurate it is.

Activist Post previously reported on another UK test of facial recognition technology that resulted in 35 false matches and one erroneous arrest. The technology is demonstrably far from foolproof.

Many have likely laughed at the paranoia this writer has expressed about facial recognition technology; however, vindication came swiftly when Amazon announced it wanted to create a “Crime News Network” to monitor neighborhoods with its Ring doorbell facial recognition cameras. At this point, they are practically recreating George Orwell’s 1984 or reviving East Germany’s Stasi.

Amazon employees who are against the company selling facial recognition technology to the government have protested the company’s decision. Over 20 groups of shareholders have sent several letters to Amazon CEO Jeff Bezos urging him to stop selling the company’s face recognition software to law enforcement.

“We are concerned the technology would be used to unfairly and disproportionately target and surveil people of color, immigrants, and civil society organizations,” the shareholders, which reportedly include Social Equity Group and Northwest Coalition for Responsible Investment, wrote. “We are concerned sales may be expanded to foreign governments, including authoritarian regimes.”

Another letter was just sent in January 2019, organized by Open Mic, a nonprofit organization focused on corporate accountability, and was filed by the Sisters of St. Joseph of Brentwood; both letters warned the technology poses “potential civil and human rights risks.”

Numerous civil rights organizations have also co-signed a letter demanding Amazon stop assisting government surveillance, and several members of Congress have voiced concerns that the company’s facial recognition software could be misused, The Hill reported.

The American Civil Liberties Union (ACLU) obtained hundreds of pages of documents showing Amazon offering the software to law enforcement agencies across the country.

In a 2018 report,  the ACLU called Amazon’s facial recognition project a “threat to civil liberties.”

Amazon essentially shrugged off the employees’ and shareholders’ concerns: the head of the company’s public sector cloud computing business stated that her team is “unwaveringly” committed to the U.S. government.

“We are unwaveringly in support of our law enforcement, defense and intelligence community,” Teresa Carlson, vice president of worldwide public sector at Amazon Web Services, said July 20th at the Aspen Security Forum in Colorado, FedScoop reported.

Amazon has since released an update that, according to the company, fixes the lighting problems that caused inaccuracies in its system.

This also follows a report by the U.S. Government Accountability Office (GAO) that the facial recognition technology the FBI is using for the Next Generation Identification-Interstate Photo System failed privacy and accuracy tests, as Activist Post reported.

In 2018 it was reported that the FBI and other law enforcement agencies were using this same Amazon Rekognition technology to sift through surveillance data.

Defense One reports that AI-enabled cameras designed to detect crime before it occurs are in the works and were on display at ISC West, a recent security technology conference in Las Vegas.

Activist Post has previously reported that the rise of facial recognition technology is inevitable and that, as a result, the death of privacy is sure to come with it.

This writer continues to focus on facial recognition technology, from Amazon helping law enforcement with its Rekognition software and DHS wanting to use it for border control, to the Olympics planning to use the tech for security.

It’s now been reported that facial recognition has evolved: researchers at the University of Bradford have found that “facial recognition technology works even when only half a face is visible,” according to EurekAlert. This upgraded technology hasn’t been tested by police to this writer’s knowledge; let’s hope it never is, for if it is, civil liberties and privacy will cease to exist.

Elsewhere in the world, facial recognition and other biometrics are beginning to emerge everywhere. In Malta, Prime Minister Joseph Muscat recently confirmed plans to add facial recognition to CCTV surveillance cameras in zones around the country.

“The police are doing a good job but there’s a lot of work that still needs to be done to step up enforcement,” Muscat said in an interview on ONE Radio today. “We are looking into safe city concepts to prevent antisocial behaviour, whereby CCTV systems with technology that can identify law-breakers can do away with the need to have police stationed 24/7 in certain areas.”

Meanwhile, China is planning to merge its 170+ million security cameras with artificial intelligence and facial recognition technology to create a mega-surveillance state. This compounds with China’s “social credit system” that ranks citizens based on their behavior, and rewards and punishes depending on those scores.

Consent to be identified by the government whenever and wherever we go is approval to have the government decide whether, when, and where we are allowed to travel. Put bluntly: it is very dangerous.

The scary part is that intelligence agencies would be able to interlink their surveillance dragnet with CCTV cameras and with companies like Facebook that use the technology, tracking someone’s location in real time.

For more on facial recognition technology and what’s to come for our future, see this writer’s previous article “The Rise Of Facial Recognition Technology Is Now Inevitable.”

This writer’s not sure what’s worse: the technology being inaccurate, or the technology evolving past those inaccuracies. In other words, citizens being mindlessly harassed, or the soon-to-be-manifested surveillance state that several private companies are creating, one that won’t just be used by police but also by businesses as an anti-theft measure, as has been reported before. That raises the question of how citizens’ privacy is protected and how these databases are secured from an information technology standpoint, as well as how long these companies may retain data obtained without a warrant from a simple scan of a person’s face as they walk past facial recognition-equipped cameras in a public place like a shopping center.

Image credit: EFF.org

Aaron Kesel writes for Activist Post. Support us at Patreon. Follow us on Minds, Steemit, SoMee, BitChute, Facebook and Twitter. Ready for solutions? Subscribe to our premium newsletter Counter Markets.
