Deepfake Voice Attacks are Here to Put Detection to the Real-world Test

By Jim Nash

It’s put up or shut up time for biometric software companies and public researchers claiming they can detect deepfake voices.

Someone sent robocalls in the United States as part of a disinformation campaign, purporting to be President Joe Biden. The voice sounded like Biden telling people not to vote in a primary election, but it could have been generated by AI. No one, not even vendors selling deepfake detection software, can agree on whether it was.

Software maker ID R&D, a unit of Mitek, is stepping into the market. It responded to the previous major voice cloning scandal in the U.S., involving pop star Taylor Swift, with a video showing that its voice biometrics liveness code can distinguish real recordings from digital impersonations.

The electoral fraud attempt poses a different kind of challenge.

A Bloomberg article this week looked at what might have been the first deepfake audio dirty trick played on Biden. But no one knows if it was an actor or AI.

Bloomberg consulted two other detector makers, ElevenLabs and Clarity, and could find no certainty there either.

ElevenLabs’ software rated it unlikely that the misinformation attack was a deepfake. Clarity disagreed, reportedly finding the recording 80 percent likely to be one.


(ElevenLabs, which focuses on creating synthetic voices, became a unicorn this month after raising an $80 million Series B; executives said the company is valued at more than $1 billion, according to Crunchbase.)

As is often the case, some hope springs from research, and in this case, it’s qualified.

A team of students and alumni from the University of California, Berkeley says it has developed a detection method that, in testing, made few to no errors.

Of course, that’s in a lab setting, and the research team cautions that the method must be understood in the “proper context.”

The team fed raw audio to a deep-learning model, which extracts multi-dimensional representations from it. The model uses these so-called embeddings to distinguish real speech from fake.
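The general approach can be sketched in miniature. The toy code below is not the Berkeley team's actual model: the `embed()` function here uses hand-crafted spectral statistics as a stand-in for a learned deep network, and the outlier-scoring rule in `is_fake()` is an assumed placeholder for a trained classifier. It only illustrates the pipeline shape described above: raw audio in, an embedding vector out, and a real-vs-fake decision made in embedding space.

```python
import numpy as np

def embed(audio: np.ndarray, frame: int = 256) -> np.ndarray:
    """Toy embedding: summary statistics of per-frame energy and
    spectral centroid. (A real detector would use a learned network.)"""
    n = len(audio) // frame * frame
    frames = audio[:n].reshape(-1, frame)
    mags = np.abs(np.fft.rfft(frames, axis=1))
    energy = mags.sum(axis=1) + 1e-9
    freqs = np.arange(mags.shape[1])
    centroid = (mags * freqs).sum(axis=1) / energy
    return np.array([energy.mean(), energy.std(),
                     centroid.mean(), centroid.std()])

def is_fake(audio: np.ndarray, real_refs: list, threshold: float = 2.0) -> bool:
    """Flag audio whose embedding sits far (in z-score terms) from the
    cluster of embeddings computed from known-real reference clips."""
    refs = np.stack([embed(r) for r in real_refs])
    mu, sigma = refs.mean(axis=0), refs.std(axis=0) + 1e-9
    z = np.abs((embed(audio) - mu) / sigma)
    return bool(z.max() > threshold)
```

The design point is that the decision is made on the embedding, not the waveform: any clip whose compact representation falls outside the distribution of genuine speech is flagged, which is what lets such systems generalize beyond the specific cloning tools seen in training.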

Source: Biometric Update

Jim Nash is a business journalist. His byline has appeared in The New York Times, Investors Business Daily, Robotics Business Review and other publications. You can find Jim on LinkedIn.


