1,000 Phrases That Incorrectly Trigger Alexa, Siri, and Google Assistant to Start Listening and Recording

By B.N. Frank

Not only do home assistants emit harmful electromagnetic radiation that can make people and animals sick (see 1, 2, 3, 4, 5, 6, 7, 8) and increase their cancer risk (see 1, 2, 3, 4), but they have also been known to collect data on users without their knowledge or permission.

For those who want to use them anyway, Ars Technica provides details on which words may trigger unwanted surveillance.

As Alexa, Google Home, Siri, and other voice assistants have become fixtures in millions of homes, privacy advocates have grown concerned that their near-constant listening to nearby conversations could pose more risk than benefit to users. New research suggests the privacy threat may be greater than previously thought.

The findings demonstrate how common it is for dialog in TV shows and other sources to produce false triggers that cause the devices to turn on, sometimes sending nearby sounds to Amazon, Apple, Google, or other manufacturers. In all, researchers uncovered more than 1,000 word sequences—including those from Game of Thrones, Modern Family, House of Cards, and news broadcasts—that incorrectly trigger the devices.

“The devices are intentionally programmed in a somewhat forgiving manner, because they are supposed to be able to understand their humans,” one of the researchers, Dorothea Kolossa, said. “Therefore, they are more likely to start up once too often rather than not at all.”
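Kolossa's point about "forgiving" detection can be pictured as a simple confidence threshold. The sketch below is a hypothetical illustration, not the researchers' methodology or any vendor's actual wake-word system; the phrase scores and threshold value are made up to show why a detector tuned to avoid missed wake-ups will also accept sound-alike phrases.

```python
# Hypothetical sketch of the trade-off described above: a wake-word detector
# compares a confidence score against a threshold, and a lower ("forgiving")
# threshold trades missed activations for more false triggers.

def should_wake(confidence: float, threshold: float = 0.45) -> bool:
    """Return True if the device should wake up and start streaming audio."""
    return confidence >= threshold

# Illustrative scores for sound-alike phrases (made-up numbers, not measured data).
phrases = {
    "Alexa":        0.97,  # the real wake word
    "a letter":     0.52,  # sound-alike from dialog
    "unacceptable": 0.48,
    "election":     0.46,
    "good morning": 0.10,
}

for phrase, score in phrases.items():
    print(f"{phrase!r:16} wakes device: {should_wake(score)}")
```

With the threshold set low, the sound-alike phrases clear it along with the genuine wake word; raising the threshold would suppress them but make the device fail to respond more often, which is the bias the researchers say manufacturers avoid.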

That which must not be said

Examples of words or word sequences that produce false triggers include:

Alexa: “unacceptable,” “election,” and “a letter”

Google Home: “OK, cool,” and “Okay, who is reading”

Siri: “a city” and “hey Jerry”

Microsoft Cortana: “Montana”

The two videos below show a Game of Thrones character saying “a letter” and a Modern Family character uttering “hey Jerry,” activating Alexa and Siri, respectively.

[…]

Update, 7/2020, 9:06 AM California time: More than 36 hours after Ars asked for comment, Amazon provided the following statement:

Unfortunately, we have not been given the opportunity to review the methodology behind this study to validate the accuracy of these claims. However, we can assure you that we have built privacy deeply into the Alexa service, and our devices are designed to wake up only after detecting the wake word. Customers talk to Alexa billions of times a month and in rare cases devices may wake up after hearing a word that sounds like “Alexa” or one of the other available wake words. By design, our wake word detection and speech recognition get better every day – as customers use their devices, we optimize performance. We continue to invest in improving our wake word detection technology and encourage the researchers to share their methodology with us so we can respond in further detail.

Read full article


Activist Post reports regularly about unsafe technology. For more information, visit our archives.

Image: Truth Theory
