Cybersecurity Expert Who Warned about 5G and IoT Now Warns about A.I.’s Potential for “mass spying”

By B.N. Frank

Over the years, countless experts have warned about privacy and cybersecurity risks with all “smart”, wireless, 5G, and/or Internet of Things (IoT) devices and technologies. They warned about privacy-invasive and problematic utility “smart” meters – electric, gas, and water – as well. In fact, approximately 6 months ago, the Biden Administration proposed that companies voluntarily put cybersecurity labels on IoT devices. Additionally, it promised to eventually address cybersecurity issues with increasingly unpopular “smart” meters. Unfortunately, now that everyone’s freaking out about Artificial Intelligence (A.I.) – and rightfully so – warnings about 5G, IoT, wireless, and/or “smart” technologies may take a backseat to warnings about A.I. Of course, it’s good that some familiar voices are still around trying to wake people up.

From Ars Technica:


Due to AI, “We are about to enter the era of mass spying,” says Bruce Schneier

Schneier: AI will enable a shift from observing actions to interpreting intentions, en masse.

Benj Edwards

In an editorial for Slate published Monday, renowned security researcher Bruce Schneier warned that AI models may enable a new era of mass spying, allowing companies and governments to automate the process of analyzing and summarizing large volumes of conversation data, fundamentally lowering barriers to spying activities that currently require human labor.


In the piece, Schneier notes that the existing landscape of electronic surveillance has already transformed the modern era, becoming the business model of the Internet, where our digital footprints are constantly tracked and analyzed for commercial reasons. Spying, by contrast, can take that kind of economically inspired monitoring to a completely new level:

“Spying and surveillance are different but related things,” Schneier writes. “If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did.”


Schneier says that current spying methods, like phone tapping or physical surveillance, are labor-intensive, but the advent of AI significantly reduces this constraint. Generative AI systems are increasingly adept at summarizing lengthy conversations and sifting through massive datasets to organize and extract relevant information. This capability, he argues, will not only make spying more accessible but also more comprehensive.
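
To make the pipeline Schneier describes concrete, here is a toy extractive summarizer in Python (standard library only). Real systems use large language models rather than word counts, but the overall shape – split a transcript into sentences, score them, keep the most relevant – is similar. Everything here is an illustrative sketch, not any vendor's actual system:

```python
import re
from collections import Counter

def summarize(text: str, n_sentences: int = 2) -> str:
    """Toy extractive summary: keep the sentences whose words
    appear most frequently across the whole text."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)

    def score(sentence: str) -> float:
        # Average frequency of the sentence's words across the text.
        ws = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[w] for w in ws) / max(len(ws), 1)

    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # Emit the selected sentences in their original order.
    return " ".join(s for s in sentences if s in top)
```

The point of the sketch is Schneier's: once summarization is automated, condensing a million intercepted conversations costs little more than condensing one.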

“This spying is not limited to conversations on our phones or computers,” Schneier writes. “Just as cameras everywhere fueled mass surveillance, microphones everywhere will fuel mass spying. Siri and Alexa and “Hey Google” are already always listening; the conversations just aren’t being saved yet.”

From action to intent

We’ve recently seen a movement from companies like Google and Microsoft to feed what users create through AI models for the purposes of assistance and analysis. Microsoft is also building AI copilots into Windows, which require remote cloud processing to work. That means private user data goes to a remote server where it is analyzed outside of user control. Even if run locally, sufficiently advanced AI models will likely “understand” the contents of your device, including image content.

Microsoft recently said, “Soon there will be a Copilot for everyone and for everything you do.”


Despite assurances of privacy from these companies, it’s not hard to imagine a future where AI agents probing our sensitive files in the name of assistance start phoning home to help customize the advertising experience. Eventually, government and law enforcement pressure in some regions could compromise user privacy on a massive scale. Journalists and human rights workers could become initial targets of this new form of automated surveillance.

“Governments around the world already use mass surveillance; they will engage in mass spying as well,” writes Schneier. Along the way, AI tools can be replicated on a large scale and are continuously improving, so deficiencies in the technology now may soon be overcome.

What’s especially pernicious about AI-powered spying is that deep-learning systems introduce the ability to analyze the intent and context of interactions through techniques like sentiment analysis. This signifies a shift from observing actions, as traditional digital surveillance does, to interpreting thoughts and discussions, potentially impacting everything from personal privacy to corporate and governmental strategies in information gathering and social control.
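
As a rough illustration of the kind of technique meant by “sentiment analysis,” here is a minimal lexicon-based scorer in Python. The word lists and scoring scheme are illustrative assumptions only – production systems use trained neural models – but even this crude version shows how text can be mapped to a machine-readable signal about attitude:

```python
# Illustrative sentiment lexicons (assumed for this sketch, not a real dataset).
POSITIVE = {"good", "great", "love", "happy", "agree"}
NEGATIVE = {"bad", "hate", "angry", "protest", "oppose"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: > 0 leans positive, < 0 leans negative,
    0 when no lexicon words are present."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("I love this plan and agree completely"))    # 1.0
print(sentiment_score("We oppose the proposal and will protest"))  # -1.0
```

Run over millions of messages, even a signal this coarse lets an operator rank speakers by apparent attitude toward a policy – which is exactly the move from recording actions to inferring intent.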


In his editorial, Schneier raises concerns about the chilling effect that mass spying could have on society, cautioning that the knowledge of being under constant surveillance may lead individuals to alter their behavior, engage in self-censorship, and conform to perceived norms, ultimately stifling free expression and personal privacy.

So what can people do about it? Anyone seeking protection from this type of mass spying will likely need to look toward government regulation to keep it in check, since commercial pressures often trump technological safety and ethics. Biden’s Blueprint for an AI Bill of Rights mentions AI-powered surveillance as a concern. The EU’s draft AI Act may also address this issue obliquely, though apparently not directly. Neither is currently in legal effect.

Schneier isn’t optimistic on that front, however, closing with the line, “We could prohibit mass spying. We could pass strong data-privacy rules. But we haven’t done anything to limit mass surveillance. Why would spying be any different?” It’s a thought-provoking piece, and you can read the entire thing on Slate.

Benj Edwards is an AI and Machine Learning Reporter for Ars Technica. For over 16 years, he has written about technology and tech history for sites such as The Atlantic, Fast Company, PCMag, PCWorld, Macworld, How-To Geek, and Wired. In 2005, he created Vintage Computing and Gaming, a blog that pioneered tech history coverage online. He also hosted The Culture of Tech podcast and contributes to the Retronauts podcast. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC. Mastodon: benjedwards@mastodon.social Twitter: @benjedwards


Activist Post reports regularly about A.I. and other privacy-invasive and unsafe technologies.  For more information, visit our archives.

Image: Pixabay


