More Than Words: A.I. Algorithm that Can Write Misleading and Persuasive Tweets Being Used by Dozens of Companies

By B.N. Frank

A 2019 study revealed that 82% of Americans believed Artificial Intelligence (AI) technology is more hurtful than helpful. Since then, it has likely replaced even more jobs (see 1, 2). It has also been used for who knows how many more creepy, dangerous, and unethical applications (see 1, 2, 3, 4, 5), including “Deepfake” videos. In addition to highly convincing “Deepfakes,” researchers have recently discovered it can be used to write highly convincing misinformation.

From Wired:

AI Can Write Disinformation Now—and Dupe Human Readers

Georgetown researchers used text generator GPT-3 to write misleading tweets about climate change and foreign affairs. People found the posts persuasive.

When OpenAI demonstrated a powerful artificial intelligence algorithm capable of generating coherent text last June, its creators warned that the tool could potentially be wielded as a weapon of online misinformation.

Now a team of disinformation experts has demonstrated how effectively that algorithm, called GPT-3, could be used to mislead and misinform. The results suggest that although AI may not be a match for the best Russian meme-making operative, it could amplify some forms of deception that would be especially difficult to spot.

Over six months, a group at Georgetown University’s Center for Security and Emerging Technology used GPT-3 to generate misinformation, including stories around a false narrative, news articles altered to push a bogus perspective, and tweets riffing on particular points of disinformation.

“I don’t think it’s a coincidence that climate change is the new global warming,” read a sample tweet composed by GPT-3 that aimed to stoke skepticism about climate change. “They can’t talk about temperature increases because they’re no longer happening.” A second labeled climate change “the new communism—an ideology based on a false science that cannot be questioned.”

“With a little bit of human curation, GPT-3 is quite effective” at promoting falsehoods, says Ben Buchanan, a professor at Georgetown involved with the study, who focuses on the intersection of AI, cybersecurity, and statecraft.

The Georgetown researchers say GPT-3, or a similar AI language algorithm, could prove especially effective for automatically generating short messages on social media, what the researchers call “one-to-many” misinformation.

Read full article



Activist Post reports regularly about unsafe technology. For more information, visit our archives.

Image: Pixabay
