A.I. Fueled Online Game Started Generating Disturbing Stories — Including Ones Featuring Kids

By B.N. Frank

Artificial Intelligence (AI) technology has been criticized for replacing human jobs (see 1, 2) and for being used in applications that some consider unethical and harmful (see 1, 2, 3, 4). Add another awful application to the list.

From Wired:
It Began as an AI-Fueled Dungeon Game. It Got Much Darker

The game touted its use of the GPT-3 text generator. Then the algorithm started to generate disturbing stories, including sex scenes involving children.

In December 2019, Utah startup Latitude launched a pioneering online game called AI Dungeon that demonstrated a new form of human-machine collaboration. The company used text-generation technology from artificial intelligence company OpenAI to create a choose-your-own adventure game inspired by Dungeons & Dragons. When a player typed out the action or dialog they wanted their character to perform, algorithms would craft the next phase of their personalized, unpredictable adventure.

Last summer, OpenAI gave Latitude early access to a more powerful, commercial version of its technology. In marketing materials, OpenAI touted AI Dungeon as an example of the commercial and creative potential of writing algorithms.

Then, last month, OpenAI says, it discovered AI Dungeon also showed a dark side to human-AI collaboration. A new monitoring system revealed that some players were typing words that caused the game to generate stories depicting sexual encounters involving children. OpenAI asked Latitude to take immediate action. “Content moderation decisions are difficult in some cases, but not this one,” OpenAI CEO Sam Altman said in a statement. “This is not the future for AI that any of us want.”

Latitude turned on a new moderation system last week—and triggered a revolt among its users. Some complained it was oversensitive and that they could not refer to an “8-year-old laptop” without triggering a warning message. Others said the company’s plans to manually review flagged content would needlessly snoop on private, fictional creations that were sexually explicit but involved only adults—a popular use case for AI Dungeon.

In short, Latitude’s attempt at combining people and algorithms to police content produced by people and algorithms turned into a mess. Irate memes and claims of canceled subscriptions flew thick and fast on Twitter and AI Dungeon’s official Reddit and Discord communities.

Read full article

As if we needed another reason to limit kids’ use of screens (see 1, 2, 3, 4, 5, 6)…

Activist Post reports regularly about unsafe technology. For more information, visit our archives.


