AI Can “lie and BS” Just Like Humans, But Can’t Match Our Intelligence

By John Anderer

As artificial intelligence continues to dominate the headlines, more and more people find themselves wondering whether an AI-powered robot will one day take their job. However, Anthony Chemero, a professor of philosophy and psychology at the University of Cincinnati, contends that popular conceptions of how intelligent today's AI really is have largely been muddled by linguistics. Put another way, Prof. Chemero explains that while AI is indeed intelligent, it simply cannot be intelligent in the way that humans are, even though "it can lie and BS like its maker."

To start, the report details how ChatGPT and other AI systems are large language models (LLMs) that are "trained" on massive amounts of data mined from the internet. Importantly, much of that information shares the biases of the people who posted it in the first place.

“LLMs generate impressive text, but often make things up whole cloth,” Chemero states in a university release. “They learn to produce grammatical sentences, but require much, much more training than humans get. They don’t actually know what the things they say mean,” he says. “LLMs differ from human cognition because they are not embodied.”

The creators of LLMs call it “hallucinating” when the programs make things up, but Prof. Chemero claims “it would be better to call it ‘bullsh*tting.’” Why? LLMs work by constructing sentences through the repeated addition of the most statistically likely next word. These programs don’t know or care if what they are producing is actually true.
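For readers who want to see that idea in miniature, here is a toy, hypothetical Python sketch of "repeatedly add the statistically most likely next word." It uses simple word-pair counts over a made-up sentence rather than a neural network, and none of the names or data come from ChatGPT or any real model; it only illustrates why such a system has no notion of whether its output is true.

from collections import Counter, defaultdict

# Toy training text; real models train on vast amounts of internet data.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count which word tends to follow which (simple bigram statistics).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start, length=6):
    # Greedily append the most frequent next word at every step.
    words = [start]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break  # no statistics on what follows this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # with this toy corpus, prints "the cat sat on the cat sat"

The output is grammatical-looking but meaningless to the program itself, which is the point Chemero is making on a much larger scale.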

Even worse, with a little prodding, he adds, anyone can get an AI tool to say “nasty things that are racist, sexist and otherwise biased.”

Prof. Chemero stresses that LLMs are not intelligent in the way humans are intelligent because humans are embodied, meaning we’re living beings who are always surrounded by other living beings, as well as material and cultural environments.

“This makes us care about our own survival and the world we live in,” he notes, commenting that LLMs aren’t really in the world and don’t actually care about, well, anything.

Ultimately, the main takeaway here is that LLMs are not intelligent in the way that humans are because they “don’t give a damn,” Prof. Chemero concludes, adding, “things matter to us. We are committed to our survival. We care about the world we live in.”

The study is published in Nature Human Behaviour.

Source: Study Finds

