New Artificial Intelligence Designed to Be Mentally Unstable: What Could Go Wrong?

By Jake Anderson

We tend to think of artificial intelligence entities as flawless intellects, early prototypes of the powerful ‘artilects’ futurists imagine will one day rule our world. We also tend to think of them as immune to unhappy thoughts and feelings. But one company has created a machine-learning system that suffers from the AI equivalent of mental instability, and its creators deliberately designed it that way.

This tortured artist of an AI is called DABUS, short for “Device for the Autonomous Bootstrapping of Unified Sentience.” It was created by computer scientist Stephen Thaler, who used a technique called “generative adversarial networks” to mimic the extreme fluctuations in thought and emotion experienced by humans who suffer from mental illness. His Missouri-based company, Imagination Engines, developed a two-module process: the first module, Imagitron, infuses digital noise into a neural network, prompting DABUS to generate new ideas and content; the second, Perceptron, assesses DABUS’s output and provides feedback. Then they added their secret sauce.
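
To make the two-module idea concrete, here is a minimal sketch of how a noise-perturbed generator paired with a scoring network can be wired up. This is an illustration only, not Thaler’s actual system: the class names Imagitron and Perceptron are borrowed from the article purely as labels, and every layer size and dimension below is an assumption.

```python
# Illustrative sketch (not Thaler's code): a generator whose latent state is
# perturbed with noise to produce novel output, and a critic that scores the
# output and can feed that score back into the loop.
import torch
import torch.nn as nn

class Imagitron(nn.Module):
    """Generator: maps a latent vector to an 'idea', with injectable noise."""
    def __init__(self, latent_dim=16, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                 nn.Linear(64, out_dim))

    def forward(self, z, noise_level=0.0):
        # Perturb the latent state; more noise means more erratic output.
        z_noisy = z + noise_level * torch.randn_like(z)
        return self.net(z_noisy)

class Perceptron(nn.Module):
    """Critic: scores how 'interesting' a generated idea is, in [0, 1]."""
    def __init__(self, in_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, idea):
        return self.net(idea)

gen, critic = Imagitron(), Perceptron()
z = torch.randn(1, 16)
idea = gen(z, noise_level=0.5)   # inject "digital noise" into the generator
score = critic(idea)             # feedback signal on the generated idea
```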

This method of creating an echo chamber between neural networks is not new or unique. What is new is what Thaler and his company are using it for: deliberately tweaking an AI’s cognitive state to make its artistic output more experimental. Their process triggers ‘unhappy’ associations and fluctuations in rhythm. The result is an AI that exhibits something like symptoms of insanity.
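
In the sketch above, that “deliberate tweaking” corresponds to turning the noise dial. Sweeping the noise level shows how output variability grows with injected noise; mapping the extremes onto “depression” and “mania” is the article’s metaphor, not a technical claim. This snippet continues from the previous sketch:

```python
# Sweep the injected noise level and measure how much the generator's output
# varies across repeated samples: near-zero noise yields repetitive output
# (the article's "reduced cognitive flow"), heavy noise yields erratic
# output (its "mania" and "hallucinations").
for noise_level in (0.0, 0.5, 2.0):
    ideas = torch.stack([gen(z, noise_level).squeeze(0) for _ in range(50)])
    spread = ideas.std(dim=0).mean().item()  # variability across 50 samples
    print(f"noise={noise_level:.1f}  output spread={spread:.3f}")
```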

“At one end, we see all the characteristic symptoms of mental illness, hallucinations, attention deficit and mania,” Thaler says, describing DABUS’s faculties and temperament. “At the other, we have reduced cognitive flow and depression.”

Thaler believes that integrating human-like problem-solving, along with human-like flaws such as mental illness, may significantly enhance an AI’s ability to create innovative artwork and other subjective output. Anyone familiar with the psychedelic, surreal canvases produced by Google’s DeepDream algorithm may be struck by the more measured and meditative work of DABUS.

Above: a few of DABUS’s surreal pieces, born of neural networks

Thaler also believes this technique will improve AI’s abilities in stock market prediction and autonomous robot decision-making. But what are the risks of infusing mental illness into a machine mind? Thaler believes there are limits, but that psychological problems could be just as natural to AI as they are to humans.

“The AI systems of the future will have their bouts of mental illness,” Thaler speculates. “Especially if they aspire to create more than what they know.”

Creative Commons / Anti-Media


5 Comments on "New Artificial Intelligence Designed to Be Mentally Unstable: What Could Go Wrong?"

  1. It’s just a guided random graphics engine

  2. AI Archons have already functioned on this planet for many eons; that is why we are in this mess, as well as being genetically impaired by our Progenitors. The Gordian Knot of our predicament is about to be compounded by turning on more of the same. You may as well tape that cell phone to the side of your head, leave it running, and see where it takes you.

  3. This is the easiest way to bypass Asimov’s Three Laws of Robotics: by making a self-learning, auto-perpetuating machine “mentally” unstable, there is no chance that the Three Laws will remain primary, and in fact “no harm to humans” becomes open to interpretation.

  4. “Surreal” is rightly the word, because Salvador Dalí’s original intention was to conquer the “irrational” by synthesising an abnormal and irrational mindset. Not sure if I would trust it with brain surgery, though…

  5. Is this how we discuss something like this now…? There are four comments on this article right now…

    There seems to be an urgent relevance to AI scenarios, and yet, even on Activist Post, there’s an apathy towards expression. Should I run to the nearest TEDx talk on YouTube…?

    My two cents, and without much thought: I honestly wonder how, and if, any AI machine of the foreseeable future could be stopped, turned off, and/or dismantled when and if it were to become, say, overzealous in its application of control over the sleeping and waking hours of us feeble humans. Dominion, I think, is the ultimate question. Where and how do we exist, or rather coexist, without disregarding our natural human impulses of love, friendship, expression… sleeping, eating, and having sex? Will we have to pay, as subordinates, for the time and place to be human, to remain human? And then, subject to what constraints or guidelines?

    Who or what will direct these AI machines…?
