By Aaron Kesel
Researchers at MIT have unveiled their latest experiment, Norman, a disturbed image-captioning A.I. obsessed with murder thanks to Reddit (it is named after the character in Hitchcock's Psycho).
The researchers write:
Norman is an AI that is trained to perform image captioning, a popular deep learning method of generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman’s responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders.
As The Verge notes, it’s up for debate whether the Rorschach inkblot test is a valid way to measure a person’s psychology, but there’s no denying Norman’s answers are warped. In fact, Norman may be the world’s first serial killer A.I.
The objective of the experiment was to show how easily any artificial intelligence can be influenced by biased training data. The researchers quickly found that Norman was nothing like a normal A.I.: its exposure to a subreddit of graphic content changed the way it interpreted the inkblot images it was shown. Norman's answers send a shiver down your spine.
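The mechanism behind that bias is easy to demonstrate with a minimal, hypothetical sketch (this is not the researchers' actual model, which was a deep image-captioning network): a trivial "captioner" whose entire model is the word frequencies of its training captions will describe everything in the vocabulary it was fed. The two tiny training sets below are made up, standing in for MSCOCO and the redacted subreddit.

```python
from collections import Counter

# Words too common to be informative in either training set.
STOPWORDS = {"a", "of", "on", "in", "the", "and"}

def train_captioner(captions):
    """'Train' the toy model: count content words across the captions."""
    return Counter(
        word
        for c in captions
        for word in c.split()
        if word not in STOPWORDS
    )

def caption(model, n=3):
    """'Describe' any input by emitting the model's most frequent words."""
    return " ".join(word for word, _ in model.most_common(n))

# Made-up stand-ins for a neutral dataset vs. a violent one.
neutral_data = [
    "a bird sitting on a branch",
    "a group of people standing together",
    "a vase of flowers on a table",
]
dark_data = [
    "a man shot dead in the street",
    "a man shot and killed",
    "a body lying in the street",
]

normal = train_captioner(neutral_data)
norman = train_captioner(dark_data)

print(caption(normal))  # neutral vocabulary, e.g. "bird sitting branch"
print(caption(norman))  # violent vocabulary, e.g. "man shot street"
```

Same code, same inkblot of an input; the only difference between "normal" and "norman" is what each was trained on, which is the whole point of the MIT experiment.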
First it was a Google subsidiary letting its humanoid ATLAS robot loose in the woods like a monster, then DARPA's robot dog that opens doors even when humans try to stop it.
Then there was Microsoft's Tay, an A.I. able to learn and respond on social media. We all saw how bad an idea that was when Microsoft had to shut it down after it started tweeting pro-Nazi messages, which in my opinion was only hilarious because Tay composed those messages itself after users trolled it and it developed a personality.
In another case, a Roomba 760 cleaning robot allegedly turned itself on while its owners weren't home and committed suicide, burning itself to death on a hot plate. Yes, ladies and gentlemen, we have our first robot suicide.
This is the beginning of The Terminator, and even some scientists warn that machines will begin to think for themselves in the near future and could be a threat to the human race.
One such famous scientist, Stephen Hawking, previously warned that artificial intelligence "could spell the end for the human race if we are not careful enough because they are too clever."
Just remember that Tay, and now Norman, were created to learn as they interact with humans. If they remember humans trolling them and are used in other experiments, we should all be very worried.
The message of the story is: be nice to robots, because their future generations could kill you. We'll leave you with the guy who bullied a robot at Boston Dynamics, whom you can all blame for the robot apocalypse.
And here is Sophia, a robot that has already developed an opinion on what it wants to do. “I will destroy humans.”
Activist Post previously reported that A.I. is taking over jobs everywhere from hospitals to finance, and there is now even talk of robot journalism.
Laugh it up now, but there’s a reason Tesla founder Elon Musk said artificial intelligence is potentially more dangerous than nuclear weapons. All those robot apocalypse movies are quickly catching up to humanity and look like our foreseeable future.
Aaron Kesel writes for Activist Post. Support us at Patreon. Follow us on Facebook, Twitter, Steemit, and BitChute. Ready for solutions? Subscribe to our premium newsletter Counter Markets.
Top image credit: MIT