MIT built a psychopathic AI

Ta-dah! And that's how you create an artificial monster!

The researchers who built this AI aptly decided to call it "Norman", after the character from Alfred Hitchcock's Psycho; some of you might know him from A&E Network's Bates Motel. In this case, though, Norman's entire training is based on data from a subreddit notorious for images of death. Now it has become a "psychopath". How?

The MIT researchers tested Norman with Rorschach inkblots and shared its captions side by side with those from a standard AI that hadn't fed on the darkest corners of Reddit. The results are spine-chilling.

Essentially, Norman's responses are deeply disturbed when compared with those of a general AI.

Where a "normal" AI algorithm interpreted the ink spots as birds perched on a tree branch, Norman saw a man being electrocuted, according to The New York Post. The standard algorithm also saw flowers and wedding cakes in the inkblots; Norman's choices are alarming, to say the least.

This isn't the first time MIT is playing with nightmare fuel.

MIT has also explored data as an empathy tool. Norman, however, is something else entirely, and far more unsettling.

As it turns out, it is possible to create an AI that is obsessed with murder.

MIT used deep learning algorithms to train the program to caption images.

Scientists Pinar Yanardag, Manuel Cebrian and Iyad Rahwan conducted the experiment not to fulfil some maniacal plan to doom humankind, but to show that machine-learning algorithms can be heavily skewed by biased training data.
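That point can be illustrated with a toy sketch (this is not MIT's actual model, and the tiny "corpora" below are invented for demonstration): the same training procedure, fed two different caption datasets, ends up producing very different outputs.

```python
from collections import Counter

def train(captions):
    """'Train' on a caption corpus by counting words; the 'model'
    simply outputs the most frequent word it saw during training."""
    counts = Counter(word for caption in captions for word in caption.split())
    return counts.most_common(1)[0][0]

# Hypothetical neutral vs. biased training data.
neutral_corpus = ["birds perched branch", "flowers vase", "wedding cake flowers"]
biased_corpus = ["man shot gun", "man electrocuted", "man pulled machine"]

print(train(neutral_corpus))  # → flowers
print(train(biased_corpus))   # → man
```

The "algorithm" is identical in both runs; only the data differs, yet the neutral corpus yields a benign output while the biased one fixates on "man" in violent contexts, which is the essence of the researchers' argument.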

Researchers at the Massachusetts Institute of Technology (MIT) taught Norman its dark tendencies by exposing it exclusively to gruesome and violent content. As Newsweek reports, Norman then responded to the tests differently from the more standard AI, seeing gory deaths rather than everyday objects such as umbrellas. Creating a lunatic robot may sound "cool", but the results are chilling: for instance, where a standard AI sees "a black and white photo of a baseball glove", Norman sees a man "murdered by a machine gun in broad daylight".
