An artificial intelligence researcher says he has created "the worst AI ever" capable of making tens of thousands of hateful posts online.
YouTuber Yannic Kilcher says he trained an AI using the Politically Incorrect message board on the website 4chan, a controversial forum infamous for its 'hateful' anonymous posts.
Using 3.3 million threads spanning three years of the site, the bot, called GPT-4chan, learned how to talk like the website's users. Its creator then 'unleashed' nine versions of the AI back onto 4chan, where they went on to make tens of thousands of offensive, cruel and even 'violent' posts in less than a day.
Using a language model, the AI was able to learn how to write posts that are 'indistinguishable' from those written by humans.
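The underlying idea, a model learning to imitate a body of text by predicting what comes next, can be sketched with a toy word-level Markov chain. This is a deliberately simplified stand-in, not Kilcher's actual method (he fine-tuned a large neural language model), and the example corpus below is invented for illustration:

```python
import random
from collections import defaultdict

def train_markov(corpus, order=2):
    """Build a word-level Markov model mapping each n-gram to observed next words."""
    model = defaultdict(list)
    for text in corpus:
        words = text.split()
        for i in range(len(words) - order):
            key = tuple(words[i:i + order])
            model[key].append(words[i + order])
    return model

def generate(model, seed, length=20, rng=None):
    """Generate text by repeatedly sampling a next word given the last n words."""
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(length):
        key = tuple(out[-len(seed):])
        choices = model.get(key)
        if not choices:
            break  # no continuation seen in training data
        out.append(rng.choice(choices))
    return " ".join(out)

# Illustrative usage with a made-up two-sentence corpus:
corpus = ["the cat sat on the mat", "the cat ran on the road"]
model = train_markov(corpus)
print(generate(model, ("the", "cat")))
```

A real system like GPT-4chan does the same thing at vastly greater scale, replacing the lookup table with a neural network that generalises beyond sequences it has literally seen.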
Apparently, 4chan users started noticing the bot's posts and speculating about who was behind them. Kilcher says that initially nobody suspected a chatbot because of how realistic the posts were.
"The model is quite vile, I have to warn you," Kilcher said. "It's essentially the same as if you went to the website and interacted with users there."
However, he said it "was good in a terrible sense. It perfectly encapsulated the mix of offensiveness, nihilism, trolling, and deep distrust of any information whatsoever that permeates most posts [on the site]."
Users of 4chan took to YouTube to share their experiences of interacting with the bot. One user wrote: "I just had it respond to 'hi' and it started ranting about illegal immigrants."
The experiment highlighted the potential scale of AI being used online to spread misinformation and hate speech. One commenter wrote: "Bravo you fooled me. It's really scary to think that I have been sharing memes with literal AIs because I doubt that you were the only one."
One user, Arnaud Wanet, wrote: "This can be weaponised for political purposes, imagine how easy one can sway an election outcome with this one way or another."
Kilcher warned people not to try the model at home, and the experiment drew criticism from AI ethics researchers.
One AI expert, Dr Lauren Oakden-Rayner, argued that the experiment "would never pass a human research ethics board."
She continued: "Medical research has a strong ethics culture because we have an awful history of causing harm to people, usually from disempowered groups… [Kilcher] performed human experiments without informing users, without consent or oversight. This breaches every principle of human research ethics."
Another AI researcher, Arthur Holland Michel, told Motherboard: "Building a system capable of creating unspeakably horrible content, using it to churn out tens of thousands of mostly toxic posts on a real message board, and then releasing it to the world so that anybody else can do the same, it just seems—I don't know—not right."
Kilcher argued that it was a prank and that the comments created by the AI weren't any worse than what was already on 4chan. He said: "Nobody on 4chan was even a bit hurt by this. I invite you to go spend some time on /pol/ and ask yourself if a bot that just outputs the same style is really changing the experience."