This is the best summary I could come up with:
Both Meta and Microsoft’s AI image generators went viral this week for responding to prompts like “Karl marx large breasts” and fictional characters doing 9/11.
“I don’t think anyone involved has thought anything through,” X (formerly Twitter) user Pioldes posted, along with screenshots of AI-generated stickers of child soldiers and Justin Trudeau’s buttocks.
One Bing user went further and posted a thread of Kermit committing a variety of violent acts, from attending the Jan. 6 Capitol riot, to assassinating John F. Kennedy, to shooting up the executive boardroom of ExxonMobil.
In the race to one-up competitors’ AI features, tech companies keep launching products without effective guardrails to prevent their models from generating problematic content.
Using roundabout prompts to coax generative AI tools into producing results that violate their own content policies is known as jailbreaking (the same term applies to cracking open other kinds of software, like Apple’s iOS).
Midjourney bans pornographic content, going so far as to block words related to the human reproductive system, but users are still able to bypass the filters and generate NSFW images.
The original article contains 1,221 words, the summary contains 181 words. Saved 85%. I’m a bot and I’m open source!