Unhinged AI
Elon Musk’s Grok released a new AI image-generation feature on Tuesday night that, like the AI chatbot, has few safeguards. That means you can generate fake images of Donald Trump smoking marijuana on the Joe Rogan show, for example, and upload them straight to the X platform. But it’s not Elon Musk’s AI company powering the madness; instead, a new startup — Black Forest Labs — is behind the controversial feature.
The collaboration between the two came to light when xAI announced it is working with Black Forest Labs to power Grok’s image generator using its FLUX.1 model. An AI image and video startup that launched on August 1, Black Forest Labs appears to sympathize with Musk’s vision for Grok as an “anti-woke chatbot” without the strict guardrails found in OpenAI’s DALL-E or Google’s Imagen. The social media site remains flooded with outrageous images from the new feature.
Black Forest Labs is based in Germany and recently came out of stealth with $31 million in seed funding, led by Andreessen Horowitz, according to a press release. Other notable investors include Y Combinator CEO Garry Tan and former Oculus CEO Brendan Iribe. The startup’s co-founders, Robin Rombach, Patrick Esser, and Andreas Blattmann, were formerly researchers who helped create Stability AI’s Stable Diffusion models.
According to Artificial Analysis, Black Forest Labs’ FLUX.1 models surpass Midjourney’s and OpenAI’s AI image generators in terms of quality, at least as ranked by users in its image arena.
The startup says it is “making our models available to a wide audience,” with open-source AI image-generation models on Hugging Face and GitHub. The company also says it plans to create a text-to-video model soon.
In its launch release, the company said it aims to “enhance trust in the safety of these models.” Some might say, however, that the flood of its AI-generated images on X on Wednesday did the opposite. Many of the images users created with Grok and Black Forest Labs’ tools, such as Pikachu holding an assault rifle, could not be recreated with Google’s or OpenAI’s image generators. There is little doubt that copyrighted imagery was used for the model’s training.
That’s the point
This lack of safeguards is likely a significant reason Musk chose this collaborator. Musk has made clear that he believes safeguards actually make AI models less safe. “The danger of training AI to be woke — in other words, lie — is deadly,” said Musk in a tweet from 2022.
Anjney Midha, a board director of Black Forest Labs, posted on X a series of side-by-side comparisons of images generated on day one of the launch by Google Gemini and Grok’s FLUX collaboration. The thread highlights Google Gemini’s well-documented issues with creating historically accurate pictures of people, namely its tendency to inject racial diversity into images inappropriately.
A firehose of misinformation
This general lack of safeguards could cause problems for Musk. The X platform drew criticism when explicit AI-generated deepfake images of Taylor Swift went viral on the platform. Beyond that incident, Grok generates hallucinated headlines that appear to users on X almost weekly.
Just last week, five secretaries of state urged X to stop spreading misinformation about Kamala Harris. Earlier this month, Musk reshared a video that used AI to clone Harris’ voice, making it appear that the vice president admitted to being a “diversity hire.”
Musk seems intent on letting misinformation like this pervade the platform. By allowing users to post Grok’s AI images, which appear to lack any watermarks, directly on the platform, he has essentially opened a firehose of misinformation into everyone’s X newsfeed.