Meet Black Forest Labs, the startup powering Elon Musk’s unhinged AI image generator

Elon Musk’s Grok released a new AI image-generation feature on Tuesday night that, just like the AI chatbot itself, has very few safeguards. That means you can generate fake images of Donald Trump smoking weed on the Joe Rogan show, for example, and upload them straight to the X platform. But it’s not really Elon Musk’s AI company powering the madness; rather, a new startup, Black Forest Labs, is the outfit behind the controversial feature.

The collaboration between the two was revealed when xAI announced it is working with Black Forest Labs to power Grok’s image generator using its FLUX.1 model. An AI image and video startup that launched on August 1, Black Forest Labs appears to sympathize with Musk’s vision for Grok as an “anti-woke chatbot,” without the strict guardrails found in OpenAI’s DALL-E or Google’s Imagen. The social media site is already flooded with outrageous images from the new feature.

Black Forest Labs is based in Germany and recently came out of stealth with $31 million in seed funding, led by Andreessen Horowitz, according to a press release. Other notable investors include Y Combinator CEO Garry Tan and former Oculus CEO Brendan Iribe. The startup’s co-founders, Robin Rombach, Patrick Esser, and Andreas Blattmann, were formerly researchers who helped create Stability AI’s Stable Diffusion models.

According to Artificial Analysis, Black Forest Labs’ FLUX.1 models surpass Midjourney’s and OpenAI’s AI image generators in terms of quality, at least as ranked by users in its image arena.

The startup says it is “making our models available to a wide audience,” with open source AI image-generation models on Hugging Face and GitHub. The company says it plans to create a text-to-video model soon, as well.

https://twitter.com/Esqueer_/status/1823789104879800368

In its launch release, the company says it aims to “enhance trust in the safety of these models”; however, some might say the flood of its AI-generated images on X on Wednesday did the opposite. Many images users created with Grok and Black Forest Labs’ tool, such as Pikachu holding an assault rifle, could not be re-created with Google’s or OpenAI’s image generators. There’s also little doubt that copyrighted imagery was used to train the model.

That’s kind of the point

This lack of safeguards is likely a major reason Musk chose this collaborator. Musk has made clear that he believes safeguards actually make AI models less safe. “The danger of training AI to be woke — in other words, lie — is deadly,” said Musk in a tweet from 2022.

Black Forest Labs board director Anjney Midha posted on X a series of comparisons between images generated on day one of launch by Google Gemini and by Grok’s FLUX.1 integration. The thread highlights Google Gemini’s well-documented issues with creating historically accurate images of people, specifically its tendency to inject racial diversity into images inappropriately.

“I’m glad @ibab and team took this seriously and made the right choice,” said Midha in a tweet, referring to FLUX.1’s seeming avoidance of this issue (and mentioning the account of xAI lead researcher Igor Babuschkin).

Because of this flub, Google apologized and turned off Gemini’s ability to generate images of people in February. As of today, the company still doesn’t let Gemini generate images of people.

A firehose of misinformation

This general lack of safeguards could cause problems for Musk. The X platform drew criticism when explicit AI-generated deepfake images of Taylor Swift went viral on the platform. Beyond that incident, Grok generates hallucinated headlines that appear in users’ X feeds almost weekly.

Just last week, five secretaries of state urged X to stop Grok from spreading misinformation about Kamala Harris. Earlier this month, Musk reshared a video that used AI to clone Harris’ voice, making it appear as if the vice president had admitted to being a “diversity hire.”

Musk seems intent on letting misinformation like this pervade the platform. By allowing users to post Grok’s AI images, which seem to lack any watermarks, directly on the platform, he’s essentially opened a firehose of misinformation pointed at everyone’s X newsfeed.
