Warner Bros. sues Midjourney over AI-generated superheroes and cartoon icons

Warner Bros. joins Disney and NBCU in challenging AI’s use of iconic characters. Here’s why marketers should care.

Another major studio has stepped into the legal ring against generative AI. Warner Bros. Discovery (WBD) has filed a federal lawsuit in Los Angeles against Midjourney, accusing the AI image platform of rampant copyright infringement involving its IP portfolio, including Superman, Batman, Wonder Woman, and characters from Looney Tunes, Scooby-Doo, and Rick and Morty.

The move follows a similar legal offensive from Disney and Comcast's NBCUniversal just months earlier. As the legal heat rises, the implications for brands experimenting with generative AI content are becoming harder to ignore.

This article explores what’s at stake in this high-profile lawsuit, why major media companies are taking aim at Midjourney, and what this wave of legal action could mean for marketers, content creators, and anyone using AI for brand storytelling.


WBD's 87-page complaint alleges that Midjourney "brazenly dispenses Warner Bros. Discovery's intellectual property as if it were its own," claiming the platform was trained on unauthorized copies of copyrighted content and lets users generate derivative works without consent.

The complaint cites multiple examples of AI-generated visuals that mimic WBD-owned characters and styles. It claims Midjourney knowingly and systematically enables infringement, pointing out that the company briefly blocked outputs involving WBD characters, only to lift those restrictions later and frame the reversal as a feature upgrade.

WBD is seeking unspecified financial compensation and a permanent injunction to stop Midjourney from further using its content. The company is also asking for statutory damages of up to US$150,000 per infringed work.

This isn’t an isolated fight. The Warner Bros. lawsuit closely follows Disney and NBCUniversal’s joint complaint against Midjourney in June, which named a sprawling list of infringed characters, including Darth Vader, Baby Yoda, Ariel, Spider-Man, and Shrek.

In both cases, the plaintiffs argue that Midjourney profited from models trained on unlicensed creative work, bypassing the massive investment that studios pour into building their intellectual property. Disney and NBCU’s complaint bluntly labeled Midjourney a “copyright free-rider” and “a bottomless pit of plagiarism.”

Together, these lawsuits press a common legal argument: it doesn’t matter whether infringement happens with a camera or an algorithm. Unauthorized copying is still piracy.

What marketers should know

Whether or not you're using Midjourney specifically, this legal clash has broader implications for brand marketers and creative professionals who rely on generative AI.

Here’s what to watch:

1. IP risk is now a content risk

Marketers often lean on AI tools for rapid content creation, but using them to reference pop culture or recognizable characters could now open the door to legal trouble. Even an unintentional resemblance might trigger scrutiny, especially in client-facing campaigns.

Action step: Audit your use of AI-generated content, especially if you work in entertainment, media, or anything involving pop culture themes.

2. Expect AI content guardrails to tighten

Midjourney briefly blocked WBD character outputs before reversing that decision. That kind of moderation whiplash is unsustainable. As lawsuits escalate, AI platforms will likely face pressure to implement stricter content filters, and some may err on the side of over-blocking to stay safe.

Action step: Stay flexible with your content workflows. AI tools may change what they allow, and what they block, overnight.

3. Don’t assume “open” datasets are safe to use

Generative AI tools often claim they’re built on public datasets. But these lawsuits argue that scraping web content to train models does not automatically qualify as fair use. The outcomes could reshape the legal boundaries for how creative assets are reused in AI pipelines.

Action step: When using AI-generated visuals or audio, ask your vendors how their models were trained. Transparency will matter more in the months ahead.

As Midjourney faces mounting lawsuits from some of the world’s biggest IP holders, marketers need to tread carefully. AI creativity is still exciting, but this legal battle underscores the risks of relying too heavily on tools that sidestep ownership boundaries.

The future of brand-safe AI content may come down to better tools, clearer licensing models, and smarter human oversight. Until then, the smart play is to assume that not all AI content is legally safe to use.

This article was created by humans with AI assistance, powered by ContentGrow. Ready to explore full-service content solutions starting at $2,000/month? Book a discovery call today.