UK models call out AI image misuse in fashion

The BFMA’s new petition reignites the debate around AI, consent, and creative ownership

The British Fashion Model Agents Association (BFMA) is drawing a line in the digital sand. In its new campaign, “My face is my own,” the group is demanding stronger protections for models against unauthorized use of their likeness by AI tools.

This article explores what the petition means for marketers, why image rights in fashion are suddenly under fire, and how brands should navigate the rising legal and ethical risks of AI-generated visuals.

More than 2,300 professional models have backed the BFMA’s petition, which calls for written, voluntary consent before a model’s face, body, or likeness is used in any AI-generated content. The group warns that current UK legislation leaves models, along with other creative professionals such as stylists and photographers, wide open to exploitation by companies deploying generative AI tools.

And marketers should be paying attention. The use of synthetic media in brand campaigns isn’t slowing down, but the regulatory and reputational risks are rapidly stacking up.

What the BFMA is asking for

The petition centers on one key principle: models should have full control over how their image is used by AI.

According to the petition, no signatories “have granted any permission for their likeness, image, and/or characteristics to be used for any artificial intelligence purposes.” The BFMA insists that express consent must be obtained, ideally with clear licensing terms and a defined scope of use.

Beyond individual protections, the association is calling on the UK government to integrate these rights into forthcoming AI legislation. It is asking for a standalone framework, similar to new proposals in the EU and US, to regulate non-consensual digital replication and protect against deepfake misuse.

Without such legislation, the BFMA argues, creators are left at the mercy of more powerful commercial players with little to no legal recourse.

In the UK, protections over a person’s image are currently scattered across several domains: data protection laws, advertising codes, performer rights, and even criminal law.

But there’s no single, unified law explicitly preventing unauthorized AI-generated use of someone’s likeness. This patchwork system, according to the BFMA, gives commercial actors significantly more bargaining power than individuals.

By contrast, legislative proposals in the EU and the US are moving toward stricter AI consent frameworks. Some states in the US are already introducing rules around digital replicas and deepfake consent, while the EU’s AI Act includes clauses targeting manipulative and exploitative AI use in media.

This international contrast adds pressure for UK lawmakers to modernize protections, especially in industries where personal image is the product.

Why this matters for fashion and media marketers

The petition lands at a moment when AI is increasingly embedded into creative workflows, especially in fashion where brands are experimenting with AI-generated models and campaigns to cut costs or speed up production.

Take Valentino and Vans’ recent AI campaign, which was only greenlit because the participating models gave explicit, informed consent for their likeness to be used. That consent-first approach is fast becoming the ethical gold standard.

But when brands fail to follow suit, backlash is swift. Guess came under fire in July after a Vogue print ad featured a fully AI-generated model created by Seraphinne Vallora, with no real person involved. Social media lit up with criticism, accusing the brand of replacing real talent and devaluing creative labor.

These incidents are not just PR headaches. They are wake-up calls for marketers using or commissioning AI content. Without proper consent, brands risk alienating audiences, violating ethical standards, and possibly even facing legal action down the line.

What marketers should know

Whether you're in fashion, media, or martech, AI-generated likenesses are officially a reputational risk. Here's how to stay on the right side of both the law and your audience:

1. Get consent in writing, every time

If you're using AI tools that simulate or remix real faces, ensure you have clear, written agreements with talent. This protects not only the individual but also your brand from future disputes.

2. Be transparent with audiences

Whether you're publishing AI-created visuals or using AI in back-end production, disclosure matters. Consumers are increasingly wary of inauthenticity. Labeling AI content honestly builds trust.

3. Reassess your image sourcing workflows

Brands and agencies should audit their current use of AI in content production. Who owns the data behind your visuals? Were image rights cleared before training any model? These questions matter now more than ever.

4. Track regulatory shifts

The AI legal landscape is moving fast. Stay updated on UK government proposals and international legislation. If you're working across borders, prepare to navigate a mix of regulatory environments.

5. Prioritize ethical AI partnerships

Vet your creative studios and tech vendors. Look for partners that follow emerging ethical standards, especially around consent, licensing, and digital labor practices.

AI is transforming fashion and content creation, but it's also challenging the fundamentals of image rights and creative consent. The BFMA’s petition is more than just a protest. It’s a signal that individual creators are ready to push back against unchecked digital replication.

Marketers need to get ahead of this. Waiting for regulation to catch up is risky. By adopting consent-first AI practices now, brands can future-proof their strategies and show up as leaders in ethical innovation.

This article is created by humans with AI assistance, powered by ContentGrow. Ready to explore full-service content solutions starting at $2,000/month? Book a discovery call today.