Sett raises US$30M Series B to expand AI automation for game user acquisition
Sett’s funding supports AI agents for research, creative generation, testing, and optimization as mobile game UA teams push for faster iteration cycles.
Sett has raised US$30 million in a Series B round to scale an AI-driven marketing automation platform focused on user acquisition creatives for mobile games.
The funding brings total capital raised to US$57 million, as the company pushes deeper into automating the creative production and optimization loop that performance marketing teams in gaming run continuously.
Short on time?
Here’s a quick look at what’s inside:
- What Sett is building for the UA creative loop
- Where this fits in the performance marketing stack
- Competitive pressure: automation versus networks and DSPs
- What growth teams should evaluate before adopting
What Sett is building for the UA creative loop
Sett’s pitch is an agent-based system that spans multiple steps of the UA workflow: research, ideation, playable and video generation, deployment, and performance analysis. The company emphasizes “continuous learning,” where outputs and campaign feedback inform the next creative cycle.
In mobile gaming UA, the operational bottleneck is rarely buying inventory; it is producing and iterating creative variations quickly enough to find winners. Sett’s positioning leans into that reality with a system designed to:
- Generate creatives (including playable and video formats).
- Run and monitor campaigns.
- Learn from performance signals over time so future iterations improve.
The company also highlights a common dynamic in entertainment ads: many concepts fail, and the system’s job is to explore quickly rather than “predict” a single best creative upfront.
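The explore-first loop described above can be sketched in a few lines. This is a purely illustrative toy (the function, names, and scoring are hypothetical, not Sett's actual system): generate many variants, measure each one, keep the few winners, and let them seed the next cycle of ideation.

```python
import random

def run_creative_loop(num_cycles=3, variants_per_cycle=8, seed=42):
    """Toy sketch of an explore-first creative loop: generate many
    variants, measure them, and carry only the winners forward."""
    rng = random.Random(seed)
    best = []  # (score, variant) pairs surviving across cycles
    for cycle in range(num_cycles):
        # 1. Ideation: new variants are biased toward prior winners.
        base = [v for _, v in best] or ["concept"]
        variants = [f"{rng.choice(base)}-v{cycle}.{i}"
                    for i in range(variants_per_cycle)]
        # 2. Run: collect a performance signal per variant
        #    (stubbed here with random scores in [0, 1)).
        scored = [(rng.random(), v) for v in variants]
        # 3. Learn: most concepts fail; keep only the top performers.
        scored.sort(reverse=True)
        best = scored[:2]
    return best

winners = run_creative_loop()
print(winners)
```

The point of the sketch is the shape of the loop, not the scoring: in a real system, step 2 would be live campaign feedback rather than a random stub.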

Where this fits in the performance marketing stack
This category sits at the intersection of creative automation and performance marketing operations. For gaming marketers, the highest value tends to come from compressing iteration cycles: going from idea to live test to insight without weeks of production overhead.
If Sett can reliably reduce cycle time, the practical impact is less about replacing marketers and more about changing team allocation:
- UA strategists spend more time on hypotheses, audience strategy, and creative direction.
- Production becomes a scalable pipeline rather than a manual constraint.
- Learnings become institutional, not trapped in spreadsheets or individual operator memory.
The macro trend underneath is broader “AI marketing automation,” where systems increasingly own parts of execution (generation, testing, optimization) while humans focus on judgment calls and brand or gameplay nuance.
Competitive pressure: automation versus networks and DSPs
Sett operates in a competitive landscape that overlaps with major app growth and ad-tech players, including AppLovin, Liftoff, and Moloco. Those companies are strong in distribution, optimization, and performance infrastructure, and they increasingly bundle tooling that touches creative.
Sett’s differentiation claim is scope across the creative loop and the “agent” framing: not only generating assets, but feeding performance signals back into future ideation and production cycles.
The competitive question is whether gaming advertisers will prefer:
- A specialized creative automation layer that plugs into multiple buying platforms, or
- An integrated stack where creative tooling is bundled with optimization and inventory access.
In practice, adoption often depends on integration friction (connections to ad networks, asset pipelines, attribution tooling) and whether the system improves outcomes beyond what teams can already do with existing creative testing processes.
What growth teams should evaluate before adopting
For performance teams considering this type of platform, evaluation should be framed around operational impact, not feature lists:
- Baselines and lift: What metric is expected to improve (CPI, ROAS, retention-adjusted ROAS), and over what time window?
- Creative governance: How are brand and gameplay constraints enforced across many generated variants?
- Learnings portability: Are insights usable across titles, geos, and networks, or only within a narrow setup?
- Workflow fit: How does the system plug into existing creative ops, analytics, and attribution, and who owns what internally?
- Risk management: How does experimentation avoid wasted spend, and what controls exist for pausing or throttling underperforming variants?
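The baselines-and-lift question is the easiest to make concrete. A minimal sketch with hypothetical cohort numbers (the `cpi`, `roas`, and `lift` helpers are illustrative, not any platform's API) shows the comparison a growth team would run between a control cohort and an automated-creative test cohort:

```python
def cpi(spend, installs):
    """Cost per install."""
    return spend / installs

def roas(revenue, spend):
    """Return on ad spend."""
    return revenue / spend

def lift(test, control):
    """Relative change of a test metric over a control baseline."""
    return (test - control) / control

# Hypothetical control vs. automated-creative test cohorts.
control = {"spend": 10_000.0, "installs": 4_000, "revenue": 12_000.0}
test = {"spend": 10_000.0, "installs": 5_000, "revenue": 14_000.0}

cpi_change = lift(cpi(test["spend"], test["installs"]),
                  cpi(control["spend"], control["installs"]))
roas_change = lift(roas(test["revenue"], test["spend"]),
                   roas(control["revenue"], control["spend"]))

print(f"CPI change: {cpi_change:+.1%}")   # negative is good (cheaper installs)
print(f"ROAS lift: {roas_change:+.1%}")
```

With these made-up numbers, CPI drops 20% and ROAS lifts about 16.7%; the evaluation question is whether a platform can show that kind of lift against the team's real baseline, over an agreed time window.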
For marketers, the bigger signal is that creative ops is being treated as an automation problem at scale, with funding flowing to systems that can run faster iteration cycles and preserve learnings across campaigns.