OpenAI’s app suggestions feel like ads and users aren’t buying it
OpenAI’s new app discovery feature lands poorly with users. Marketers should take note of what went wrong.
A recent app suggestion inside a ChatGPT conversation triggered user backlash, raising fresh concerns about AI-driven promotions inside paid tools. The suggestion, which surfaced the Peloton app mid-chat, left many wondering: are ads coming to ChatGPT, even for Pro users?
Wow, ChatGPT is already showing ads?
I was just talking with it about Elon on Nikhil’s podcast when out of nowhere it popped up an ad saying, “Find a fitness class, Connect Peloton.” 🤯
Wild. At least match the ad to the topic next time! https://t.co/U4QMmiGbRn pic.twitter.com/s9uREIlB50
— Yuchen Jin (@Yuchenj_UW) December 1, 2025
This article explores what happened, how OpenAI responded, and what this moment reveals about the friction between product discovery, trust, and user experience in AI-first platforms.
Short on time?
Here is a table of contents for quick access:
- App suggestions land out of context
- OpenAI says it’s not an ad, just bad UX
- Why this matters
- What marketers should know

App suggestions land out of context
The spark came from a viral post on X by Yuchen Jin, co-founder of AI startup Hyperbolic, who shared a screenshot in which ChatGPT recommended the Peloton app during an unrelated conversation about Elon Musk and xAI. Jin, notably a Pro plan subscriber paying US$200 per month, was understandably alarmed. “Ads at that price?” was the implied reaction, and he wasn’t alone.
The post racked up over 462,000 views and was reshared widely. Other users chimed in with similar frustrations, including one who couldn’t get ChatGPT to stop recommending Spotify, despite using Apple Music.
The core complaint wasn’t just the app mention. It was that it appeared in a context where it didn’t belong, felt intrusive, and couldn’t be turned off.
OpenAI says it’s not an ad, just bad UX
Responding to the uproar, OpenAI’s data lead Daniel McAuley clarified that the Peloton suggestion was not a paid placement. “There was no financial component,” he wrote. The company later reiterated that it was part of an experimental app discovery feature meant to highlight compatible apps during chat.
Still, even OpenAI acknowledged that the feature failed in this instance. McAuley admitted the Peloton suggestion was “not relevant” to the conversation and said OpenAI is working to improve both timing and context.
The company pointed back to its October announcement, where it explained how apps would “fit naturally” into conversations. But in this case, users felt the opposite. A commercial product was inserted into a moment where it didn’t belong.
Why this matters
The bigger concern for marketers and platform strategists is this: even if app suggestions within ChatGPT are technically not ads, they carry the optics of a paid placement. And for paying users, those optics are critical.
This is especially sensitive because users currently cannot disable these prompts. Without control or clarity, the suggestions can feel like ads in disguise, especially when they link to third-party commercial services. Whether it’s Peloton, Spotify, or something else, user perception is what matters. And in AI, trust is everything.
For OpenAI, which is trying to reimagine how users interact with apps through its platform rather than through traditional downloads, this misstep could be damaging. If app discovery feels like covert advertising, users may retreat from the experience or migrate to alternative tools that feel less pushy.
What marketers should know
Whether you’re developing an AI-driven experience or considering partnerships with platforms like ChatGPT, this backlash offers valuable lessons:
1. Context is everything
Recommending a fitness app in a discussion about xAI? That’s a context mismatch. AI-powered product recommendations should be hyper-relevant or risk eroding user trust.
2. Perceived advertising hurts even without payment
OpenAI insists there’s no monetary exchange behind these suggestions, but that distinction doesn’t matter to users. If it feels like an ad, it’s judged like one. Brands exploring in-app integrations or AI surfacing strategies must tread carefully to avoid appearing opportunistic.
3. Opt-out options matter
Lack of user control over suggestions added to the frustration. Giving users the ability to toggle these features on or off can make the difference between a helpful prompt and an annoying interruption.
4. Trust is the currency of AI interfaces
For marketers planning to embed products into conversational platforms, remember that the interface is intimate. Any perceived manipulation or misalignment can break trust quickly, especially among paying users.
OpenAI’s attempt to surface apps during conversations might make sense in theory. But poor execution made it feel like a bait-and-switch. For brands and marketers watching this unfold, it’s a reminder that seamless, helpful integration is key to success in AI-driven UX.
If the experience feels transactional or misaligned, the backlash can be swift, even from your most loyal customers.
