Marketing has a new audience.

It's not Gen Z. The new audience is AI itself—and if you want to reach the hundreds of millions of people who now start their buying decisions by asking ChatGPT, you need to learn how to persuade the model first.

The numbers are significant: ChatGPT serves 700 million weekly active users, Gemini has reached 450 million monthly users, and Perplexity 22 million. More importantly, 59% of Americans now use generative AI tools for online shopping, and 38% have already used AI to make a purchase decision. When someone asks "What's the best project management tool for a remote team?", they're increasingly asking an AI assistant rather than searching Google. That assistant forms an opinion about your brand, weighs your claims against competitors', and makes a recommendation.

The model has become an influential intermediary. And it operates differently than any channel you've optimized for before.

Why AI differs from traditional audiences

AI models hold the attention of hundreds of millions daily and serve as advisors for consequential decisions. They accumulate context about users' needs better than most salespeople could in a first call.

What makes them different is their emerging orientation toward verification. Models today aren't perfect fact-checkers: ChatGPT's hallucination rate sits at 28.6% for certain tasks, and early research showed models could be manipulated through strategic content placement. But the trajectory is clear. Models increasingly cross-reference claims, platforms like Perplexity ground every response in cited sources, and Claude's Extended Thinking mode exposes the reasoning process itself.

Think of this evolution like the internet's credibility journey. In 1995, "don't trust anything you read online" was reasonable advice. Today, every serious researcher starts with internet sources because verification mechanisms matured. AI is following a similar arc—but faster.

This creates an opportunity that compounds over time. As models improve at verification, brands with substantive claims and corroborating evidence will increasingly win recommendations. This isn't about gaming a system. It's about building the kind of legible credibility that both humans and AI can verify.

The advantage belongs to brands willing to be specific. Clear claims. Honest tradeoffs. Proof that matches what you say. When you oversell an integration and your documentation contradicts it, models may miss the discrepancy today—but they're getting better at catching it. And once they do, the contradiction becomes part of their evaluation.

This resembles the marketing shift of the 1930s, when economic pressure forced brands toward feature-based advertising and verifiable claims. Radio created a new medium; brands had to learn its rules. AI creates a new discovery channel with its own logic.

The diagnostic advantage: AI shows its work

Many AI models now expose their research and reasoning process—a development that changes how marketers can approach optimization.

When someone asks ChatGPT about project management tools, you can watch the model search the web in real time and see which sources it retrieves. Perplexity provides numbered citations for every claim. Claude's Extended Thinking mode reveals the reasoning chain before the final answer. Gemini's Search Grounding shows which web pages informed its response.

This visibility creates a diagnostic opportunity that didn't exist in traditional marketing. You can observe why a model recommended a competitor. You can see which documentation it retrieved, which claims it prioritized, and which information it couldn't find.

If the model says "Based on the documentation I found, Asana offers native Slack integration while your product requires a third-party connector," you learn something actionable. Either that's wrong and you need clearer documentation, or it's right and you know which capability gap costs you recommendations.
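
You can automate this kind of observation. Here is a minimal sketch that queries Perplexity's OpenAI-compatible chat completions endpoint and prints the sources the answer cites; the model name, the environment variable, and the exact response shape (including the citations field) are assumptions to verify against the current API docs.

```python
# Sketch: inspect which sources an answer-engine response cites.
# Assumes Perplexity's OpenAI-compatible /chat/completions endpoint
# returns a top-level "citations" list of source URLs; the model
# name "sonar" and the PPLX_API_KEY variable are placeholders.
import os
import requests

resp = requests.post(
    "https://api.perplexity.ai/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['PPLX_API_KEY']}"},
    json={
        "model": "sonar",
        "messages": [{
            "role": "user",
            "content": "What's the best project management tool for a remote team?",
        }],
    },
    timeout=60,
)
resp.raise_for_status()
data = resp.json()

print(data["choices"][0]["message"]["content"])
# Which pages did the model actually retrieve and lean on?
for url in data.get("citations", []):
    print("cited:", url)
```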

Traditional marketing operates on lagging indicators—conversions, closed-won interviews, retention cohorts. AI models offer something closer to concurrent feedback on how your positioning lands when evaluated against alternatives.

Your new job: understanding the model's psychology

The work shifts from campaign execution to systematic diagnosis. You need to understand how models perceive your brand across scenarios that matter for your business.

How do they describe you when someone asks about enterprise tools versus startup tools? What happens when the buyer adds constraints about Microsoft integrations? Do they position you as premium or budget? Simple or powerful? Credible or emerging?

You can test these questions systematically. The process looks more like user research than traditional marketing—except your "user" is the AI model, and you're mapping its opinion structure: where it's accurate, where it's outdated, where specific corrections would unlock more recommendations.
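
A minimal sketch of that mapping process, using the OpenAI Python SDK: run a battery of scenario prompts and record whether and how your brand appears. The brand name, model choice, and prompts are illustrative placeholders, not a prescribed setup.

```python
# Sketch: map a model's "opinion structure" across buying scenarios.
# Brand name, model, and prompts below are hypothetical examples.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND = "YourProduct"  # hypothetical brand name
SCENARIOS = [
    "What's the best project management tool for a remote startup?",
    "Which project management tools integrate natively with Microsoft Teams?",
    "What should an enterprise with 5,000 seats use for project management?",
]

for prompt in SCENARIOS:
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = reply.choices[0].message.content
    mentioned = BRAND.lower() in answer.lower()
    print(f"{prompt}\n  mentioned: {mentioned}\n  excerpt: {answer[:200]}\n")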

Researchers from Princeton and collaborating institutions demonstrated that targeted content optimization can improve visibility in AI responses by up to 40%. The most effective approaches varied by domain: citation-heavy content worked best for some categories, while statistical evidence proved more persuasive for others. The key finding is that models respond to different types of proof depending on query context.

Once you map where the model's view diverges from reality, the work becomes precise. Pick the two or three opinions that most constrain growth. Then build the specific, verifiable content the model needs to shift those opinions.

If you need the model to understand you're enterprise-ready, provide implementation timelines with recognizable logos. If your Microsoft-native workflow is a differentiator, document the specific objects, formats, and versions that make "native" accurate. If pricing is a barrier, explain the logic with concrete ROI examples.

The model will encounter this content—either during training or through retrieval during queries. It will evaluate consistency against other sources. And if your claims hold up under triangulation, the model updates its understanding. Then when a buyer asks their next question with their specific constraints, you're more likely to be recommended with a clear explanation of fit.

The compounding advantage of accuracy

Models aren't perfect at catching inconsistencies today, but they're improving. Research on LLM fact-checking capabilities shows current limitations—hallucination rates remain significant—but also demonstrates that retrieval-augmented systems perform substantially better when claims can be cross-referenced against multiple sources.

This creates a practical constraint: brands that maintain consistency between marketing claims, product documentation, and third-party mentions build more reliable signals for models to retrieve. When your marketing site says one thing and your documentation says another, you introduce ambiguity that becomes problematic as verification improves.

The strategic implication is straightforward. Brands that win in AI-driven discovery will be those willing to be precise about their positioning: who they're for and who they're not for, what tradeoffs exist in their approach, and which claims they can substantiate with specifics.

This isn't purely ethical reasoning—though it happens to align with building trust with human buyers too. It's practical. Models that cross-reference sources increasingly surface contradictions. The same content that makes AI recommendations more confident makes sales conversations more efficient, because buyers arrive already understanding your fit.

What this means for how teams work

Most marketing organizations are structured around campaign execution, creative production, and channel management. AI marketing introduces a different capability requirement: systematic diagnosis, opinion mapping, and evidence architecture.

The work requires understanding what models currently believe about you across relevant scenarios. Then deciding which beliefs most constrain growth. Then creating the substantiation models need to update those beliefs. And finally, measuring whether the change occurred—not through web traffic or demo requests, but through recommendation patterns in realistic conversations.
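
For the measurement step, a simple sketch: sample the same priority question repeatedly and track how often your brand is mentioned, so you can compare the rate before and after a content change. The model name and sample size are illustrative assumptions.

```python
# Sketch: track mention rate for one priority question over
# repeated samples, for before/after comparison of a content change.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND = "YourProduct"  # hypothetical brand name
QUESTION = "What's the best project management tool for a remote team?"
N = 20  # repeated samples smooth out run-to-run variance

hits = 0
for _ in range(N):
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": QUESTION}],
        temperature=1.0,  # sample the distribution, not one greedy answer
    )
    if BRAND.lower() in reply.choices[0].message.content.lower():
        hits += 1

print(f"mention rate: {hits}/{N} = {hits / N:.0%}")
```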

This combines elements of brand strategy, technical writing, and structured content. You're measuring persuasion through behavior change over time, similar to how user research tracks evolving perceptions.

Case studies from early adopters show measurable results. Tally, a form-building platform, reported that 25% of new users came from ChatGPT discoveries after it optimized its content for AI retrieval. Docebo found that 13% of high-intent leads originated from AI-driven research. One financial services firm increased its mention rate from 12% to 73% across priority topics within three months.

The pattern across these examples: specific, well-documented positioning that models could confidently cite when asked relevant questions.

Keller Maloney

Founder, Unusual
