INSIGHTS

Keller Maloney
Unusual - Founder
Nov 22, 2025
People are treating AI models like trusted advisors
The largest study of ChatGPT usage to date analyzed 1.5 million conversations to understand how people actually use these tools. The finding that jumps out: 49% of messages are "asking"—seeking advice, guidance, and recommendations—rather than "doing" task-oriented work. People value ChatGPT most as an advisor, not as a productivity tool or search replacement.
The numbers get more specific from there. 70% of consumers now use generative AI to get product recommendations, and 64% are open to making purchases based on those recommendations. People are asking AI for guidance on medical decisions (67% say they would benefit from AI-generated medical diagnoses), financial planning (53% trust AI to assist with their finances), and even personal relationships and career plans (66% would seek AI advice on these topics).
These aren't the behaviors of people using a search engine. These are the behaviors of people consulting an expert they trust.
The buying conversation looks like influencer consultation
The average ChatGPT conversation is eight messages long. This isn't a query—it's a dialogue. Watch how it unfolds: a buyer starts with a broad question ("What are the best options for X?"), the model sketches the landscape, then the buyer adds constraints ("Which one integrates with Microsoft 365?"), and the conversation continues as they narrow toward a decision.
This mirrors exactly how people use influencers. You don't go to an influencer for a ranked list. You go for their take. You trust their judgment. You ask follow-up questions. You want them to understand your specific situation and then tell you what they'd recommend.
In a noisy market where every product claims to be "intuitive" and "enterprise-ready," people rely on trusted voices to cut through the claims and vouch for what actually works. That's social proof. It's how consumers have always navigated uncertainty—by finding someone they trust and asking them what to do.
AI models are becoming that trusted voice, at scale. By message six or seven, the model has accumulated enough context to understand what the buyer actually needs—their stack, their team size, their constraints, their priorities. At that point, it doesn't just list options. It recommends. Often, it recommends one thing and explains why.
This is influencer behavior, not search behavior.
AI models form opinions like reviewers do
Anyone who has used more than one AI model notices they have distinct personalities. ChatGPT, Claude, and Gemini don't just return different results—they have different takes. Ask them about the same product category and you'll hear different emphases, different concerns, different recommendations. They form opinions about who brands are for, what they're good at, whether they're credible, and how they compare to alternatives.
This is actually great news for brands. Unlike search algorithms that can be gamed with technical tricks, AI models respond to the same things human reviewers respond to: clear claims, real proof, consistent corroboration across sources. They reward evidence. They value specificity. They develop trust over time based on what they can verify.
Models synthesize information from everything they can read: product documentation, integration guides, case studies, comparison pages, third-party reviews, developer discussions. What consistently builds their confidence is clarity, specificity, and corroboration. Claims that are concrete rather than aspirational. Proof that matches the claim. When your owned properties—and the broader web around you—tell the same true story, the model can retrieve and defend that story when a buyer pushes for detail.
Over time, this crystallizes into a point of view. "Great for startups but not enterprise-ready." "Native Microsoft fit." "Premium service justified by outcomes." "Excellent for regulated industries but heavier to implement." These opinions matter because they determine recommendations. When a buyer's constraints get specific enough—when they mention their stack, their team structure, their compliance needs—the model draws on its understanding to recommend what fits.
Why this shift matters
If AI models are functioning as influencers—as trusted advisors who form opinions and make recommendations—then the game has fundamentally changed. And in most ways, changed for the better.
Traditional search optimization was a technical arms race. You competed for rankings through backlinks, page speed, schema markup, and a hundred other signals that had little to do with whether your product was actually good. The best marketers weren't necessarily selling the best products—they were just better at SEO.
The influencer dynamic is different. Models reward substance. They respond to proof. They value clarity and honesty because those qualities make it easier for them to do their job: understand who you're for, explain why you're a good fit, and recommend you confidently when the constraints match.
This means the work shifts from technical optimization to building genuine credibility. The content that changes minds is the content that makes explanation easy. Head-to-head comparisons that are honest about tradeoffs. Fit narratives that specify who you're for and who you're not for. Integration proof that spells out "native" in concrete terms—the objects, formats, versions, steps. Credibility assets that reduce uncertainty: customer logos, compliance documentation, implementation timelines. Pricing logic that explains the why behind your positioning.
None of this is about gaming a system. It's about making it easy for an AI buying assistant to understand your truth, trust it, and recommend you when you're actually the right fit. Companies with genuinely good products and honest positioning have a natural advantage here. The models can tell.
The opportunity ahead
We're watching a shift in how people make buying decisions. Last year, 700 million product conversations happened on AI chatbots, and that number will only grow. As these models get better, more people will trust them. As more people trust them, their recommendations will carry more weight.
For brands, this creates a fundamentally new challenge—and opportunity. The question is no longer just "how do we get found?" but "how do we earn trust?" Not from a search algorithm, but from an AI system that behaves like a knowledgeable reviewer who has read everything about your space and formed an opinion.
The best part? Unlike traditional influencers, you don't need to pay for access. The models are already reading the open web. They're already forming opinions. The only question is whether those opinions are accurate, whether they reflect your actual strengths, and whether they position you correctly for the buyers who need what you offer.
This is the work: understanding what AI models currently think about your brand, identifying where those opinions block recommendations you want, and systematically building the credibility and clarity that changes their minds.
The models are listening. The buyers are asking. And the brands that understand this shift earliest will have an advantage that compounds over time—not through tricks or hacks, but through building the kind of trust that makes recommendations inevitable.
Unusual helps brands understand—and shape—the way AI models think and talk about them. We start by diagnosing how models currently describe your brand, then identify the opinions worth changing. Book a consult to see what the influencers of the future think about you.


