Why AI Recommendations Are Winner-Take-All

The shift from Google Search to ChatGPT as a personal buying assistant means a shift from a ranked list of results to a single recommendation. For brands, there is no "second place" when it comes to getting recommended by ChatGPT. Brands need to ensure that AI agents can find the specific information they need to make a recommendation.

Keller Maloney

Unusual - Founder

Nov 21, 2025

Google rewarded the top five. ChatGPT rewards one.

For two decades, the economics of search were forgiving. The top result captured roughly 40% of clicks. Second place took 19%. Third got 11%. Even fifth place earned 5-8% of traffic. If you ranked in the top five, you participated in the market. The distribution wasn't equal, but it was distributed.

ChatGPT collapsed that distribution to a single point.

When someone asks ChatGPT for a product recommendation, the conversation typically ends with one answer. Not five options to compare. Not a ranked list to evaluate. One recommendation that the model stakes its credibility on. For marketers, this changes everything: being "in the consideration set" no longer exists as a viable strategy. You're either the answer or you're invisible.

Why Google gave choices and ChatGPT gives answers

Google evolved to give buyers choices because that's what a search engine does. You type in "project management software," and Google returns a ranked list. The buyer's job is to open five tabs, skim feature lists, compare pricing, and decide. Google's job ends at ranking.

ChatGPT was trained to be an assistant. Its system prompt frames it that way from the very first sentence: the model's job is to help, not to return a list.

Now, ChatGPT has evolved into a buying assistant. When someone asks "what's the best project management software for my team," they're not asking for five options to research. They're asking ChatGPT to do the research and tell them what to buy. The model asks follow-ups. It accumulates context about stack, constraints, and preferences. Then it makes a single recommendation.

The average ChatGPT conversation is eight messages long. By the time the model recommends something, it knows the buyer's constraints—Microsoft 365 integration, non-technical team, specific budget range—and it's filtered down to the one solution that fits. Giving five options at that point would be a cop-out. The buyer explicitly asked the model to decide.

This isn't a cosmetic difference. It's a different job. Google optimized for showing you the best options. ChatGPT optimizes for telling you the right answer.

What changes when second place disappears

The math is brutal. In Google's top five, even the worst-performing result captures 5% of traffic. In ChatGPT's recommendation, the second-place brand captures 0%. There is no silver medal.

This makes "prompt share"—the percentage of prompts that mention your brand—a misleading metric. Being mentioned matters for visibility, but visibility without recommendation is worthless. If ChatGPT mentions you in the first response alongside four competitors, then the buyer adds a constraint and the model recommends someone else, you've been mentioned and lost. That mention doesn't show up in your pipeline.

The work shifts from presence to precision. You're no longer competing to be "one of the good options." You're competing to be the obvious choice when a buyer specifies what matters to them. The model must be able to explain, with specificity, why you fit their constraints better than alternatives.

The Parley pattern: not first, but only

One of our customers, Parley, builds AI tools for immigration lawyers. When we started working with them, this pattern repeated:

A prospective buyer would ask ChatGPT about visa-application drafting tools. The model would name several familiar products. Parley wasn't in that first sweep—typical in crowded categories where models default to well-known names.

Then the buyer would add the constraint that mattered: they wanted to draft directly inside Microsoft Word. On that follow-up, the model would search the web, find Parley's page explaining their Word-native add-in, and pivot. It would recommend Parley and describe, in concrete terms, how it fits a legal team's workflow.

Parley didn't win by being mentioned first. They won by being the only solution that fit when "Microsoft Word" entered the conversation. That's the new pattern. The first answer gets you into the conversation. The second question—anchored to a real constraint—decides the recommendation. Visibility earns a mention. Fit, backed by precise and specific content, earns the win.

Why this is brutal for some and excellent for others

For companies that relied on always being "in the mix," this is existentially threatening. If your strategy was to rank well enough that buyers would include you in their comparison spreadsheet, that strategy just stopped working. ChatGPT doesn't make comparison spreadsheets. It makes recommendations.

For companies with clear, defensible fit narratives, this is excellent news. If you're genuinely the best solution for a specific constraint—native Microsoft 365, strong in regulated industries, designed for non-technical teams—you don't need to be famous. You need to be discoverable and explainable when that constraint gets mentioned. ChatGPT doesn't care about brand awareness. It cares about whether you solve the buyer's actual problem.

Startups fighting to be known at all still need visibility work—getting listed in the directories and third-party sources that models consult in initial searches. But once visible, the game isn't about climbing a ranked list. It's about being the unambiguous answer when the buyer specifies what matters.

Winner-take-all doesn't mean "be everywhere"

The instinct is to panic and try to win every prompt. That's the wrong move. Winner-take-all means you need to win the prompts where you actually fit. If you're a premium enterprise tool, you might not need ChatGPT to recommend you to startups. You need it to recommend you when the buyer says "enterprise-grade" or "compliance documentation" or "implementation support."

The work is to identify the constraints where you're the best choice, then make sure the model can find and reuse that reasoning. This means publishing content that makes explanation easy: head-to-head comparisons that are honest about tradeoffs, fit narratives that specify who you're for and who you're not for, integration details that spell out "native" in literal terms.

You can't bluff this. Models triangulate across your site, your docs, and third-party mentions. If you claim Microsoft-native but your documentation contradicts it, the model treats that like any human would: as a reason not to trust you.

The paradox is that honesty becomes your competitive advantage. The clearest, most specific articulation of where you shine—and where you don't—earns both AI trust and human trust. The model recommends you because it can confidently explain why. The buyer trusts that recommendation because the reasoning is transparent.

Second place is gone. What remains is the question: when a buyer specifies what they need, are you the obvious answer?

The Unusual Feed
