Keller Maloney
Unusual - Founder
Dec 4, 2025
When someone asks ChatGPT for a software recommendation, they're not conducting a search—they're having a conversation. The model doesn't return a ranked list; it makes a judgment call. And like any judgment, it happens in stages.
Most brands think AI visibility is a single problem: show up more often in AI responses. But recommendation actually requires clearing two sequential hurdles, each demanding its own strategy. The first is visibility—getting into the conversation at all. The second is specificity—being the recommendation when constraints tighten.
Understanding which hurdle is blocking you changes everything about how you should approach AI marketing.
The Visibility Hurdle: Can the model find you?
Visibility is the simpler problem to diagnose but not always the easier one to solve.
A model can't recommend what it doesn't know exists. When someone asks a broad question—"What are the best options for project management software?"—the model pulls from what it learned during training and, increasingly, from what it finds in real-time search. If you're absent from both, you're invisible.
Startups struggle with this hurdle most acutely. They're too new to appear in the model's training data. They lack the third-party presence models consult when building landscape awareness: comparison sites, G2 listings, Reddit threads, industry roundups, niche directories. Without that footprint, even a perfect product gets left out of the initial response.
Enterprises typically clear the visibility hurdle by default. Decades of market presence mean they're mentioned everywhere. But that creates a different trap: visibility without clarity. Models know you exist but form vague or outdated opinions about who you're for and when you're the right choice.
The strategy for overcoming the visibility hurdle isn't about optimizing your own website—it's about earning presence in the places models look when they're trying to map a category. That means focusing on what we call "long-tail earned media": scattered mentions across diverse third-party sources rather than concentrated effort on a single channel.
The Specificity Hurdle: Can the model explain why to choose you?
Clearing visibility gets you mentioned. Clearing specificity gets you recommended.
Here's what happens in the middle of an AI conversation: constraints accumulate. The buyer starts broad, then narrows. "Which project management tool integrates natively with Microsoft 365?" Then narrows again. "Okay, which one is friendliest for non-technical teams?"
By the third or fourth turn, the model isn't just retrieving—it's reasoning. It's weighing tradeoffs, matching capabilities to constraints, and deciding which option best fits this specific buyer's context.
That's where specificity matters. Models form latent opinions about every brand: "Great for startups but not enterprise-ready." "Native Microsoft fit." "Premium service justified by outcomes." "Excellent for regulated industries but heavier to implement."
These opinions determine recommendations. If the model thinks you're "overkill for small teams," it won't suggest you when the buyer mentions they're a 10-person startup—even if you'd actually be a great fit. If it thinks you're a "toy" that lacks compliance features, it won't recommend you for regulated industries—even if you have SOC 2 certification.
Parley, one of our first clients, illustrates this perfectly. They build AI tools for immigration lawyers. When buyers asked ChatGPT about visa-application drafting tools, the model would sketch the landscape and name familiar products. Parley wasn't in that first sweep—a classic visibility problem.
But then the buyer would add a constraint: they wanted to draft directly inside Microsoft Word. On that follow-up, the model would search the web, find Parley's page that explicitly detailed their Word add-in, and pivot: it would recommend Parley and explain, in plain terms, how the add-in fit into a legal team's day-to-day workflow.
Visibility got Parley into the conversation. Specificity—precise, verifiable details about their Word-native integration—got them the recommendation.
Why Both Hurdles Require Different Strategies
You can't solve both problems the same way because models use different information at each stage.
For visibility, models rely heavily on third-party sources. When mapping a category, they consult comparison articles, directories, community discussions, and aggregator sites. Your strategy here is external: earn mentions in the places models look when building landscape awareness.
For specificity, models need details they can parse and reuse—concrete claims about who you're for, what you integrate with, when you're the right choice vs. the wrong choice. Your strategy here is about creating owned content that makes explanation easy: fit narratives, integration specs, head-to-head comparisons with honest tradeoffs, implementation timelines, pricing logic.
The mistake most teams make is treating visibility and specificity as the same problem. They create generic content that tries to serve both jobs and ends up solving neither. Or they focus obsessively on one hurdle while the other is actually what's blocking them.
Most AI visibility tools make this mistake structurally. They measure prompt share and mention frequency—useful for diagnosing the visibility hurdle but silent on whether models understand when to recommend you. You can have 80% visibility in your category and still lose recommendations if models think you're "legacy" or "overkill" or "great for enterprises but not SMBs."
The Sequential Nature of the Problem
You can't skip visibility to work on specificity. If models don't know you exist, they'll never find your detailed fit narratives or integration guides. But visibility without specificity leaves you perpetually in the "also mentioned" category—recognized but never confidently recommended.
This is why the work changes based on where you're stuck.
If you're failing the visibility hurdle:
Focus outward. Earn presence in third-party comparison articles, directories, and community discussions where models build category awareness.
Prioritize breadth over depth. One mention each in fifty different credible sources beats fifty mentions in one source.
Think long-tail media, not long-tail keywords. Models rarely cite landing pages but do consult external sources when making recommendations.
If you're failing the specificity hurdle:
Focus inward. Create detailed, model-friendly content on owned properties where you control the narrative.
Prioritize precision over volume. One page that clearly explains who you're for and why beats ten vague pages about "best-in-class solutions."
Make tradeoffs explicit. Models reward honest fit narratives: "Choose us if X, choose a competitor if Y."
Enterprises, already visible, typically need specificity work. Startups need visibility first, then specificity once they're routinely mentioned.
Measuring the Right Things
Traditional AI visibility metrics—prompt share, mention frequency, rank position—only measure the first hurdle. If you're stuck on specificity, optimizing these numbers won't help. You need different metrics.
To measure specificity, track recommendation share in constrained scenarios that reflect real buying contexts. Don't just ask "What are project management tools?"—that measures visibility. Ask "What project management tool integrates natively with Microsoft 365 and is friendly for non-technical teams?"—that measures whether models understand your specific fit.
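To make that concrete, here is a minimal sketch of what tracking recommendation share in constrained scenarios could look like, assuming the OpenAI Python client; the model name, brand, and prompts are placeholders to swap for your own category and the buying constraints you care about.

```python
# Minimal sketch: estimate recommendation share in constrained scenarios.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# in the environment; prompts, model name, and the brand are illustrative.
from collections import Counter
from openai import OpenAI

client = OpenAI()

BRAND = "YourBrand"  # hypothetical brand you're tracking

# Constrained prompts that mirror real buying contexts, not broad category asks.
CONSTRAINED_PROMPTS = [
    "What project management tool integrates natively with Microsoft 365 "
    "and is friendly for non-technical teams?",
    "Which project management tool would you recommend for a 10-person "
    "startup that needs SOC 2 compliance?",
]

RUNS_PER_PROMPT = 5  # repeat each prompt; answers vary from run to run

mention_counts = Counter()
for prompt in CONSTRAINED_PROMPTS:
    for _ in range(RUNS_PER_PROMPT):
        response = client.chat.completions.create(
            model="gpt-4o",  # assumption: substitute the model your buyers use
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content or ""
        if BRAND.lower() in answer.lower():
            mention_counts[prompt] += 1

for prompt in CONSTRAINED_PROMPTS:
    share = mention_counts[prompt] / RUNS_PER_PROMPT
    print(f"{share:.0%} recommendation share on: {prompt[:60]}...")
```

Treating a mention in the answer as a recommendation is a rough proxy; repeating each prompt smooths out run-to-run variation, and in practice you'd also want to check that the brand is actually being recommended rather than merely named alongside others.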
Track opinion deltas: how do models describe your positioning? Are they saying you're "enterprise-only" when you want to be seen as accessible to mid-market teams? Are they calling you "budget-friendly" when you want to be seen as premium?
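One lightweight way to track those deltas, sketched below under the same assumptions (OpenAI Python client, with the brand name and descriptor lists as placeholders), is to ask the model directly how it would position you and flag descriptors that clash with the positioning you want.

```python
# Minimal sketch: surface opinion deltas, i.e. how a model describes your
# positioning versus how you want to be described. Brand name and descriptor
# lists are placeholders; the OpenAI Python SDK is assumed, as above.
from openai import OpenAI

client = OpenAI()

BRAND = "YourBrand"  # hypothetical
DESIRED = {"mid-market", "accessible", "premium"}             # how you want to be seen
UNDESIRED = {"enterprise-only", "budget-friendly", "legacy"}  # labels you're fighting

prompt = (
    f"In a few short phrases, how would you characterize {BRAND}'s positioning "
    "in the project management software market? Who is it for, and who is it "
    "not a fit for?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute the model your buyers actually use
    messages=[{"role": "user", "content": prompt}],
)
description = (response.choices[0].message.content or "").lower()

matched_desired = {d for d in DESIRED if d in description}
matched_undesired = {d for d in UNDESIRED if d in description}

print("Model's description:\n", description, "\n")
print("Desired descriptors present:", matched_desired or "none")
print("Undesired descriptors present:", matched_undesired or "none")
```

Run periodically, the gap between the two sets is the delta to watch: it tells you whether the opinions models hold about you are drifting toward or away from the positioning you're working to establish.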
Monitor referral attribution. Which mentions actually convert to pipeline? A vague mention in a broad category response might inflate your visibility score but generate zero interest. A specific recommendation in a constrained scenario—even if less frequent—might drive significant pipeline.
The Work Depends on the Diagnosis
Most brands get stuck optimizing for the wrong hurdle. They pour effort into prompt share when their real problem is vague positioning. Or they obsess over detailed fit narratives that models never encounter because they lack basic category presence.
The goal isn't to be mentioned. It's to be recommended confidently when a buyer's constraints match your strengths. That requires clearing both hurdles, in sequence, with strategies matched to each.
Figure out which one is blocking you. Then do the work that actually matters.