The two requirements for AI recommendations

For any business to be recommended by ChatGPT or Claude, two things must be true:

  1. The model must know you exist. It needs to recall your name or find you through search when assembling its initial list of options.

  2. The model must believe you're the best solution for this specific buyer. It needs evidence that you can solve the user's particular problem better than the alternatives.

If you're building your startup correctly—solving a specific problem for a specific customer better than anyone else—you probably have an advantage on the second requirement. You're focused where the big players aren't. You've built your entire product around one niche, one workflow, one constraint that matters. When the question gets specific enough to match your niche, you should win.

The problem is the first requirement. You can't rely on the model recalling what you do off the top of its head. Most startups aren't in the model's training data in any meaningful way, and even if they are, the model won't spontaneously think of you when someone asks a broad question like "What's the best CRM?" or "What legal tech should I use?"

This is where enterprises and startups diverge. Enterprises are known by default—Salesforce, Microsoft, Oracle are always in the conversation—so their challenge is fit clarity (we've written about that elsewhere). Startups face the opposite problem: discovery, not differentiation.

And discovery in the age of AI depends on two mechanisms: the model's recall and the model's search tools.

How models actually make recommendations

When someone asks an AI assistant for product recommendations, the process unfolds in stages:

Stage 1: Build the shortlist

The model starts by assembling a list of plausible options. It does this through a combination of:

  • Recall: Products it knows about from training data

  • Generic searches: "best CRM software 2025," "top legal tech tools"

  • Directory and community scanning: G2, Capterra, Reddit, Quora, industry-specific aggregators

From this, it forms a mental shortlist—usually 5-10 products that seem relevant to the category.

Stage 2: Narrow based on constraints

Now it gets specific. If the model already knows the user's constraints—budget, team size, integrations, workflow requirements—it will immediately search for those details and narrow the list proactively. If it doesn't know the constraints yet, it presents the broad shortlist and asks follow-up questions.

Either way, once constraints are clear, the model searches again. This time, the searches are highly specific: "CRM native Gmail integration," "immigration legal tech Microsoft Word add-in," "project management Jira two-way sync." It's looking for concrete evidence that a product supports the user's specific constraint.

Stage 3: Recommend with confidence

If the model finds clear, credible evidence that you solve the specific problem—detailed documentation, integration guides, case studies, third-party validation—you move from the broad shortlist to a confident recommendation. If it doesn't find that evidence, it defaults to the safer, bigger-name option from Stage 1.
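
To make this concrete, here is a toy sketch of the three stages in Python. Everything in it is invented for illustration: the recalled names, the search results, and the evidence index are hypothetical stand-ins, since no vendor publishes how its assistant actually assembles recommendations.

    # Toy simulation of the three-stage flow described above. All products
    # and data are hypothetical stand-ins, not any vendor's implementation.

    RECALLED = ["BigCRM", "MegaSales", "KnownSuite"]    # training-data recall
    SEARCHED = ["BigCRM", "NicheCRM", "ListicleCRM"]    # generic searches + directories

    # Evidence a constraint-specific search might surface.
    EVIDENCE = {
        ("NicheCRM", "native Gmail integration"): "docs: OAuth flow, Gmail API sync",
    }

    def build_shortlist() -> list[str]:
        """Stage 1: merge recall and search results, dedupe, cap at 10."""
        seen: set[str] = set()
        shortlist: list[str] = []
        for name in RECALLED + SEARCHED:
            if name not in seen:
                seen.add(name)
                shortlist.append(name)
        return shortlist[:10]

    def recommend(constraint: str) -> str:
        shortlist = build_shortlist()
        # Stage 2: run a constraint-specific search for each candidate.
        for product in shortlist:
            evidence = EVIDENCE.get((product, constraint))
            if evidence:
                # Stage 3: concrete evidence found -> confident recommendation.
                return f"{product} ({evidence})"
        # No evidence found: fall back to the safest big name from Stage 1.
        return shortlist[0]

    print(recommend("native Gmail integration"))  # NicheCRM wins on specificity
    print(recommend("offline-first mode"))        # BigCRM wins by default

The asymmetry in those last two lines is the whole argument: with concrete evidence, the niche product wins; without it, the default big name does.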

The decisive moment isn't whether the model mentions you in its final answer. It's whether you're in the shortlist when the conversation begins. Once you're in that initial list, your niche focus becomes an advantage. But if you're invisible in Stage 1, you'll never make it to Stage 2, where specificity matters.

The exception: Getting caught on a specific search

There's one scenario where you can skip the shortlist and still get recommended—when the model's constraint-based search finds you directly.

Here's a real example. Parley is a legal-tech startup that builds AI tools for immigration lawyers. One of their first AI-driven customers started with a broad question:

"What are the best tools for visa application drafting?"

ChatGPT assembled its shortlist—familiar legal tech suites, well-known immigration platforms. Parley wasn't in that initial list. Then the buyer added their constraint:

"I need something that drafts directly inside Microsoft Word."

The model searched specifically for "immigration drafting Microsoft Word" and found Parley's AI-optimized page. This page spelled out the Word integration in concrete terms: the add-in architecture, the objects it manipulates, how it fits into a legal workflow. The model pivoted and recommended Parley.

This works, but it's rare. It depends on having exceptionally clear, detailed documentation about your specific niche that's structured in a way models can find and parse. It also depends on the buyer articulating a constraint specific enough to trigger that targeted search. Most of the time, if you're not in the initial shortlist, you won't get a second chance.

This is where a brand wiki becomes critical: a clear, comprehensive articulation of what you do, who you're for, and how you work. It's your backup plan, the substance a targeted search can surface when you're not in the shortlist. But don't rely on it as your primary strategy. The reliable path is getting into the shortlist in Stage 1.
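
What might that look like in practice? One conventional way to state those facts in machine-parseable form is schema.org markup; that choice is our assumption, not something the Parley example confirms. The sketch below emits a SoftwareApplication entry for a hypothetical Word-native drafting tool, with every field value invented.

    import json

    # Hypothetical brand-wiki entry as schema.org JSON-LD. The product,
    # audience, and features are invented; SoftwareApplication is a real
    # schema.org type, but nothing here claims any model weights it.
    entry = {
        "@context": "https://schema.org",
        "@type": "SoftwareApplication",
        "name": "ExampleDraft",
        "applicationCategory": "Legal technology",
        "description": (
            "Drafts immigration visa applications directly inside "
            "Microsoft Word via an Office add-in."
        ),
        "audience": {"@type": "Audience", "audienceType": "Immigration lawyers"},
        "featureList": [
            "Office add-in that writes into the active Word document",
            "Firm-specific template and clause library",
        ],
    }

    print(json.dumps(entry, indent=2))

Whether any given model reads JSON-LD is an open question. The value is the discipline it forces: every field is a concrete fact the model could repeat.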

Long-tail earned media: The startup visibility strategy

So how do you get into that initial shortlist? You need to appear in the places models are searching during Stage 1—the directories, listicles, and aggregators they consult when building their mental map of a category.

This is where startups should borrow a page from classic SEO strategy, but apply it differently. In traditional SEO, "long-tail" meant targeting hundreds of niche keyword variations. The insight was that it's easier to rank for many specific queries than one competitive head term.

The AI equivalent is long-tail earned media. Instead of targeting niche keywords, target niche sources.

The model isn't just reading the New York Times Wirecutter roundup (though that helps). It's also searching industry blogs, vertical-specific directories, community forums, and small publications you've never heard of. For any given category, there are dozens—sometimes hundreds—of these sources.

Consider Sunset, one of Unusual's customers in the deathcare software space. During our research, we discovered that ChatGPT routinely finds and cites articles from FuneralVision.com, a news outlet for funeral directors. It isn't a mainstream publication, but when the model searches "funeral home software," FuneralVision articles show up in the results. A mention there puts you in the shortlist for anyone asking about funeral home technology.

This pattern repeats across industries. There are PropTech blogs that models cite when people ask about real estate software. DevOps newsletters that come up for infrastructure tools. Legal tech roundups on niche industry sites. Healthcare IT forums. Logistics and supply chain aggregators.

The strategy: get mentioned in as many of these long-tail sources as possible.

Some are harder to access than others. A New York Times feature is difficult. But many of these sources are:

  • Free directories where you can self-list (G2, Capterra, Product Hunt, category-specific databases)

  • Community forums where authentic participation leads to organic mentions (Reddit, Quora, industry Slack groups, niche forums)

  • Small industry publications that are hungry for expert sources and product news

  • Vertical-specific roundups ("Best Tools for X") where reaching out to the author can get you included

The goal isn't traffic from these sources. It's presence. When the model does its initial search—"best tools for immigration lawyers," "funeral home software options," "PropTech for commercial real estate"—you want to appear in enough of the results that you make the shortlist.

This is more efficient than the long-tail keyword approach. Instead of creating hundreds of landing pages (which models rarely cite anyway—they strongly prefer long-form content), you get mentioned in hundreds of independent sources. One listing on a relevant directory. One comparison article from a niche blog. One Reddit comment from a real user. These scattered signals accumulate and get you into the conversation.

The other critical piece: Clear documentation

Getting into the shortlist solves the discovery problem. But you also need to win on specificity when the model narrows its search in Stage 2.

From our data at Unusual, AI models rarely cite landing pages when making recommendations. They overwhelmingly prefer long-form content: detailed documentation, integration guides, comparison pages, use case explanations. They want substance, not marketing copy.

This is the second place where startups often fail—not because they lack substance, but because that substance isn't documented in a way models can find and use.

What you need:

Literal, specific documentation. If your competitive edge is "native Slack integration," spell out what "native" means. Not "works seamlessly with Slack" but "uses Slack's Block Kit UI, supports OAuth, syncs channels bidirectionally, updates in real-time via webhooks." The model needs facts it can repeat with confidence.
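
As a before-and-after, here's that Slack claim written both ways. The product and its documentation are hypothetical; Block Kit, OAuth, and the Events API are real Slack concepts.

    # The same integration claim written twice. Only the second version
    # gives a model concrete facts it can repeat with confidence.

    VAGUE = "Works seamlessly with Slack."

    LITERAL = """Slack integration
    UI: messages rendered with Slack's Block Kit
    Auth: Slack OAuth 2.0 with granular bot scopes
    Sync: channels sync bidirectionally
    Updates: real-time, via the Slack Events API and webhooks"""

    print(LITERAL)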

Use case and workflow content. Write the pages the model is searching for. "Why real estate teams use [YourProduct]" with specific workflow benefits: MLS integration, commission tracking, open house scheduling. Make it easy for the model to match you to the right buyer.

Technical proof for hyper-specific queries. Models search with remarkably specific terms: the average ChatGPT search query is six words long, and we routinely see far narrower ones, like "Azure Health Data Services DICOM service REST API STOW-RS WADO-RS QIDO-RS documentation." If you have technical capabilities or integrations, document them thoroughly. These ultra-specific searches are where startups win.

Customer evidence and validation. Case studies, logos, testimonials that reference specific use cases. Third-party blog posts about using your product for particular workflows. Models triangulate across sources. The more consistent the signal, the more confidently they recommend you.

You don't need a massive content library. You need clarity about your niche. If you've built your product correctly, you already know what makes you different and who you're for. Document that clearly, and when the model searches for evidence that you can solve a specific constraint, it will find what it needs.

This is also how you escape the "toyware" or "not ready for serious use" label that kills startup recommendations. Models are cautious about recommending unknown products. Clear documentation—especially around security, compliance, customer proof, and implementation—reduces that uncertainty.

Why this works: You're already differentiated

Most startups don't have a product problem or a positioning problem. They have a discoverability problem.

If you've built a real startup, you've already done the hard part. You know your niche. You've built something better than the incumbent for a specific customer with a specific constraint. You don't need to persuade the model you're better than Salesforce for everyone—you just need to persuade it you're better for this particular buyer with this particular workflow.

The AI era doesn't change your core differentiation. But it does change how you solve for visibility.

The path isn't long-tail SEO. It's long-tail earned media plus clear documentation.

Get mentioned in the scattered directories, forums, and niche publications that models are consulting during their initial research. Show up in enough places that when the model searches your category, you make the shortlist. Then have clear enough documentation that when the question gets specific to your niche, the model finds the evidence it needs to recommend you confidently.

Do that, and you go from invisible to recommended—not because you tricked the model, but because you made it easy for the model to see what you've built and who it's for.

Want to see how AI models currently think about your startup? We can show you in 30 minutes. Book a consult.

Keller Maloney

Unusual - Founder
