How Enterprises Lose AI Recommendations (And How to Fix It)
If you're an enterprise software company, you don't have a visibility problem. You have a specificity problem. When a user asks ChatGPT about the best project management tools, ChatGPT will almost always consider your solution. But being considered isn't the goal; being recommended is. Getting recommended means the model concludes your solution isn't "overkill," "overly complex," or "legacy," but that it precisely solves the user's needs. That requires publishing clear documentation about every use case of your tool and making it available to LLMs.

Keller Maloney
Unusual - Founder
Nov 5, 2025
If you're an enterprise software company, you don't have a visibility problem. When someone asks ChatGPT "What are the best CRM platforms?" or "What tools should I use for project management?", your name comes up. You're established. You have brand recognition. You're in every directory, every listicle, every comparison article the model searches.
Congratulations. You made the shortlist.
Now comes the hard part.
Because making the shortlist doesn't mean you win the recommendation. In fact, for enterprise brands, being visible is often where the trouble starts. The model knows your name—it just doesn't think you're the right fit.
The multi-turn bake-off
Here's how AI recommendations actually work: Someone starts broad—"What's the best project management software?"—and the model does what you'd expect. It searches, finds the usual suspects, and lists 5-10 options including yours. So far, so good.
Then the conversation continues:
"We're a 15-person startup. Which of these is simplest to set up?"
Or:
"We need native Slack integration and real-time collaboration."
Or:
"Which one is best for agile teams that don't want to pay for features we won't use?"
The model searches again—often with remarkably specific queries. It's looking for proof that matches the constraint. And this is where enterprise brands hemorrhage recommendations.
Not because you can't do what they're asking. You probably can. Your platform likely has every feature they mentioned and fifty more. But when the model searches for "simple setup 15-person team" or "lightweight agile tool," what it finds—or doesn't find—determines whether it recommends you or pivots to a competitor.
And the thing it tends to find about enterprise platforms? Legacy. Overkill. Built for a different era.
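To see where you stand in that second turn, you can replay the bake-off yourself. Below is a minimal sketch assuming the OpenAI Python SDK; the model name, prompts, and brand string are illustrative placeholders, not a prescribed methodology.

```python
# Minimal sketch: replay the two-turn "bake-off" and check whether a brand
# survives the constraint. Model, prompts, and brand are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BRAND = "YourPlatform"  # hypothetical brand to watch for
MODEL = "gpt-4o"        # any chat-capable model works for this sketch


def ask(messages):
    """Send the running conversation and return the assistant's reply text."""
    response = client.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content


# Turn 1: the broad shortlist question.
history = [{"role": "user", "content": "What's the best project management software?"}]
shortlist = ask(history)
history.append({"role": "assistant", "content": shortlist})

# Turn 2: the buyer adds a constraint. This is where enterprise brands drop off.
history.append({
    "role": "user",
    "content": "We're a 15-person startup. Which of these is simplest to set up?",
})
recommendation = ask(history)

# Crude substring check; a real evaluation would use a stricter judge.
print("Shortlisted in turn 1:", BRAND.lower() in shortlist.lower())
print("Recommended after constraint:", BRAND.lower() in recommendation.lower())
```

Run it a handful of times with the constraints your buyers actually add; the gap between the two printed values is the problem this post is about.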
The latent enterprise opinion
Models develop a kind of opinion about brands over time. They read your documentation, your case studies, your third-party reviews, your support forums. They see which customers you showcase, which industries you emphasize, which problems you say you solve.
From all this, they form a narrative. And for most enterprise platforms, that narrative sounds something like:
"Comprehensive but complex"
"Enterprise-grade but overkill for smaller teams"
"Feature-rich but steep learning curve"
"Built for Fortune 500s"
"Legacy platform with modern features bolted on"
None of this is necessarily wrong. But it's incomplete. And incomplete narratives cost you recommendations.
When a buyer adds a constraint—"We're a small team," "We need something simple," "We want modern, not legacy"—the model weighs its latent opinion against the specific ask. If all it knows about you is "comprehensive enterprise platform," and the buyer just said they want "lightweight and simple," the model pivots. It recommends the focused startup tool instead.
You lose not because you can't serve the use case, but because the model doesn't know you can.
The feature catalog trap
The instinct for most enterprise marketing teams is to emphasize breadth: "We do everything." This makes sense in traditional marketing. More capabilities = more reasons to buy.
But in agentic search, breadth without specificity is a liability.
When a model searches for "project management simple setup small teams," it's not looking for a list of every feature you have. It's looking for proof that you're good at this specific thing. If all your documentation emphasizes how comprehensive you are—how many industries you serve, how many use cases you cover—you don't match the search intent. The model reads "built for everyone" and concludes "not built for me."
Meanwhile, the startup competitor has a single page that says: "Project management for agile startups. Set up in 15 minutes. Built for teams of 5-50." The model finds this, recognizes the fit, and recommends them.
Your 47 features don't matter if the model can't find clear documentation on the one feature that matches the buyer's constraint.
The fix: Use case clarity, not feature breadth
The solution isn't to simplify your product (you're an enterprise platform—complexity is often the point). The solution is to document the specific scenarios where you shine with the same clarity that startups use to describe their entire product.
This means changing what your documentation actually says.
Not this:
"Our platform offers robust project management capabilities with advanced customization options, enterprise-grade security, and integration with hundreds of tools."
But this:
"For agile startups (5-50 people): Set up in 20 minutes. Native Slack and GitHub integration. Kanban boards, sprint planning, and real-time collaboration. No enterprise features you don't need. $49/month for teams under 25."
And then, separately:
"For regulated industries: SOC 2 Type II, HIPAA compliant, GDPR ready. Role-based access control, audit logging, data residency options. See our compliance documentation."
And separately again:
"For distributed teams: Async-first workflows, time zone-aware scheduling, mobile-native experience. Works offline, syncs when connected."
Each of these is a different buyer with different constraints. Your platform can serve all three. But if the model only finds generic "enterprise platform" messaging, it won't confidently recommend you for any of them.
How to document for the bake-off
When a model searches for proof during that second or third turn—after the buyer has added their constraints—it's looking for specific, literal, verifiable content. Here's what works:
1. Scenario-based documentation
Create pages for specific use cases, not just feature categories. Not "Collaboration Features" but "Project Management for Remote Teams" or "Agile Workflows for Startups" or "Enterprise Resource Planning for Healthcare."
Title them literally. Use the words your buyers use. If they search "simple CRM for small business," have a page called exactly that—not "Small Business Solutions" or "SMB Features."
2. Clear constraints and fit signals
Say who you're for and who you're not for in each scenario. Company size. Industry. Tech stack. Budget. Implementation timeline. When a model reads "built for teams of 10-100 with existing Microsoft 365 deployments," it knows exactly when to recommend you.
3. Proof, not promises
The model needs specifics it can repeat back to the user. Not "easy setup" but "configure in 45 minutes with our guided workflow." Not "integrates with popular tools" but "native two-way sync with Slack, Teams, Jira, GitHub, and Salesforce—see our API documentation."
4. Honest tradeoffs
Models reward clarity, including clarity about what you're not. If you're premium, say why the price is justified. If you're feature-rich, acknowledge the learning curve and show how you mitigate it. If you're built for scale, don't pretend you're also the simplest option for solopreneurs.
This honesty doesn't cost you recommendations. It focuses them. The model stops recommending you to buyers who are bad fits (who would churn anyway) and recommends you more confidently to buyers who are good fits.
A specific use case: Microsoft-native workflows
One pattern we see repeatedly: enterprise platforms that integrate with Microsoft 365 but don't clearly document what "native Microsoft integration" actually means.
The buyer adds the constraint: "We live in Microsoft. Everything needs to work inside Teams and SharePoint."
The model searches. It finds that you "integrate with Microsoft 365," but it also finds the same claim from six competitors. What it doesn't find—unless you've documented it clearly—is:
Which Microsoft products you integrate with (Teams? SharePoint? OneDrive? Power BI? Azure AD?)
What "integrate" means (sync files? embed UI? SSO? API access?)
What workflows this enables (edit documents in Teams without leaving? Trigger workflows from SharePoint? Auto-provision users from Azure AD?)
What technical requirements exist (which Microsoft 365 plans? which API versions?)
Without this specificity, "integrates with Microsoft" means nothing. With it, the model can confidently say: "For teams embedded in the Microsoft ecosystem, [your platform] offers native integration with Teams, SharePoint, and Azure AD, allowing you to [specific workflow]. This means you can [specific benefit]."
That's a recommendation. The generic "integrates with Microsoft" claim is not.
Where to publish this content
Models read the open web. The best results come from content that's:
Crawlable: On your domain or a subdomain (like ai.yourcompany.com)
Literal: Titles that say exactly what they are
Specific: Use cases, not feature lists
Linked: Connected to your main docs, API references, and third-party sources that corroborate your claims
Many enterprise teams worry about creating AI-specific content that's separate from their main marketing site. The concern is: "Won't this create inconsistency?"
The answer is no—if you approach it correctly. AI-optimized content shouldn't contradict your marketing. It should complement it by adding the layer of specific, scenario-based documentation that makes you recommendable in multi-turn conversations. Think of it as technical documentation for specific buyer workflows, not a separate brand message.
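On the "crawlable" point specifically, it's worth confirming that your robots.txt isn't quietly blocking AI crawlers from the pages you want them to read. Here's a minimal sketch using Python's standard library; the domain, page path, and list of crawler user agents are illustrative placeholders.

```python
# Minimal sketch: check whether common AI crawlers are allowed to fetch a
# scenario page. The domain, path, and user-agent strings are illustrative;
# swap in your own site and any additional bots you care about.
from urllib.robotparser import RobotFileParser

SITE = "https://www.yourcompany.com"  # placeholder domain
PAGE = f"{SITE}/use-cases/simple-crm-for-small-business"  # placeholder path
AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"]

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()  # fetches and parses robots.txt

for bot in AI_CRAWLERS:
    allowed = parser.can_fetch(bot, PAGE)
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'} for {PAGE}")
```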
Measuring what matters
If the goal is winning the multi-turn bake-off, measure that. Track:
Recommendation share in constrained scenarios: When buyers add the constraints you care about, how often does the model choose you? Test scenarios like "for startups," "for regulated industries," "for Microsoft-native teams," etc.
Opinion deltas: Is the model's narrative shifting? Are you still described as "enterprise-grade but complex" or are you now "enterprise-grade with streamlined workflows for smaller teams"?
Constraint-match rate: How often can the model explain why you're the right fit for a specific scenario versus giving a generic "they're a leader in the space" answer?
The goal isn't to win every scenario. It's to win the ones you actually serve well—and to lose quickly and clearly on the ones you don't.
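If you want to put a rough number on recommendation share, something like the following works as a starting point. It's a minimal sketch assuming the OpenAI Python SDK; the scenarios, brand name, model, and run count are placeholders, and substring matching is a crude stand-in for a real "was it recommended" judge.

```python
# Minimal sketch: estimate recommendation share across constrained scenarios
# by repeating each prompt several times and counting brand mentions.
# Scenarios, brand, model, and run count are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

BRAND = "YourPlatform"
MODEL = "gpt-4o"
RUNS = 10
SCENARIOS = {
    "startups": "Best project management tool for a 15-person startup that wants simple setup?",
    "regulated": "Best project management tool for a HIPAA-regulated healthcare team?",
    "microsoft-native": "Best project management tool for a team that lives in Microsoft Teams and SharePoint?",
}

for name, prompt in SCENARIOS.items():
    hits = 0
    for _ in range(RUNS):
        reply = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        hits += BRAND.lower() in reply.lower()
    print(f"{name}: mentioned in {hits}/{RUNS} runs ({hits / RUNS:.0%})")
```

Track the same scenarios over time, and the trend matters more than any single run.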
Enterprise advantages (yes, you have them)
This might sound like bad news for enterprises, but it's not. You have structural advantages that startups don't:
1. Proof at scale
You have hundreds or thousands of customers. Use them. Case studies that show specific use cases—"How a 50-person fintech startup uses [your platform]" or "How [specific industry] teams use [specific feature]"—give models concrete proof to cite.
2. Established integrations
Your ecosystem is probably deep. Document it specifically. Not "integrates with 500+ tools" but "native integration with Salesforce: two-way sync, custom field mapping, real-time updates. See our Salesforce integration guide."
3. Compliance and security
Regulated industries have constraints you've already solved. SOC 2, HIPAA, GDPR, FedRAMP—document these clearly, and you win recommendations for buyers who need them.
4. Professional services and support
Implementation, training, dedicated support—these matter for certain buyers. Don't bury them in generic "contact sales." Spell out what you offer: "45-day implementation with dedicated CSM" or "24/7 phone support with 2-hour SLA."
These are assets. The problem is that they're usually documented generically ("we offer support") instead of specifically ("for enterprise customers: dedicated CSM, custom onboarding, and 24/7 phone support with a 2-hour SLA").
Enterprise vs. startup: complementary problems
Startups fight to get into the conversation. Enterprises fight to stay in it after the buyer adds constraints.
Both problems are solvable. Startups need earned media to build the shortlist. Enterprises need specific use case documentation to win the bake-off.
The irony is that each group envies the other's position. Startups wish they had your brand recognition. You wish you had their agility and clear positioning. The reality is simpler: you both need to meet the model where it is. For startups, that's at the top of the funnel (visibility). For enterprises, it's in the middle (fit clarity).
Start with one use case
You don't need to document every possible scenario your platform serves. Start with one:
What's a buyer constraint that eliminates you today but shouldn't?
What's a use case you serve well but aren't known for?
What's a specific workflow where you're legitimately better than the focused startup alternative?
Pick one. Write the clear, specific, proof-rich documentation that makes it obvious you're the right fit. Publish it where models can find it. Measure whether the model's opinion shifts.
Then do it again.
Over time, this compounds. The model develops a more nuanced understanding of your platform: not just "enterprise tool" but "enterprise tool that also serves startups well" or "legacy brand with modern, streamlined workflows" or "comprehensive platform with surprisingly simple setup for [specific scenario]."
That nuance is what turns visibility into recommendations.
Want to see how AI models currently talk about your enterprise platform? We can show you where you're losing recommendations—and why. Book a consult.