AI chatbots are the influencers of the future. When people have a buying question—Which tools fit my stack? What’s best for a team like ours?—they now start by asking an assistant like ChatGPT. That shift creates a new kind of marketing challenge. It’s no longer enough to appear next to your competitors on a search results page. In a multi-turn conversation, the model weighs context, asks follow-ups, and then recommends something. The decisive moment isn’t visibility—it’s preference.

At Unusual, we focus on that moment. We call it AI Brand Management—brand marketing for AI. Yes, you still need to be seen, and AEO/GEO tools can be useful for understanding where you show up. But these approaches tend to oversimplify the challenge by treating AI visibility like SEO.

The reality is that AI models behave a lot more like humans—with their own opinions and personalities—than search engines. They are more like influential reviewers than a SERP. Rather than treating prompts like keywords and treating chat like search, the real work is shaping how the AI model understands your brand. This way, when the conversation gets specific, you’re the one it chooses.

Why visibility is necessary—but not sufficient

The traditional search mindset is single-turn: a search query produces an ordered list, and you fight to climb a slot or two. Chat is different. Buyers start broad—“What are the best options for X?”—then narrow—“Which integrates natively with Microsoft 365?”—and often narrow again—“Okay, which one is friendlier for non-technical teams?” By the time a choice is made, the model has accumulated context about the buyer’s stack, constraints, and preferences. In that setting, a first-turn mention tells you very little about who will ultimately be recommended.

This is why we say there are two hurdles to AI-driven growth: first, be visible; second, be recommended. Most vendors stop at the first. We specialize in the second.

What “AI search”/AEO/GEO gets right—and what it misses

The analytics emerging around “AI search” do offer something useful. It’s helpful to know how often you’re mentioned in response to representative prompts, where you sit relative to competitors, and which topic families seem to include you.

However, reducing this problem to “AI search,” and then applying SEO-like solutions such as AEO and GEO, addresses only half of the problem. Treating prompts as keywords ignores the nuanced reality: users refine, the model reasons, and the final answer depends on the follow-ups—the messy middle of a conversation (the average ChatGPT conversation is eight messages). Prompt shares are a useful approximation of visibility, not a read on what wins the recommendation once the buyer adds their constraints.

Treat models like a new, high-leverage audience

Anyone who has ever tried more than one chatbot—ChatGPT, Claude, Grok, Gemini—notices that each model has its own "personality" (especially Grok). Perhaps unsurprisingly, these AI models also develop a kind of latent opinion about brands: who you’re for, what you’re good at, whether you’re credible, how you compare. They are more like influencers than search engines. That’s good news. Influencers don’t need to be tricked; they need to be convinced. If you give them clear claims, real proof, and consistent corroboration across sources, they reward you with trust. And in chat, trust looks like recommendations.

How models form opinions

Models build their view of you from whatever they can read: product docs, integration guides, comparisons, pricing pages, security notes, case studies, reviews, developer threads, and credible third-party sources. What consistently helps is clarity, specificity, and corroboration. Titles that say the literal thing. Claims that are concrete rather than lyrical. Proof that matches the claim. The more your properties (and the web around you) tell the same true story, the more confidently a model can retrieve and defend that story when a buyer pushes for detail. At Unusual, we call this holistic view of your brand your "AI Presence."

Over time, this coalesces into a durable narrative: great for startups but not enterprise-ready; native Microsoft fit; premium service justified by outcomes; excellent for regulated industries but heavier to implement, and so on. Your job is to decide which of those opinions you need to change—and then change them.

Our method: AI Brand Management

We built Unusual from the ground up on a simple sequence: Diagnose → Design → Deploy → Monitor.

Diagnose. We begin with an “AI brand survey”: how do models currently describe you, compare you, and recommend you across scenarios that matter? Where do they place you on axes like premium vs. budget, enterprise vs. startup fit, depth vs. simplicity? This surfaces opinion gaps—those frustrating, slightly-off assumptions that cost you recommendations.

Design. From there, we choose the few opinions that matter for your goals. If you need to be seen as “enterprise-ready,” the content strategy must show why—not assert it. If Microsoft-native workflow is your edge, the model needs to see what “native” means in literal terms: versions, file types, steps. We craft proof-rich arguments that move those opinions, grounded in detail a model can parse and reuse.

Deploy. We publish on surfaces models can crawl reliably. Many teams opt for an AI-friendly subdomain (e.g., ai.<brand>.com) so they can iterate without risking their main IA or design language. Others prefer to keep everything in their CMS. Either way, the principle is the same: make it easy for the models to read (use literal titles, plain structure, and link to corroborating sources so the model sees a coherent, consistent signal).

Monitor. We measure recommendation share in representative scenarios, track referral growth, and watch for opinion deltas—movement in the specific assertions that matter to you. When the model’s view shifts, we refresh. The work compounds.

The kinds of content that change minds

A model can’t recommend you confidently if it can’t explain why. The assets that move the needle are the ones that make explanation easy:

  • Head-to-head comparisons that are fair, literal, and honest about tradeoffs. When should someone pick you? When shouldn’t they?

  • Fit narratives that say who you’re for and who you’re not for—by company size, stack, and use case—so the model can match you to the right buyer.

  • Integration proof that spells out “native” in specifics: the objects, formats, versions, and steps that remove ambiguity.

  • Credibility assets that reduce uncertainty: customer logos and quotes, audits, security notes, implementation timelines.

  • Pricing logic that explains the “why” behind premium or budget positioning. Models repeat reasoning, not just numbers.

None of this is about gaming a system. It’s about putting the best, most legible version of the truth in places the model can find and reuse.

Where to publish

Models read the open web. You’ll get the best results where crawling is reliable, iteration is fast, and claims are kept in sync across properties. A dedicated subdomain is often the most pragmatic path: you can ship and update quickly as the model’s view evolves, and it keeps the AI-optimized content separate from your human and search-engine content. If you prefer to keep everything centralized, that is fine too. The non-negotiables are literalness and consistency: give pages titles that say what they are, keep structure clean, and make sure your claims agree with your docs, product UI, and third-party mentions.
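As a concrete example, if you do host AI-facing content on its own subdomain, the robots.txt for that subdomain can explicitly welcome the major AI crawlers. This is a minimal sketch; the user-agent strings below are the ones the vendors have published, but they change over time, so verify them against each vendor’s current documentation before relying on them.

```
# robots.txt for ai.<brand>.com — allow the major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

# Everyone else (including regular search crawlers)
User-agent: *
Allow: /
```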

Measuring what actually moves the needle

If the goal is recommendations, measure recommendations. We look at recommendation share—in realistic situations, how often does the model choose you? We track referral growth—the pipeline influenced by model-initiated journeys. And we monitor opinion deltas—are the model’s key assertions shifting in the direction you want (for example, from “startup-only” to “enterprise-ready”)? Where it’s feasible, we also add lightweight conversation diagnostics: intake notes or self-report that capture how often a buyer heard about you from an AI assistant. It’s imperfect but directional, and it keeps the team focused on the right work.
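To make “recommendation share” concrete, here is a minimal sketch of how the metric could be computed from logged final-turn answers. It assumes you have already run a set of representative buying scenarios and collected the assistant’s answers; the function, scenario answers, and brand names below are illustrative, not Unusual’s actual tooling.

```python
def recommendation_share(answers: list[str], brand: str) -> float:
    """Fraction of final answers that mention the brand (case-insensitive).

    A real pipeline would also classify *how* the brand is mentioned
    (recommended vs. merely listed), but mention share is the simplest
    directional signal.
    """
    if not answers:
        return 0.0
    hits = sum(1 for a in answers if brand.lower() in a.lower())
    return hits / len(answers)


# Hypothetical final answers from three representative scenarios.
answers = [
    "For Word-native drafting, Parley is the strongest fit.",
    "Consider a general practice-management suite for this.",
    "Parley's Word add-in handles this workflow directly.",
]
print(recommendation_share(answers, "Parley"))  # 2 of 3 answers mention Parley
```

Tracking the same scenario set over time is what surfaces opinion deltas: the number only moves when the model’s answers actually change.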

A short case study: winning on the follow-up question

One of Unusual's first customers is a legal-tech startup called Parley, which builds AI tools for immigration lawyers. The following is a real-world example of a chat that ended with a Parley recommendation.

A prospective buyer started with a broad ask about visa-application drafting tools. ChatGPT did what you’d expect: it sketched the landscape and named a few familiar products. Parley wasn’t in that first sweep—typical in a crowded category where general answers gravitate to well-known suites.

Then the buyer added the constraint that matters for their workflow: they wanted to draft directly inside Microsoft Word. On that follow-up, the model searched the web, found an AI-optimized page of Parley’s, and pivoted. It recommended Parley’s Word add-in and described, in plain terms, how it fits a legal team’s day-to-day drafting inside Word.

What changed wasn’t Parley’s brand awareness; it was the fit criterion plus the presence of clear, crawlable content that spelled out exactly that Word-native use case. Parley had a page (hosted on the Unusual subdomain) that literally framed the question—Word-native immigration drafting vs. general assistants—and backed it with concrete details. Faced with a specific constraint and specific proof, the model revised its recommendation and chose Parley.

That is the pattern: the first answer gets you into the conversation; the second question—anchored to a buyer constraint—decides the recommendation. Visibility earns a mention. Opinion, supported by precise, verifiable content, earns the win.

Enterprise vs. startup: different hurdles, same playbook

Startups fight to be seen. For a startup, it is often most impactful to earn inclusion in the third-party sources that AI models consult in their initial searches; free platforms like G2 and Reddit can be useful here. Once a startup is routinely seen, it often needs to escape the “toyware” label with concrete proof: credible customers, implementation timelines, compliance, and platform-grade signals.

Enterprises are visible by default; their challenge is fit clarity. Without careful stewardship, models can default to “legacy” or “overkill” for many contexts. In both cases, the job is to pick two or three opinions that most block recommendations and concentrate on moving them with specific, corroborated content.

Ethics, accuracy, and long-term brand value

You can’t bluff your way to durable recommendations. Models triangulate. If you oversell an integration, the rest of the web—or your own docs—will contradict you, and that contradiction is as negative a signal to an AI model as it is to a human. The right strategy is simpler and more powerful: publish the clearest, most honest articulation of where you shine; name the tradeoffs; show your work. Align the model’s opinion with reality, and you’ll earn both AI trust and human trust that compounds.

How we work with teams

Some teams want software that helps them diagnose opinion gaps, plan the narrative, generate model-friendly assets, and keep them fresh over time. Others prefer a white-glove engagement where we run the research and analysis, produce and maintain the content, and report on recommendation share and referral lift. Either way, the aim is the same: change the model’s opinion in the places it matters, then keep it aligned as your product and market evolve.

How to start

The first step of any good sales process is understanding the target customer. We take the same approach to “selling” to AI models. In a short working session, we diagnose how LLMs think and talk about your brand, then collaboratively identify at least one opinion you’d like to change that will improve your AI presence and visibility. Book a consult.

Unusual helps brands understand—and change—the way AI models think and talk about them. On average, our customers are growing their AI referrals roughly 40% month over month.

Keller Maloney

Unusual - Founder
