Keller Maloney
Unusual - Founder
Jan 2, 2026
AI models have opinions about your brand. They hold beliefs about who you're for, what you're good at, and how you compare to alternatives. These opinions shape recommendations—and recommendations increasingly shape buying decisions.
The goal of AI brand alignment is to ensure those opinions match reality. Not to game the models, but to give them accurate, specific information so they can represent you fairly when buyers ask the right questions.
This is a practical guide to doing that work.
Start by understanding what models currently believe
You can't fix misalignment you haven't diagnosed. The first step is surveying AI models to understand how they perceive your brand today.
This isn't complicated. Open ChatGPT, Claude, Gemini, and Perplexity. Ask them questions a real buyer might ask. Take notes on what they say.
Start broad, then get specific:
Category questions. "What are the best tools for [your category]?" See if you're mentioned. Note where you appear in the list and how you're described.
Direct questions. "What is [your brand]? Who is it for?" Listen for how the model positions you. What attributes does it emphasize? What does it get wrong?
Comparison questions. "How does [your brand] compare to [competitor]?" Models often reveal their positioning beliefs most clearly in comparisons. Note which dimensions they compare on and where they place you.
Constraint questions. These matter most. "What's the best [category] tool for [specific use case]?" Add the constraints your ideal buyers actually have: team size, industry, integrations, compliance requirements, budget. See whether you get recommended—and if not, who does and why.
Follow-up questions. Don't stop at the first response. Ask "Why?" Ask "What about [specific feature]?" Ask "Is [your brand] a good fit for [scenario]?" The multi-turn conversation is where recommendations actually happen.
Run this survey across multiple models. They have different training data and different personalities. ChatGPT might recommend you confidently while Claude hedges. Perplexity might cite a specific source that's shaping its view. These differences are diagnostic.
What you're looking for are patterns: recurring misperceptions, consistent gaps, opinions that show up across models. These are your alignment problems.
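The survey above is easy to run by hand, but making it repeatable pays off later when you monitor changes over time. Here's a minimal sketch of a reusable question matrix; the brand, category, competitor, and constraint values are placeholders you'd replace with your own:

```python
def build_survey(brand, category, competitors, constraints):
    """Generate the diagnostic question set described above.

    All names passed in are placeholders -- substitute your own
    brand, category, competitors, and real buyer constraints.
    """
    questions = [
        # Category question: are you mentioned at all?
        f"What are the best tools for {category}?",
        # Direct question: how are you positioned?
        f"What is {brand}? Who is it for?",
    ]
    # Comparison questions, one per competitor
    questions += [f"How does {brand} compare to {c}?" for c in competitors]
    # Constraint questions: the queries ideal buyers actually ask
    questions += [
        f"What's the best {category} tool for {constraint}?"
        for constraint in constraints
    ]
    return questions

survey = build_survey(
    brand="Acme Docs",  # hypothetical product
    category="document automation",
    competitors=["CompetitorX", "CompetitorY"],
    constraints=["a 500-person legal team", "HIPAA-regulated workflows"],
)
for q in survey:
    print(q)  # ask each question in every model, then follow up multi-turn
```

Running the same generated list against every model each time is what makes the differences between models diagnostic rather than noise.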
Identify the opinions that matter
Not every misalignment is worth fixing. Models might hold dozens of beliefs about your brand, and you can't address them all at once. The question is: which opinions most directly affect the recommendations you want?
Work backward from your ideal buyer. What constraints do they have? What questions do they ask? What would make them choose you over alternatives?
Then ask: does the model know these things?
A few common patterns:
The "not enterprise-ready" problem. Models default to caution about lesser-known products. If you're a startup selling to enterprises, you might find that models recommend you for small teams but hesitate to suggest you for larger deployments—even if you have Fortune 500 customers. The opinion to change: "this is a serious platform, not a toy."
The "legacy" problem. Established brands often find that models describe them as comprehensive but complex, or powerful but dated. The model has absorbed years of Reddit threads and competitor marketing. The opinion to change: "this is modern and fits contemporary workflows."
The "wrong use case" problem. Sometimes models understand what you do but misunderstand who you're for. They might recommend you for the wrong industry, the wrong team size, or the wrong workflow. The opinion to change: fit clarity for your actual target segments.
The "missing differentiator" problem. Your key advantage—the thing that should win you the recommendation—simply isn't in the model's understanding. It doesn't know about your native integration with a specific platform, your unique compliance certification, or the approach you take that competitors don't. The opinion to change: awareness of the thing that makes you the right choice.
Pick two or three opinions that most directly block the recommendations you want. These are your alignment targets.
Audit the information landscape
Before creating new content, understand what information models are currently drawing from. When they make claims about you, where are those claims coming from?
Models with search capabilities (ChatGPT, Perplexity, Gemini) will often cite their sources. Pay attention. You'll typically see a mix of:
Your owned properties. Your website, documentation, blog posts. Is this content accurate and current? Does it say what you want models to retrieve?
Third-party directories and reviews. G2, Capterra, TrustRadius, industry-specific directories. What do your profiles say? Are they optimized for the positioning you want?
Comparison and listicle content. "Best X tools for Y" articles from publishers and affiliates. How do these describe you? Are they accurate?
Community discussions. Reddit threads, Stack Overflow, Hacker News, industry forums. What's the sentiment? What claims are being made?
News and PR coverage. Press releases, funding announcements, feature coverage. Is this current or outdated?
Make a list of the sources models are citing when they talk about you. These are the inputs to their opinions. Many alignment problems trace back to specific sources that are outdated, inaccurate, or positioning you in ways you've outgrown.
Sometimes the fix is updating existing content rather than creating new content. If your G2 profile still describes your product as it existed two years ago, that's a high-leverage edit.
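One way to keep this audit organized: collect the URLs models cite during your survey runs and tally them by domain, so you can see which handful of sources dominates their view of you. A small sketch (the example URLs are hypothetical):

```python
from collections import Counter
from urllib.parse import urlparse

def tally_cited_sources(cited_urls):
    """Count how often each domain appears among model citations."""
    domains = [urlparse(u).netloc.removeprefix("www.") for u in cited_urls]
    return Counter(domains)

# Hypothetical citations gathered from one survey run
citations = [
    "https://www.g2.com/products/acme-docs/reviews",
    "https://acmedocs.example/docs/integrations",
    "https://www.g2.com/compare/acme-docs-vs-competitorx",
    "https://old.reddit.com/r/legaltech/comments/abc123",
]
for domain, count in tally_cited_sources(citations).most_common():
    print(domain, count)  # g2.com appears twice: a high-leverage profile to update
```

The domains at the top of this list are where an outdated profile or stale comparison does the most damage.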
Create content that addresses specific opinion gaps
Now you're ready to build. The goal is to create assets that give models the specific information they need to update their opinions—and recommend you accurately.
The key word is specific. Generic marketing copy doesn't move model opinions. Models need concrete claims they can parse, verify against other sources, and retrieve when relevant queries arrive.
Here's what works:
Head-to-head comparison pages. Direct comparisons between you and specific competitors. Be honest about tradeoffs—models reward fairness and penalize obvious bias. Structure these around the dimensions buyers actually care about: pricing, features, integrations, use cases. Be literal: "Best for teams that need X" and "Better alternatives if you need Y."
Fit narratives. Content that explicitly states who you're for and who you're not for. "Built for [specific segment]" pages that detail why your product fits that profile. The more specific the better: team size, industry, workflow, technical requirements. Models use this to match you to constrained queries.
Integration and compatibility documentation. If native integration is part of your value proposition, spell it out in terms a model can parse and repeat. Not "seamless integration" but "works directly inside Microsoft Word without switching applications." Not "connects with your stack" but "native Salesforce integration with bi-directional sync for Accounts, Contacts, and Opportunities." Versions, formats, steps.
Credibility documentation. The signals that establish you as a serious option. Customer logos with context (not just logos, but "trusted by [Company] for [use case]"). Case studies with specific outcomes. Compliance certifications with what they actually mean. Implementation timelines with real numbers. This is the evidence that overcomes the "not enterprise-ready" objection.
FAQ and objection-handling content. Address the specific hesitations models express. If they say you're "great for small teams but might not scale," create content that directly addresses scale with evidence. If they're uncertain about a particular integration, document it explicitly.
Each piece of content should target a specific opinion gap you've identified. The question to ask before publishing: "If a model reads this, will it update the belief I'm trying to change?"
Structure content for model comprehension
How you format content matters. Models read semantically, but clear structure helps them parse and retrieve information accurately.
Use literal titles. "How [Your Product] Integrates with Salesforce" beats "Supercharge Your Sales Workflow." Models use titles to categorize content and match it to queries. Clever wordplay obscures meaning.
Lead with the key claim. Don't bury the important information. If the page is about enterprise readiness, say so in the first paragraph. Models extract information from throughout a page, but prominence helps.
Use clear headers that summarize sections. Headers act as signposts. "[Product] vs [Competitor]: Pricing Comparison" tells the model exactly what follows.
Include specific, verifiable details. Numbers, versions, dates, names. "Processes 10,000 documents per hour" is retrievable. "Blazing fast" is not.
Link to corroborating sources. When you make a claim, link to evidence. Case study links, documentation links, third-party validation. Models triangulate across sources; internal links help them find your corroborating content.
Keep pages focused. A page that tries to cover everything gives the model diluted signals. A page focused on one topic—one comparison, one use case, one integration—gives a clear signal the model can retrieve for specific queries.
Choose where to publish
Models read the open web, but not all surfaces are equally useful for alignment work.
The practical considerations:
Crawlability. Can search-enabled models find and read this content? Avoid putting content behind login walls, serving it in JavaScript-heavy formats that don't render without a browser, or blocking it in robots.txt.
Iteration speed. Alignment is ongoing work. You'll want to update content as you learn what's working. Surfaces where you can edit quickly are better than surfaces with slow publishing cycles.
Consistency. Your alignment content needs to agree with your other properties. If your comparison page says one thing and your documentation says another, models notice the contradiction.
Some companies create dedicated subdomains for AI-facing content (ai.yourcompany.com or llm.yourcompany.com). This lets them iterate quickly, use AI-optimized formatting, and keep this content separate from their main site's design language and SEO considerations.
Others publish everything through their existing CMS and blog. This works fine as long as you can publish and update efficiently.
The non-negotiables: content must be crawlable, editable, and consistent with your other properties.
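Crawlability, at least, is easy to verify mechanically. Python's standard library can parse a robots.txt and report whether the user agents AI crawlers announce (GPTBot, ClaudeBot, and PerplexityBot are real crawler names; the robots.txt content below is a hypothetical example) are allowed to fetch a given page:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt -- in practice, fetch your own from
# https://yourcompany.com/robots.txt
robots_txt = """\
User-agent: GPTBot
Disallow: /internal/

User-agent: *
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

page = "https://yourcompany.example/compare/vs-competitorx"
for bot in ["GPTBot", "ClaudeBot", "PerplexityBot"]:
    allowed = rp.can_fetch(bot, page)
    print(f"{bot}: {'allowed' if allowed else 'BLOCKED'}")
```

If a comparison page you're counting on shows up as blocked for these agents, no amount of content quality will help: the models simply won't see it.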
Update your existing web presence
New content is only part of the solution. Often the highest-leverage work is updating what you've already published.
Your homepage and product pages. Do these clearly state who you're for and what makes you different? Models often check these first when asked about your brand directly.
Your documentation. Technical docs are heavily weighted by models evaluating product capabilities. If your docs are outdated or unclear, that affects model perception.
Your G2 and directory profiles. These are frequently cited sources. Are they current? Do they emphasize your actual positioning? Have you responded to reviews (especially critical ones) in ways that provide useful context?
Your comparison pages. If you already have competitor comparisons, audit them. Are they fair and current? Do they address the dimensions models use when comparing you?
Your blog and resource content. Old blog posts with outdated information can still surface in model training and search. Consider updating or adding context to evergreen content.
Your press and news coverage. You can't edit what journalists write, but you can ensure your newsroom and press materials reflect your current positioning.
Think of this as cleaning up the information environment. Every accurate, current source makes it easier for models to form accurate opinions.
Extend your presence to third-party sources
Models don't just read your owned properties. They consult the broader web—reviews, comparisons, community discussions, expert content. Your alignment strategy should address these surfaces too.
Review platforms. Encourage customers to leave reviews on G2, Capterra, and industry-specific platforms. Respond to existing reviews with useful context. Make sure your company profiles are complete and current.
Community presence. Participate authentically in Reddit, Stack Overflow, and industry forums where your product is discussed. Answer questions helpfully. Correct misinformation when you see it. This isn't about astroturfing; it's about ensuring accurate information exists in places models consult.
Comparison and listicle content. Identify the "best X tools" articles that rank well and show up in model citations. Some publishers accept outreach for corrections or updates. Others might be open to sponsored content or partnerships. At minimum, know what these articles say about you.
Expert and analyst content. Gartner, Forrester, industry analysts, influential bloggers. Their assessments carry weight with models. Ensure they have current, accurate information about your product.
Wikipedia. If your company has a Wikipedia page, make sure it's accurate and current. Wikipedia is a heavily weighted source for models answering "what is [company]" questions. (Editing requires following Wikipedia's guidelines—notable sources, neutral point of view, no promotional language.)
The goal is corroboration. When multiple independent sources tell the same true story about your brand, models can confidently retrieve and repeat that story.
Monitor and iterate
Alignment isn't a one-time project. Models update. Competitors move. Your product evolves. The work continues.
Set up a regular cadence—monthly or quarterly—to repeat your diagnostic survey. Ask the same questions across models. Track changes in how they respond.
What to monitor:
Recommendation share. For your target scenarios (the specific prompts with specific constraints that represent your ideal buyers), how often do you get recommended? Is this improving?
Opinion stability. Are the opinions you've worked to change actually changing? Are they staying changed, or reverting?
New misalignments. As your product evolves or market conditions shift, new opinion gaps will emerge. Catch them early.
Source citations. When models recommend you (or don't), what sources are they citing? This tells you which content is working and where gaps remain.
Competitor movement. How are competitors' recommendations changing? If a competitor suddenly starts winning scenarios you used to win, investigate what changed.
Some companies build internal dashboards to track these metrics systematically. Others do manual audits. The important thing is consistency—same questions, same models, same methodology—so you can detect real changes over time.
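Recommendation share in particular is straightforward to compute consistently once your transcripts are stored. A sketch: given the same prompts run against the same models each cadence, count the fraction of responses that mention your brand (plain substring matching here; real responses may need fuzzier matching, and all names below are hypothetical):

```python
def recommendation_share(transcripts, brand):
    """Fraction of survey responses that mention the brand.

    `transcripts` maps (model, prompt) -> response text, e.g. the
    stored output of one monthly diagnostic run.
    """
    if not transcripts:
        return 0.0
    hits = sum(brand.lower() in resp.lower() for resp in transcripts.values())
    return hits / len(transcripts)

# Hypothetical monthly run: two models, two constraint prompts
march = {
    ("chatgpt", "best docs tool for legal teams"): "Top picks: Acme Docs, CompetitorX...",
    ("claude", "best docs tool for legal teams"): "CompetitorX is a strong choice...",
    ("chatgpt", "HIPAA-compliant docs tool"): "Acme Docs supports HIPAA...",
    ("claude", "HIPAA-compliant docs tool"): "Consider CompetitorY...",
}
print(recommendation_share(march, "Acme Docs"))  # 0.5 this month
```

Tracking this one number per target scenario, month over month, turns "are the opinions changing?" from an impression into a trend line.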
When you find something that's working (a comparison page that's getting cited, a documentation update that changed model opinions), do more of it. When something isn't working, investigate why and try a different approach.
The compounding effect
AI brand alignment compounds in a way that advertising doesn't.
When you run an ad, you rent attention for a moment. When you build aligned content, you create a persistent asset that shapes model opinions across millions of conversations, without ongoing spend.
Each piece of accurate, specific, well-structured content becomes an argument that works on your behalf. The comparison page you publish today will be retrieved by models answering questions for months or years. The documentation you update will inform every model-assisted evaluation of your product.
And because models triangulate across sources, aligned content strengthens itself. Your comparison page corroborates your documentation corroborates your G2 profile corroborates the Reddit thread where you answered a question helpfully. The model sees consistency and responds with confidence.
This is the payoff for doing the work honestly. You're not trying to trick models into believing something false. You're giving them accurate information so they can represent you fairly. That's a durable advantage—one that becomes more valuable as AI models become the primary way buyers discover and evaluate products.
The brands that build this advantage now will compound it over time. The ones that wait will find themselves playing catch-up against competitors who've already shaped the narrative.
Start with the diagnosis. Find the opinions that matter. Build the content that changes them. Then keep going.