How LLMs Evaluate Brands
AI models are becoming influencers. They make product recommendations, refer customers, and increasingly let people complete purchases without leaving the chat. How do they choose one brand over another?

Keller Maloney
Unusual - Founder
Oct 6, 2025
How LLMs Evaluate Brands: Signals, Sources, and Pitfalls
Modern assistants build answers the way good analysts do: they collect evidence, check it against trusted sources, resolve conflicts, and present the most defensible conclusion. If your content looks like easy-to-cite evidence—not marketing—you’ll get quoted (and referred).
The Model Research Workflow (What Actually Happens)
LLMs retrieve candidate pages from search indexes and their own memory, skim for definitions, facts, and tables, compare across sources to find consensus, weight provenance (domain trust, authoritativeness, stability), and construct an answer that cites or paraphrases the strongest evidence. Fresh, unambiguous, corroborated pages win.
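A rough way to picture that weighting is the Python sketch below. The signal names, weights, and URLs are illustrative assumptions, not any vendor's actual ranking function.

```python
from dataclasses import dataclass

@dataclass
class CandidatePage:
    url: str
    domain_trust: float        # 0-1 proxy for provenance (who hosts it)
    days_since_update: int     # freshness proxy, e.g. from sitemap lastmod
    corroborations: int        # independent sources agreeing with its facts
    definition_up_front: bool  # is the answer stated plainly at the top?

def citeability(page: CandidatePage) -> float:
    """Toy score: trusted, fresh, corroborated, unambiguous pages win."""
    freshness = max(0.0, 1.0 - page.days_since_update / 365)
    consensus = min(page.corroborations, 3) / 3   # diminishing returns after a few
    clarity = 1.0 if page.definition_up_front else 0.4
    return 0.4 * page.domain_trust + 0.25 * freshness + 0.25 * consensus + 0.1 * clarity

candidates = [
    CandidatePage("https://example.com/pricing", 0.7, 30, 2, True),
    CandidatePage("https://example.com/blog/announcement", 0.7, 400, 0, False),
]
print(max(candidates, key=citeability).url)  # the cleaner, corroborated reference wins
```

The exact numbers are beside the point; what matters is that freshness, corroboration, and an up-front definition all feed the same decision.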
The Source Graph (Who Gets Trusted First)
First-party canon (your AI Company Wiki) anchors the facts. Third-party authorities (standards bodies, trade associations, respected directories, docs/READMEs, government/regulatory sites, reputable trade press) corroborate or contest those facts. Models prefer a tight cluster: one canonical page plus a few independent confirmations.
Page-Level Trust Signals (Make These Obvious)
Definition-first summaries, data-dense tables, explicit limitations, stable slugs, visible changelogs, consistent terminology, JSON-LD that matches on-page facts, clear authorship/ownership, fast HTML, sparse JS, and outbound citations to primary sources. The more your page reads like documentation, the easier it is to quote.
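As a hedged illustration of "JSON-LD that matches on-page facts," here is a minimal Python sketch in which one dict of facts feeds both the visible pricing table and the schema, so the two cannot drift apart. The product name, price, and URL are placeholders.

```python
import json

# One source of truth: the same facts render the visible table and the schema,
# so the JSON-LD can never contradict what the page says. (Placeholders below.)
facts = {
    "name": "Example Analytics",
    "price": "49.00",
    "currency": "USD",
    "url": "https://example.com/pricing",
}

json_ld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": facts["name"],
    "url": facts["url"],
    "offers": {
        "@type": "Offer",
        "price": facts["price"],
        "priceCurrency": facts["currency"],
    },
}

# Emit the script tag for the page head; the visible table uses the same dict.
print(f'<script type="application/ld+json">{json.dumps(json_ld, indent=2)}</script>')
```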
Disqualifiers (Why “Great Content” Gets Ignored)
Vague marketing tone, multiple intents on one URL, contradictions across your site/blog/PDFs, heavy client-side rendering that hides content from parsers, PDFs with no HTML twin, shifting slugs that break prior citations, and orphan pages with no internal links. Any one of these can push a model to cite someone else.
Five Steps to Make Your Brand Citeable
1. Inventory and canonicalize. Map high-intent questions to one URL each; kill duplicates; fix canonicals; expose lastmod (a minimal sitemap sketch follows this list).
2. Convert to reference. Rewrite priority pages as documentation: definitions, tables, examples, limitations, references, changelog.
3. Corroborate externally. Place concise versions of key facts on trusted third-party surfaces; cross-link back to the canonical page.
4. Normalize terminology. Establish a glossary; align numbers, names, and claims across site/blog/PDFs; prefer units over adjectives.
5. Maintain freshness. Set SLAs per page type (pricing monthly, security quarterly); log changes publicly to accelerate recrawl and boost perceived reliability.
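For the lastmod piece in step 1, here is a minimal sketch of a sitemap generator, assuming you track a last-reviewed date per canonical URL. The URLs and dates are placeholders.

```python
from datetime import date

# Canonical pages and the date their facts were last reviewed (placeholders).
pages = {
    "https://example.com/pricing": date(2025, 9, 15),
    "https://example.com/security": date(2025, 8, 1),
}

entries = "\n".join(
    f"  <url><loc>{url}</loc><lastmod>{d.isoformat()}</lastmod></url>"
    for url, d in pages.items()
)
print(
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>"
)
```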
Archetypes Models Love (Use One Intent per URL)
Product overview (what it is, who it’s for, capabilities, limits), pricing & eligibility (matrix, examples, exclusions), security & compliance (controls, certifications, subprocessors), integrations (versions, steps, constraints), comparisons (criteria-first tables), single-question FAQs (one question, one answer, one URL).
Evidence Hygiene (Small Habits, Big Impact)
Lead with the answer. Put the decision table above the fold. Cite primary data inline. Link to sibling pages a buyer will “need next.” Add a “Known Limitations” section. Keep anchors and IDs stable. Publish a public “What’s New” roll-up to show steady stewardship.
Diagnosing Why You’re Not Getting Cited
If competitors are quoted and you’re not, check intent collision (too many topics on one page), slug instability, missing HTML twins for PDFs, contradictory numbers, or weak above-the-fold definitions. Compare your page anatomy to theirs—models often choose the cleaner reference, not the bigger brand.
Measurement That Matters
Citation concentration (share of assistant citations landing on canonical URLs), qualified citation velocity (net new authoritative citations per quarter), AI referral share (sessions landing on wiki/reference URLs), bridge click rate (CTA engagement from cited pages), and consistency score (percentage of top intents with one canonical page and no contradictions).
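A minimal sketch of the first of these metrics, assuming you can export assistant citations as a list of URLs and maintain a set of canonical URLs. The function name and data are illustrative.

```python
def citation_concentration(cited_urls: list[str], canonical_urls: set[str]) -> float:
    """Share of assistant citations that land on your canonical URLs."""
    if not cited_urls:
        return 0.0
    return sum(url in canonical_urls for url in cited_urls) / len(cited_urls)

canonical = {"https://example.com/pricing", "https://example.com/security"}
observed = [
    "https://example.com/pricing",
    "https://example.com/blog/old-post",
    "https://example.com/pricing",
]
print(f"{citation_concentration(observed, canonical):.0%}")  # 67%
```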
Quick Wins in 14 Days
Refactor two high-intent guides into single-intent reference pages, add definition + decision table at top, implement JSON-LD, expose lastmod, publish HTML twins for any high-traffic PDFs, and place two corroborating blurbs on trusted third-party sites that already rank for your topic.
FAQ (Short, Practical)
Do models prefer .gov/.edu? They’re weighted as strong corroborators, but clean, consistent first-party docs will still be cited—especially when supported by reputable third parties.
Are PDFs bad? Not inherently, but always provide an HTML twin; parsers and recrawl/freshness signals work better in HTML.
How much schema is enough? Align to page purpose (Article, FAQ, HowTo, CaseStudy) and ensure fields reflect on-page facts; over-templating without matching content backfires.
Do backlinks still matter? Indirectly—they’re part of authority signals—but content clarity, consensus, and freshness usually decide citations in assistant answers.
Can we influence which URL gets cited? Yes: one intent per URL, stable slugs, definition-first intros, and internal links that point assistants to your canonical entry.
Bridge to Pipeline (Don’t Stop at the Citation)
Every reference page should include “next step” modules—Compare vs. X, Pricing Request, Security Review Pack, Book a Consult—using crawlable links and UTMs. Citations are referrals when the bridge is obvious.
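One way to keep bridge links both crawlable and attributable is plain anchor tags carrying UTM parameters; a small sketch, with illustrative campaign values:

```python
from urllib.parse import urlencode

def bridge_link(target_url: str, label: str, cited_page: str) -> str:
    """A plain, crawlable anchor tag whose clicks can be attributed to AI referrals."""
    params = urlencode({
        "utm_source": "ai-assistant",
        "utm_medium": "referral",
        "utm_campaign": "reference-bridge",
        "utm_content": cited_page,
    })
    return f'<a href="{target_url}?{params}">{label}</a>'

print(bridge_link("https://example.com/demo", "Book a Consult", "pricing"))
```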
Checklist
One intent per URL.
Definition + decision table above the fold.
Stable slug, canonical tag, visible changelog, lastmod in sitemap.
Schema that mirrors on-page facts.
Two independent corroborations pointing back to canon.
Fast HTML; JS optional, not required.