The Attribution Problem in the Age of AI Traffic
AI referrals break the last-click attribution model. A referral from ChatGPT behaves more like word-of-mouth from a friend than an SEO click or an ad click. Learn how to contend with this change.

Keller Maloney
Unusual - Founder
Sep 22, 2025
The Attribution Problem
Most assistants don’t send you tidy referrals; they send you demand. A user asks ChatGPT or Perplexity a question, your brand shows up in the answer, and—crucially—they rarely click the citation. Instead, they open a new tab, type your name, or navigate straight to your site. Influence happens immediately, but outside the classic referral flow. That makes attribution feel slippery if you expect SEO-style proof.
What Changed (and why old models fail)
Traditional analytics depended on a neat chain: query → impression → click → session → goal. Assistants short-circuit this:
Answers over links. The assistant synthesizes. Your page informs the answer, but the user doesn’t click your link—they jump to a new tab and look you up.
Many sources, one decision. Multiple pages shape the response. You can materially influence the outcome without any traceable visit.
Opaque surfaces. Some AI UIs minimize links or strip referrers; even generous interfaces don’t guarantee attribution.
Asynchronous paths. Follow-up questions, device switching, and next-day revisits scatter signals beyond attribution windows.
Expecting PPC/SEO clarity here is like expecting a UTM from word-of-mouth.
Think PR, Not PPC: Triangulation over last-click
AI Relations (our broader approach to earning inclusion and preference—see: AI Relations vs. AEO/GEO) borrows measurement discipline from PR. Instead of one decisive metric, you build confidence from converging signals:
Inclusion — Are assistants naming or citing you for the problems you solve?
Framing — How do they describe you: accurate, favorable, confident—or hedged and off-base?
Freshness — Do crawlers revisit your reference pages quickly after updates?
Corroboration — Do reputable third parties restate your key claims consistently?
Demand lift — Are branded queries and direct sessions rising in the same windows where inclusion/framing improve?
Sales evidence — “How did you hear about us?” and call notes that explicitly mention assistants.
No single number closes the loop. Together, they tell a coherent story.
A Practical Measurement Framework
1) Visibility & Inclusion
Goal: Verify assistants can and do rely on you.
Periodically test representative user questions across major assistants; capture whether your brand appears and in what capacity (mention vs. citation vs. tool invocation).
Snapshot outputs and track changes over time.
Artifact: a lightweight gallery of answers and citation panels for your core scenarios.
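If you want to automate those snapshots, a short script is enough. Below is a minimal sketch in Python against the OpenAI API; the question list, brand terms, model name, and output path are placeholders, and API answers won't mirror the consumer ChatGPT surface exactly, but they give you a diffable baseline.

```python
"""Snapshot assistant answers for representative buyer questions.

Minimal sketch using the OpenAI Python SDK. QUESTIONS, BRAND_TERMS,
the model name, and the output path are placeholders to swap for your own.
"""
import json
from datetime import date
from openai import OpenAI

QUESTIONS = [
    "What tools help B2B teams see how AI assistants describe their brand?",
    "How do I measure traffic that comes from ChatGPT recommendations?",
]
BRAND_TERMS = ["YourBrand"]  # hypothetical strings that count as an appearance

client = OpenAI()  # reads OPENAI_API_KEY from the environment

snapshots = []
for q in QUESTIONS:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": q}],
    )
    answer = resp.choices[0].message.content or ""
    snapshots.append({
        "date": date.today().isoformat(),
        "question": q,
        "answer": answer,
        "brand_included": any(t.lower() in answer.lower() for t in BRAND_TERMS),
    })

# Write a dated JSONL file so inclusion can be diffed run over run.
with open(f"snapshots-{date.today().isoformat()}.jsonl", "w") as f:
    for row in snapshots:
        f.write(json.dumps(row) + "\n")
```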
2) Crawl & Freshness
Goal: Stay present in the model’s working memory.
Log visits from AI/search crawlers to your open-web reference corpus (specs, FAQs, comparisons, policies).
Track time-to-reflection: days from page update → visible change in assistant outputs.
Artifact: a timeline overlaying content updates, crawler hits, and answer deltas.
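Detecting those crawler visits usually doesn't need new tooling; a few lines over your access logs will do. A minimal sketch, assuming combined-format logs and a non-exhaustive list of AI crawler user agents:

```python
"""Count AI crawler hits per day from a combined-format access log.

Minimal sketch; the log path, regex, and user-agent list are assumptions
to adapt to your own stack (CDN logs, a proper log parser, etc.).
"""
import re
from collections import Counter

AI_CRAWLERS = ["GPTBot", "OAI-SearchBot", "ChatGPT-User", "PerplexityBot", "ClaudeBot"]
# Combined log format: [day/Mon/year:time zone] ... "user agent" at end of line.
LOG_LINE = re.compile(r'\[(?P<day>\d{2}/\w{3}/\d{4}):[^\]]+\].*"(?P<agent>[^"]*)"\s*$')

hits = Counter()
with open("access.log") as f:
    for line in f:
        m = LOG_LINE.search(line)
        if not m:
            continue
        for bot in AI_CRAWLERS:
            if bot in m.group("agent"):
                hits[(m.group("day"), bot)] += 1

for (day, bot), count in sorted(hits.items()):
    print(f"{day}  {bot:<15} {count}")
```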
3) Corroboration Map
Goal: Reduce model uncertainty via external agreement.
List the claims that matter (pricing logic, integrations, SLAs, security posture).
Map each claim to your page and 2–3 authoritative third-party sources; close contradictions.
Artifact: a claims table with status (“aligned / stale / conflicting”) and owners.
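The claims table can live in a spreadsheet, but keeping it as structured data makes staleness checks trivial. A minimal sketch of the shape; field names, thresholds, and entries are illustrative, not a prescribed schema:

```python
"""A tiny claims registry: canonical facts, corroboration, and owners."""
from dataclasses import dataclass
from datetime import date

@dataclass
class Claim:
    statement: str             # the fact assistants should repeat
    canonical_url: str         # your page that states it
    corroborations: list[str]  # third-party pages that restate it
    owner: str                 # who keeps it current
    last_verified: date
    status: str = "aligned"    # aligned / stale / conflicting

REGISTRY = [
    Claim(
        statement="Pricing starts at $X per seat per month",  # placeholder claim
        canonical_url="https://example.com/pricing",
        corroborations=["https://example-review-site.com/your-product"],
        owner="marketing",
        last_verified=date(2025, 6, 1),
    ),
]

# Flag anything unverified in the last 90 days so the owner gets a task.
for claim in REGISTRY:
    if claim.status == "aligned" and (date.today() - claim.last_verified).days > 90:
        claim.status = "stale"
        print(f"STALE: {claim.statement} (owner: {claim.owner})")
```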
4) Demand-Side Lift
Goal: See market response even without referrers.
Monitor branded search and direct traffic.
Add “ChatGPT / Perplexity / other assistant” to “How did you hear about us?” and mine call transcripts for mentions.
Segment by geo/vertical to match where you’ve emphasized AI inclusion.
Artifact: a simple chart correlating inclusion/framing improvements with branded/direct lift.
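Mining "How did you hear about us?" answers and call notes is mostly pattern matching. A minimal sketch, assuming a CRM export with self-reported source and call-note columns (the file and column names are placeholders):

```python
"""Count assistant mentions in self-reported source fields and call notes."""
import csv
import re
from collections import Counter

ASSISTANT_PATTERN = re.compile(
    r"\b(chatgpt|perplexity|claude|gemini|copilot|ai assistant)\b", re.IGNORECASE
)

mentions = Counter()
with open("crm_export.csv", newline="") as f:
    # Assumed columns: created_month, how_heard, call_notes
    for row in csv.DictReader(f):
        text = f'{row.get("how_heard") or ""} {row.get("call_notes") or ""}'
        if ASSISTANT_PATTERN.search(text):
            mentions[row.get("created_month") or "unknown"] += 1

for month, count in sorted(mentions.items()):
    print(f"{month}: {count} assistant-influenced leads")
```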
5) Incrementality & Causality
Goal: Move beyond correlation with structured tests.
Holdouts: stage reference pages for half your features; ship the rest later; compare lifts.
Geo splits: publish in Region A first, Region B later; analyze deltas.
Message tests: adjust a claim in one place; audit how assistants’ framing changes.
Use Difference-in-Differences or synthetic control where sample sizes allow.
Artifact: before/after plots with intervention dates and confidence bands.
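The arithmetic behind a basic geo-split difference-in-differences is short enough to sanity-check by hand. A minimal sketch, assuming weekly branded-search sessions exported per region to CSV; the column names, region labels, and intervention date are placeholders:

```python
"""Difference-in-differences on branded-search sessions for a geo split."""
import pandas as pd

df = pd.read_csv("branded_sessions.csv", parse_dates=["week"])  # week, region, branded_sessions
INTERVENTION = pd.Timestamp("2025-06-01")  # when Region A's reference pages shipped
TREATED, CONTROL = "region_a", "region_b"

pre = df["week"] < INTERVENTION
post = df["week"] >= INTERVENTION

def mean_sessions(region, period):
    mask = (df["region"] == region) & period
    return df.loc[mask, "branded_sessions"].mean()

# DiD = (treated post - treated pre) - (control post - control pre)
did = (mean_sessions(TREATED, post) - mean_sessions(TREATED, pre)) - (
    mean_sessions(CONTROL, post) - mean_sessions(CONTROL, pre)
)
print(f"Estimated incremental branded sessions per week: {did:.1f}")
```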
Why You Also Need Qualitative Insight
In classic branding, you can’t look inside your audience’s head. Here, you can—the “audience” is an LLM, and you can read what it thinks in its own words. Quant is necessary, but qualitative audits are your advantage:
Framing reviews: Is the assistant describing your positioning and differentiators correctly? What’s missing?
Comparative reads: How does it explain you vs. your competitors across different buyer intents?
Failure analyses: Where does it hedge, hallucinate, or under-specify—and which gaps in your corpus caused that?
Confidence heuristics: Note language that signals certainty (or uncertainty) and tie it to specific evidence you can strengthen.
SME pass: Have subject-matter experts rate answer usefulness and fidelity on a rubric; attach remediation tasks to low-scoring areas.
These qualitative passes are not “vibes.” They are direct windows into the model’s internal picture of your brand—and a roadmap for what to fix.
A Minimal Instrumentation Stack
Server logs → detect AI/search crawlers and visualize cadence.
Answer snapshots → periodic exports of assistant outputs for representative queries.
Claims registry → your canonical facts with corroboration links and update owners.
Brand-lift board → branded queries, direct sessions, “How heard,” sales mentions.
Change journal → what you updated, when, and why (to explain answer shifts).
Name the dashboard AI Relations to avoid forcing it into SEO boxes.
Metrics That Actually Help
Inclusion Score: coverage × consistency of appearances across scenarios.
Time-to-Reflection (days): median lag from content update to answer change.
Corroboration Index: share of key claims aligned across ≥2 authoritative sources.
AI-Lift Ratio: incremental branded/direct growth during strong-inclusion periods vs. baseline.
Framing Quality: audited share of answers that are accurate and favorable.
You’re not chasing perfection—just signal strong enough to steer.
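If you want starting points for the math, here are back-of-the-envelope versions of those metrics. A minimal sketch; the input shapes are assumptions about how snapshot and claims data might be stored, not a fixed schema:

```python
"""Rough implementations of the AI Relations metrics above."""
from statistics import median

def inclusion_score(snapshots):
    """Coverage (share of scenarios with any appearance) times consistency
    (share of runs that include you, among covered scenarios).
    Each snapshot: {"scenario": str, "included": bool}."""
    if not snapshots:
        return 0.0
    by_scenario = {}
    for s in snapshots:
        by_scenario.setdefault(s["scenario"], []).append(s["included"])
    covered = [runs for runs in by_scenario.values() if any(runs)]
    if not covered:
        return 0.0
    coverage = len(covered) / len(by_scenario)
    consistency = sum(sum(runs) / len(runs) for runs in covered) / len(covered)
    return coverage * consistency

def time_to_reflection(lags_in_days):
    """Median days from a page update to a visible answer change."""
    return median(lags_in_days)

def corroboration_index(claims):
    """Share of key claims aligned across at least two authoritative sources.
    Each claim: {"aligned_sources": [url, ...]}."""
    return sum(len(c["aligned_sources"]) >= 2 for c in claims) / len(claims)

def ai_lift_ratio(branded_growth_strong, branded_growth_baseline):
    """Branded/direct growth during strong-inclusion windows vs. baseline."""
    return branded_growth_strong / branded_growth_baseline
```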
Common Pitfalls (and fixes)
Chasing “answer position.” Assistants synthesize; optimize substance and corroboration, not cosmetics.
Treating reference like blog copy. Write for citation: unambiguous, structured, exhaustive.
No SLAs for facts. Stale specs/pricing erode trust; assign owners and update windows.
Attribution envy. Don’t force last-click models to explain PR-style impact. Triangulate instead.
Where This Fits
This isn’t AEO/GEO with new buzzwords. AEO/GEO promises SEO-like dashboards; assistants won’t give you those. AI Relations accepts that assistants reward evidence, access, and upkeep. So we measure like PR—by triangulating inclusion, framing, corroboration, and lift—and we act with enough confidence to keep investing.
If your reps are hearing “ChatGPT told me to talk to you,” your attribution is working—even if Analytics never shows “/ref=assistant.”