The Complete Guide to AI Attribution

This playbook is for marketing, growth, and revenue operations teams who need to measure AI's effect on their pipeline. It assumes a world in which AI is actively shaping opinions across every channel you run, and increasingly making purchasing decisions directly on behalf of your buyers.

Keller Maloney

Unusual - Founder

Apr 28, 2026

AI is a new audience for brands, and it's affecting every channel. It reads everything you and your competitors have published, forms opinions, and advises your buyers on what to purchase. When buyers delegate the decision, AI makes it directly.

AI's perception of your brand has two components:

Awareness. Does AI know you exist? When your ICP persona asks a question, do AI models even consider you as an option?

Alignment. When AI does consider you, how does it describe your strengths and weaknesses on the dimensions it treats as relevant to that buyer? Those dimensions emerge from training data, reviews, documentation, and competitor positioning — whatever AI has implicitly concluded matters for that persona. Alignment is how AI describes you on those dimensions: as strengths, as weaknesses, or not at all.

Mapping what AI believes about you is a research discipline, closer in shape to qualitative customer research than to analytics. Doing that work well at meaningful scale across personas, models, and buying contexts is why we built Unusual. This playbook begins where that perception research ends.

Knowing what AI believes tells you the direction: advocate or detractor on each dimension. The question this playbook answers: how much is AI's perception of your brand actually moving your pipeline, for which personas, and where? The answer comes from triangulating across several signals of varying quality. The rest of the guide walks through why attribution for this audience is harder than for others, the signals worth measuring, the metrics that look like attribution but measure something else, and the review cycle that turns signals into decisions.

Part 1: Why AI attribution is hard

AI behaves as both a channel and a force on every other channel you run. The two require different measurement.

As a channel, AI is easy. Some buyers find you directly through ChatGPT, Claude, or Perplexity and click through with a recognizable UTM. Track those referrals like any other channel and move on.

As a force, AI is much harder. A buyer who arrives through paid search may have spent half an hour in a ChatGPT conversation about you first. Others never arrive at all, because an AI model told them you weren't the right fit. None of it shows up in your channel attribution, and all of it is shaping who enters your pipeline, who converts, and who you quietly lose.

This shape is familiar. Brand and influencer attribution have handled it for decades: a direct channel (branded search, UTM codes) and a broader force that makes every other channel perform better or worse depending on how buyers feel about you. The playbooks that measure those forces are the model for this work.

Part 2: The approach — triangulation

No single signal answers the attribution question cleanly. Every signal is biased, partial, or slow: self-reports are biased, server logs show attention without opinion, cohort tests are directional, branded search is noisy. A framework built on a single metric will mislead.

Triangulation is the approach. Collect several signals of varying causal strength and read them together. When the signals agree, the answer is clear. When they disagree, the disagreement itself is diagnostic: it points to where the picture is incomplete and what to instrument next. A mature setup produces a periodic read, quarterly at minimum, that answers three questions at once: is AI an advocate or a detractor, by how much, and on which dimensions for which personas.

Part 3: The hierarchy of signals

The signals below are ranked by causal strength. The ones at the top are strong enough to act on in isolation. The ones lower down are supporting evidence, useful for confirmation but weaker standing alone. Most teams should run two or three signals well rather than seven badly. Start at the top.

1. The AI-affected cohort

Ask new leads whether they consulted an AI model about you. For self-serve and mid-market segments, the intake form is usually fine. Enterprise buyers might be a little embarrassed to admit they used AI to evaluate you, so hold the question until you've built some trust — a discovery call or a second touchpoint works better than the form. Tag the ones who say yes, and compare that cohort's pipeline performance against the rest of your leads on conversion rate, cycle time, and average contract value.

Higher conversion in the AI-affected cohort means AI is qualifying buyers on your behalf. Lower conversion means AI is disqualifying them in conversations you will never see. Cross-reference this with perception research (from the overview) to understand why.

The shape of the cohort's funnel tells you more than the headline does. If AI-affected leads book demos at the same rate but close twenty percent slower, AI is acting as a research layer that extends consideration. A faster close at lower contract value points to AI pre-qualifying in ways that cap deal size. If the cohort shows up fine but loses at proposal more often than everyone else, a specific claim AI is making about you is surviving into late-stage conversations.
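The cohort comparison itself is simple arithmetic once leads are tagged. A minimal sketch, assuming a hypothetical `consulted_ai` flag plus stage, cycle-time, and contract-value fields on each lead record (your CRM's field names will differ):

```python
from statistics import mean

def cohort_read(leads):
    """Compare the AI-affected cohort against everyone else on
    conversion rate, cycle time, and average contract value."""
    def summarize(cohort):
        won = [l for l in cohort if l["stage"] == "closed_won"]
        return {
            "n": len(cohort),
            "conversion": len(won) / len(cohort) if cohort else 0.0,
            "avg_cycle_days": mean(l["cycle_days"] for l in won) if won else None,
            "avg_acv": mean(l["acv"] for l in won) if won else None,
        }
    ai = [l for l in leads if l["consulted_ai"]]
    rest = [l for l in leads if not l["consulted_ai"]]
    return {"ai_affected": summarize(ai), "everyone_else": summarize(rest)}

# Illustrative records, not real benchmarks
leads = [
    {"consulted_ai": True, "stage": "closed_won", "cycle_days": 45, "acv": 30_000},
    {"consulted_ai": True, "stage": "closed_lost", "cycle_days": 60, "acv": 0},
    {"consulted_ai": False, "stage": "closed_won", "cycle_days": 30, "acv": 50_000},
    {"consulted_ai": False, "stage": "closed_lost", "cycle_days": 20, "acv": 0},
]
print(cohort_read(leads))
```

Segment the same read by persona before trusting it: a cohort that looks flat overall often hides one persona AI is helping and another it is hurting.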

2. Close-reason cross-reference

The strongest causal signal in the playbook. Take what AI says about you (the strengths, the weaknesses, the dimensions it treats as important for your personas) and line it up against the reasons your buyers cite for winning or losing deals.

When buyers cite strengths on dimensions AI also describes favorably, AI is reinforcing what's working for you. When buyers cite weaknesses on dimensions AI warns them about, AI is actively costing you deals. And when buyers cite dimensions AI hasn't picked up at all, you have a narrative opportunity: the dimension matters to the buyer, and your side of the story hasn't propagated yet.

Running this well requires two things that most teams already have half of. The first is structured close reasons in your CRM, tagged to specific attributes rather than a single free-text field. The second is a regular read on what AI currently believes about you across your personas — the belief-level research described in the overview. Unusual is what we built for that side; the move is available to any team willing to instrument both halves.
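The cross-reference reduces to a per-dimension join between the two halves. A rough sketch, with both inputs simplified to flat labels (real close-reason data is messier, and the category names here are illustrative):

```python
def cross_reference(ai_view, close_reasons):
    """Classify each dimension buyers cite against what AI says about it.

    ai_view: dict mapping dimension -> "strength" | "weakness" (AI's current read)
    close_reasons: dict mapping dimension -> "won" | "lost" (what buyers cite)
    """
    out = {}
    for dim, outcome in close_reasons.items():
        if dim not in ai_view:
            out[dim] = "narrative opportunity"   # buyers care; AI hasn't picked it up
        elif outcome == "won" and ai_view[dim] == "strength":
            out[dim] = "AI reinforcing"          # AI amplifies what's already working
        elif outcome == "lost" and ai_view[dim] == "weakness":
            out[dim] = "AI costing deals"        # AI's warning survives into deals
        else:
            out[dim] = "mixed / investigate"
    return out

ai_view = {"integrations": "weakness", "support": "strength"}
close_reasons = {"integrations": "lost", "support": "won", "security": "lost"}
print(cross_reference(ai_view, close_reasons))
# {'integrations': 'AI costing deals', 'support': 'AI reinforcing', 'security': 'narrative opportunity'}
```

The "AI costing deals" bucket is where positioning and proof work gets prioritized first; the "narrative opportunity" bucket is where new content goes.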

3. Objection patterns

Your sales calls are a continuous attribution dataset. When a prospect says "I read that you don't integrate with Salesforce," or "I heard your pricing is usage-based," or "someone told me you're SMB-only," they're reporting a belief about you that came from outside your controlled channels. Capture those statements as they happen, in whatever system your sales team already uses.

When the same belief starts appearing across unrelated prospects who didn't come from the same source, you're watching a narrative spread through the market. Increasingly, AI is the proximate origin. Cross-referencing the recurring objections against what AI currently believes about you tells you whether the belief is AI-propagated and needs direct countering, or whether it's spreading through other channels and wants a different response.

4. Bot crawl logs

When a buyer asks ChatGPT a question about you, ChatGPT's agent sometimes visits your site in real time to answer them. When that happens, your server logs record it with a specific user-agent string (ChatGPT-User, Claude-User, and the like) that identifies the visit as AI-driven. Filtering your logs for these user-agents tells you when AI is paying attention to you on behalf of live buyers, and which of your pages it visits to answer them.

Volume tells you a general trend: rising live-user traffic means AI is considering you in more buyer conversations this month than last. Concentration tells you more. If those live visits keep landing on your integrations page, buyers are asking AI about your integrations. If they land on a competitor comparison post you wrote, buyers are researching that competitor and you are the nearest authoritative source on them.
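Both the volume and concentration reads fall out of one pass over the access log. A sketch assuming the common combined log format, with the user-agent as the last quoted field (extend the agent list with whatever strings your own logs show):

```python
import re
from collections import Counter

AI_AGENTS = ("ChatGPT-User", "Claude-User")  # extend with other agent strings you see

# Combined log format: ... "GET /path HTTP/1.1" ... "user-agent" at end of line
LINE_RE = re.compile(r'"(?:GET|POST) (\S+) [^"]*".*"([^"]*)"\s*$')

def ai_visits_by_page(log_lines):
    """Count AI-driven visits per path, so concentration is visible at a glance."""
    pages = Counter()
    for line in log_lines:
        m = LINE_RE.search(line)
        if m and any(agent in m.group(2) for agent in AI_AGENTS):
            pages[m.group(1)] += 1
    return pages

logs = [
    '1.2.3.4 - - [01/May/2026:10:00:00 +0000] "GET /integrations HTTP/1.1" 200 512 "-" "Mozilla/5.0; ChatGPT-User/1.0"',
    '1.2.3.5 - - [01/May/2026:10:01:00 +0000] "GET /pricing HTTP/1.1" 200 512 "-" "Mozilla/5.0 (regular browser)"',
    '1.2.3.6 - - [01/May/2026:10:02:00 +0000] "GET /integrations HTTP/1.1" 200 512 "-" "Claude-User/1.0"',
]
print(ai_visits_by_page(logs).most_common())
# [('/integrations', 2)]
```

Run the same count weekly and the totals give you the volume trend; the per-path breakdown gives you the concentration read.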

Attention on its own does not settle the advocate-or-detractor question. It tells you where the conversation is happening, which is a precondition for influence in either direction.

This signal will only grow in weight. As agents start acting on their users' behalf — booking demos, starting trials, eventually purchasing — they become customers in their own right. Once an agent can book a demo on its own, agent conversion rate is a pipeline metric you track directly.

5. Branded search lift

When AI mentions you favorably in a conversation that doesn't produce a click, some fraction of those users will type your name into Google a minute later. Branded search volume is the shadow of AI conversations you can't observe directly.

Two things make it useful. Branded search moves fast (increases in AI mentions show up in Search Console within days), and the data is already being collected for free. When you ship a content investment, publish a case study, or close an alignment gap, watch branded search for the next two weeks. When your live AI traffic spikes, cross-reference with the same window. The correlations are imperfect, but the patterns are usually visible enough to trust at the monthly level.
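The cross-reference with live AI traffic can be as simple as a correlation between two weekly series, one exported from Search Console and one from your log counts. A sketch with illustrative numbers (the figures are made up, and a handful of weeks is only a directional read, not proof):

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length weekly series."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

branded_search = [120, 135, 180, 240]  # weekly branded queries (illustrative)
ai_visits = [30, 40, 55, 80]           # weekly live AI visits from the logs (illustrative)
r = pearson(branded_search, ai_visits)
print(f"correlation: {r:.2f}")
```

A strongly positive number over a month says the two shadows are moving together, which is what you expect when AI conversations about you are driving both.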

6. Lost-deal interviews

Retrospective conversations with lost prospects produce the cleanest qualitative attribution available. During an active deal, buyers have reasons to hide where their views came from. After the deal is closed-lost, those reasons disappear. Buyers describe the specific perceptions that drove them to a competitor, and they often name AI as the source.

The data is qualitative and slow, maybe five to fifteen interviews per quarter for most mid-market B2B companies. The output is a named list of perceptions costing you deals. Feed the list directly into the close-reason cross-reference, and use it to prioritize the positioning and proof work that comes out of the quarterly review.

7. UTMs and self-report

These are the signals most teams track and the signals most teams should rely on least. utm_source=chatgpt, utm_source=claude, and the lead-form question "where did you hear about us?" capture the small slice of buyers who clicked through from an AI surface or who self-identify AI as their discovery source.

Both measure discovery only. They undercount even that, because most AI-influenced buyers never click through from an AI surface. The conversation is the influence, and it happens outside your analytics. Keep these signals as a floor estimate of AI as a channel and a backdrop for the stronger signals above. The UTM conversion rate is biased toward buyers already convinced enough to click, so don't draw strong conclusions from it on its own.
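Computing the floor is a one-liner over your landing URLs. A minimal sketch using the standard library, with the recognized source values as an assumption you should match to your own UTM conventions:

```python
from urllib.parse import urlparse, parse_qs
from collections import Counter

AI_SOURCES = {"chatgpt", "claude", "perplexity"}  # match to your own UTM conventions

def ai_channel_floor(landing_urls):
    """Count leads whose landing URL carries an AI utm_source.
    A floor estimate only: most AI-influenced buyers never click through."""
    sources = Counter()
    for url in landing_urls:
        qs = parse_qs(urlparse(url).query)
        source = qs.get("utm_source", [""])[0].lower()
        if source in AI_SOURCES:
            sources[source] += 1
    return sources

urls = [
    "https://example.com/?utm_source=chatgpt&utm_medium=referral",
    "https://example.com/pricing?utm_source=google",
    "https://example.com/?utm_source=claude",
]
print(ai_channel_floor(urls))
```

Report the result as "at least this much," never as the size of AI's influence.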

Part 4: Signals to discount

Share of voice / mention rate

Share of voice is the fraction of AI answers that mention your brand across a hand-picked set of prompts. The problem is the prompts. They're picked by you or your vendor, not by your buyers. Tiny word changes ("best" to "top") swing the number materially. Real buyers ask questions in their own phrasings, in long conversations, with context about themselves the model has that you cannot see. Optimizing against a fixed prompt set improves your score on that prompt set and generally fails to generalize to the conversations that matter. A longer treatment is in Your AI "Share of Voice" is Meaningless.

Opaque brand scores

A handful of AEO tools roll several underlying metrics into a single composite score, framed as a one-number read on your AI presence. The problem is opacity. If you can't explain what the number measures in terms of buyer behavior or AI belief, you can't optimize against it honestly. You can only move it in whatever direction the tool's hidden weights happen to reward. Any composite score that arrives without a transparent explanation of its inputs and weights is telling you more about the tool than about your brand.

Any rate metrics based on prompt tracking

A close cousin of share of voice. Any metric computed as "your brand's rate of X across our prompt set" (citation rate, inclusion rate, recommendation rate) inherits the same problem. The number is hypersensitive to the prompts the vendor picked. When the vendor updates its prompt set (and they do, silently), your historical numbers become incomparable. When the underlying models update (and they do constantly), your numbers move for reasons that have nothing to do with your brand. These metrics work as narrow activity indicators within the frame the vendor defined. They have no place in a pipeline-attribution framework.

Part 5: Running the playbook

The setup work is one-time and mostly reuses data you already have. Log-based signals come from standard server-access logging, filtered for AI user-agents. Branded search is already in Search Console. UTMs and the lead-form question take an afternoon to add. Close reasons and objection tracking require discipline from your sales team more than new software. The hardest piece to stand up is the perception side: regular, structured reads on what AI believes about you for your core personas. Start as the overview describes, and automate it once the volume makes that worthwhile.

Once the signals are running, the work is a periodic review. Quarterly is the minimum, more often if your data volume supports it. The review has three moves:

  1. Pull the current read on each signal in the hierarchy.

  2. Cross-reference what AI believes about you with what your buyers are citing, in won and lost deals, in objections, and in lost-deal interviews.

  3. Synthesize: is AI an advocate or a detractor right now, for which personas, on which dimensions, by how much?

The output of every review is a named list of perceptions worth moving in the next cycle and the positioning, content, or proof work to move them. That's the whole loop. It's the same shape brand marketers have always used to shift an audience's view of a company, applied to an audience that happens to be everywhere and to be talking directly to your buyers.

What comes next

AI is an audience your brand can see and measure now. The tools to track what it thinks of you and to attribute its effect on your pipeline both exist. The playbook's job is to put the signals in one place, set their cadence, and produce an honest quarterly answer to the question this guide opens with: how much is AI's perception of your brand actually moving your pipeline, for which personas, and where?

Once you have that answer, the work that comes next is the work brand marketers have always done. Positioning, proof, content, and time, aimed at moving what an audience believes about you. What's changed is that the audience is now everywhere your buyers are, answering their questions in real time, and it is one you can finally measure.