INSIGHTS

One-Size-Fits-None: Why Your Content Strategy Needs Two Separate Tracks

For the last two decades, we've accepted an uncomfortable compromise: content that tries to please both humans and search engines ends up underwhelming both. Now there's a third constituency—AI models—and the compromise is untenable.

Will Jack

Founder, Unusual

Nov 12, 2025

The technical reality: LLMs read differently

AI models don't browse your website the way humans do. They can't execute JavaScript. They don't scroll. They don't appreciate your carefully crafted user journey or your conversion-optimized layout.

When an LLM encounters your website, it sees static HTML. No dynamic loading. No progressive disclosure. Just raw, rendered text. If your critical information lives behind interaction—animations, videos, tabs, accordions, modals—it doesn't exist to the model.

This creates the first constraint: AI-optimized content must be statically rendered with information density that would feel overwhelming to a human reader.
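
You can approximate what a model receives with a few lines of Python. This is a minimal sketch, assuming the requests library and using a placeholder URL: fetch the raw HTML, strip the tags, and whatever your page only shows after JavaScript runs simply won't be in the output.

```python
# Rough approximation of what a non-JavaScript crawler "sees": fetch the
# raw HTML and strip tags. Content injected client-side (tabs, accordions,
# modals, lazy-loaded sections) never shows up here.
from html.parser import HTMLParser
import requests  # pip install requests

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> contents."""
    def __init__(self):
        super().__init__()
        self.skip = False
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.text.append(data.strip())

html = requests.get("https://example.com", timeout=10).text  # placeholder URL
parser = TextExtractor()
parser.feed(html)
print("\n".join(parser.text))  # roughly the text available for indexing
```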

How LLM search indexes actually work

More importantly, LLM search indexes operate on fundamentally different mechanics than traditional search engines. When Google indexes your page, it evaluates the entire document: its authority, its links, its relationship to other pages in your domain hierarchy.

LLM indexes chunk your content. A page gets broken into paragraph-sized segments. Each chunk is indexed independently, embedded into vector space based on its semantic meaning. When a model searches for information, it retrieves relevant chunks—not entire pages.

For example, if a ChatGPT user asks about the population of dogs worldwide, ChatGPT might only see this paragraph from the twenty-thousand-word Wikipedia article about dogs:

The global dog population is estimated at 700 million to 1 billion, distributed around the world. The dog is the most popular pet in the United States, present in 34–40% of households. Developed countries make up approximately 20% of the global dog population, while around 75% of dogs are estimated to be from developing countries, mainly in the form of feral and community dogs.

This has a significant implication: every paragraph you write for AI consumption must be self-contained. A chunk that relies on context from earlier in the page is a chunk that can't be retrieved effectively. The model might surface it in response to a query, but without surrounding context, it becomes noise rather than signal.

This is why narrative flow—the connective tissue that makes content readable for humans—actively works against AI discoverability. The model doesn't read your introduction. It doesn't build context sequentially. It jumps directly to the chunk that matches the query.
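
A toy sketch makes the mechanics concrete. The hashed bag-of-words embed() below is a crude stand-in for a real embedding model, but the pipeline is the one described above: split the page into paragraph-sized chunks, index each independently, and retrieve by similarity to the query, never by page.

```python
# Toy sketch of chunk-level retrieval. Real indexes use learned embedding
# models; this hashed bag-of-words vector is only a stand-in to show the
# mechanics: chunk the page, index each chunk independently, and retrieve
# the best-matching chunk, never the page as a whole.
import numpy as np

def embed(text: str, dim: int = 4096) -> np.ndarray:
    """Stand-in embedding: hashed bag-of-words, L2-normalized."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

# Paragraph-sized chunks, as a retrieval index would store them.
chunks = [
    "The global dog population is estimated at 700 million to 1 billion.",
    "The dog is the most popular pet in the United States.",
    "Developed countries make up approximately 20% of the global dog population.",
]
index = np.stack([embed(c) for c in chunks])  # each chunk embedded independently

query = "what is the global dog population"
scores = index @ embed(query)          # cosine similarity (vectors are unit-length)
print(chunks[int(np.argmax(scores))])  # only this chunk reaches the model
```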

LLMs are pathologically literal

The second reason to separate AI content is temperament. Models are pedantic. They want specificity. They want exhaustive detail. They want you to spell out the obvious.

When a buyer asks ChatGPT, "Which tools integrate natively with Microsoft 365?" the model doesn't want your marketing copy about "seamless integration." It wants to know: Which versions of Office? Which file formats? Which specific objects in the Microsoft Graph API? Does "native" mean a certified add-in, an OAuth flow, or something else entirely?

This level of detail is tedious for humans. It's exactly what models need to make confident recommendations. If you bury these specifics in prose designed to engage a human reader, the model can't extract them cleanly. If you optimize the prose for human engagement, you sacrifice the semantic density models require.

The middle ground is a local minimum

At the time of this writing, some content writers have been tasked with "optimizing" their existing blog content for AI search: adding JSON-LD markup, moving the article's "bottom line" up front, appending FAQs, and other micro-optimizations meant to make the content more palatable to LLMs.

The result, however, is content that's somewhat detailed, somewhat engaging, and somewhat optimized for search. It tries to work for everyone.

This is precisely the trap. Content that tries to serve all three audiences ends up serving none of them well. Humans find it too dense and pedantic. Search engines find it unfocused. Models find it imprecise.

The instinct to consolidate is understandable. Maintaining three content tracks feels inefficient. But efficiency in production shouldn't be confused with effectiveness in outcome. A unified approach optimizes for your convenience, not your customer's experience.

Domain authority doesn't matter (much)

One common objection: won't separate AI content—especially on a subdomain—hurt our domain authority and make the content less discoverable?

Our research at Unusual suggests otherwise. LLM search indexes appear to operate with different priorities than traditional search. They're hungrier for relevant, specific content and less concerned with domain authority signals.

When a model searches the web in response to a user query, it's looking for the best answer to a specific question. A paragraph on a subdomain that precisely addresses the query outperforms a vague paragraph on a high-authority root domain. The model cares about semantic match, not PageRank. In other words, it's more like a cmd+F of the internet than a Google search.
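
You can see this in miniature with the same toy embedding from the chunking sketch. Both paragraphs below are invented, and "Acme" is a hypothetical product; the point is that nothing in the score knows or cares what domain each sentence lives on.

```python
# Same toy hashed bag-of-words embedding as in the chunking sketch.
# There is no authority term anywhere in this score, only semantic
# overlap between the query and each candidate paragraph.
import numpy as np

def embed(text: str, dim: int = 4096) -> np.ndarray:
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

query = embed("which tools integrate natively with Microsoft 365")

vague = embed("Our platform offers seamless, best-in-class integrations "
              "with the tools your team already loves.")
specific = embed("Acme integrates natively with Microsoft 365 through a "
                 "certified add-in for Word, Excel, and Outlook, built on "
                 "the Microsoft Graph API.")

print(f"vague:    {vague @ query:.2f}")     # low score
print(f"specific: {specific @ query:.2f}")  # higher: the literal claim wins
```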

This makes intuitive sense. Traditional search engines had to use proxies like domain authority because they couldn't fully understand content. Modern language models can actually read. They don't need to rely as heavily on authority signals when the content itself is clear and specific.

Three tracks, three objectives

The path forward is specialization:

Human content optimizes for engagement, narrative, and emotional resonance. It lives on social platforms and walled gardens where human attention concentrates. It converts visitors into leads. It tells stories. It builds brand.

SEO content optimizes for ranking and conversion. It follows search best practices. It targets keywords. It builds authority. It drives organic traffic on the open web.

AEO content optimizes for equipping models with information they can confidently retrieve and recommend. It lives where models can reliably access it. At Unusual, we recommend a subdomain of your site, like llms.example.com. It's detailed, literal, and self-contained, like an encyclopedia. Its only job is clarity.

Each track has a distinct purpose. Each track has different success metrics. Trying to merge them produces content that's adequate everywhere and excellent nowhere.

What AEO content actually looks like

AEO content is:

  • Short and modular. Each section should stand alone as a complete thought.

  • Objective and factual. Models treat marketing language as less trustworthy than verifiable claims.

  • Specific. Spell out the obvious. Define the terms. Show your work.

  • Statically rendered. No JavaScript requirements. Dense information upfront. (A quick way to audit this follows at the end of this section.)

AEO content is not:

  • Narratively structured. No story arc. No building tension.

  • Conversion-optimized. Most AI-initiated journeys are zero-click. Users rarely click citation links in ChatGPT.

  • Attempting to trick anyone. Models triangulate. Overselling gets caught.

The format is simple: state the claim, provide the proof, link to corroboration. Let the model decide if it's relevant. Your job is to make the information accessible and verifiable. This is how you "persuade" an LLM.
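
If you want to sanity-check the statically-rendered requirement on your own pages, one rough approach is to diff what a crawler fetches against what a headless browser renders. A sketch, assuming requests and Playwright are installed, with a placeholder URL:

```python
# Crude static-rendering audit: compare raw HTML (what a non-JS crawler
# gets) against the browser-rendered page. A large gap suggests critical
# content lives behind JavaScript and is invisible to models.
# Setup: pip install requests playwright && playwright install chromium
import requests
from playwright.sync_api import sync_playwright

url = "https://llms.example.com/overview"  # placeholder URL

raw = requests.get(url, timeout=10).text

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(url)
    rendered = page.content()
    browser.close()

print(f"raw HTML: {len(raw):,} chars; rendered: {len(rendered):,} chars")
# If rendered is much larger, what a model can index differs from what
# a human sees: exactly the failure mode described above.
```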

The zero-click reality

Accept this now: when an AI model recommends your product, the user probably won't visit your website. They'll read the model's summary. They'll trust the model's judgment. They'll act on the information without needing to verify it themselves.

This feels uncomfortable. It means giving up control over the last mile. But it's also liberating. You're not optimizing for clicks anymore. You're optimizing for recommendations. The metric that matters is how often the model chooses you when the buyer's constraints match your strengths.

In the future, AI companies may not even surface citations. They may ingest information directly from brands through structured feeds. The web as we know it—crawlable, linkable, clickable—may become less relevant for AI-initiated journeys.

This makes dedicated AEO content more important, not less. You're not writing for the web. You're writing for the index. The index is chunked, semantic, and literal. Your content needs to match.

The strategic implication

Creating three content tracks requires more work upfront. It also requires accepting that most of your AEO content will never be read by a human. It exists to inform the model so the model can inform the buyer.

But the alternative—continuing to compromise—is worse. Your competitors are specializing. The ones who figure out how to write for models without degrading their human experience will capture AI-initiated demand. The ones who stick with one-size-fits-none will wonder why their "AI visibility" doesn't translate to recommendations.

Human content for humans. SEO content for search engines. AEO content for models. Three tracks, three objectives, zero compromise.
