INSIGHTS

The Slop Era: Why “Garbage In → Garbage Out” Rules GenAI

Slop isn’t a model problem; it’s a data and incentives problem. Feed models generic, low-signal text and you’ll get generic, low-signal output—even from the best systems.

Keller Maloney

Unusual - Founder

Oct 27, 2025

“AI slop” is the catch-all for low-effort, low-context, high-volume content churned out by generative tools—blog spam, cookie-cutter product pages, uncanny lifestyle photos, even fake news sites. Think of it as the new spam: machine-scaled, plausible-sounding, and mostly useless.

Why the internet is filling with it

  • Cheap scale. New tools make mass production of “okay-looking” language and images nearly free. That’s great for volume—and terrible for quality.

  • Misaligned incentives. When thin, AI-written pages briefly sneak into rankings or feeds, they create a payoff loop for content farms.

  • Output trains the next model. If we flood the web with derivative AI output, future models train on that slop. Polluted data → polluted models.

“Superintelligent,” yet underwhelming: the missing context

State-of-the-art models can reason, summarize, and write. But they still underperform when they’re missing context—the things humans at a company just know:

  • Business context: what you sell, why you win, what you’ll never do.

  • Buyer context: ICP (ideal customer profile), pains, objections, evaluation criteria.

  • Channel context: where/why the recommendation is happening (email, SERP, assistant chat, marketplace, in-app doc).

Without those inputs, you get generic, technically correct, commercially flat content. The original GIGO principle still applies: model horsepower can’t overcome missing signal.

What high-context inputs actually look like

To turn “smart but vague” into “useful and specific,” feed models the first-party knowledge the open web won’t have:

  • Company canon: positioning, reasons-to-believe, pricing logic, guarantees, objection handling, compliance constraints.

  • Customer reality: segment-by-segment pains, win/loss themes, real FAQs from sales/support.

  • Product truth: specs, integration gotchas, implementation timelines, SLAs, security stances.

Ground generations on this corpus with retrieval-augmented generation (RAG) so the model cites, quotes, and synthesizes from your best sources instead of guessing.
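To make the grounding step concrete, here is a minimal sketch in Python. The corpus entries, the toy lexical scoring, and the prompt wording are illustrative assumptions, not any vendor's API; a production setup would use an embedding index over your actual canon, but the shape is the same: retrieve labeled chunks, then force the model to answer from them with citations.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    source: str  # canonical URL or doc ID the model is told to cite
    text: str    # one dense, well-labeled claim or answer per chunk

# Stand-in for a company canon: positioning, product truth, real FAQs (illustrative values).
CANON = [
    Chunk("docs/security", "SOC 2 Type II certified; SSO via Okta and Azure AD."),
    Chunk("docs/migration", "Typical migration takes two to three weeks with a dedicated engineer."),
    Chunk("site/pricing", "Per-seat pricing; volume discounts start at 50 seats."),
]

def retrieve(query: str, corpus: list[Chunk], k: int = 2) -> list[Chunk]:
    """Toy lexical-overlap scoring; a real system would use embeddings or hybrid search."""
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda c: -len(terms & set(c.text.lower().split())))
    return ranked[:k]

def grounded_prompt(query: str, corpus: list[Chunk]) -> str:
    """Build a prompt that forces the model to answer only from retrieved, citable chunks."""
    context = "\n".join(f"[{c.source}] {c.text}" for c in retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below. Cite the source ID for every claim; "
        "if the sources do not answer the question, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )

print(grounded_prompt("Does it work with Okta and is it SOC 2 compliant?", CANON))
```

The design choice that matters is the instruction to refuse when the sources are silent: that is what turns "smart but vague" into "useful and specific," because gaps surface as missing canon pages rather than confident guesses.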

The marketer’s playbook to beat the slop

  1. Publish a high-signal canon built for retrieval.
    Create a clean, crawlable “company wiki” that answers decisive buyer questions (“Works with Okta?”, “SOC 2?”, “Migration time?”). Fewer pages, denser content.

  2. Structure for questions, not prose.
    Use crisp headings, scannable FAQs, and entity-rich copy. Retrieval systems lock onto specific, well-labeled chunks.

  3. Ground every generation.
    Provide a mini-brief with ICP, JTBD (the job to be done), one core claim, two proofs, one differentiator, and 3–5 canonical citations. Make “sources or it didn’t happen” your default. A sketch of such a brief follows this list.

  4. Instrument model consumption.
    Track which pages assistants retrieve and cite. Treat “assistant referrals,” “retrieval coverage,” and “citation share” as KPIs and iterate the docs that get pulled into answers. A sketch of these metrics follows this list.

  5. De-incentivize slop internally.
    Add SME review gates for claims and differentiators. Reward assistant citations and qualified pipeline—not word count.

  6. Refresh before it rots.
    Keep specs, pricing, and policies current. Archive or canonize old versions so they don’t dominate retrieval. A simple staleness check is sketched after this list.
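For step 3, the mini-brief can be a small structured object rendered ahead of the canonical sources in every prompt. The field names and example values below are hypothetical, shown only to illustrate the shape:

```python
# Illustrative mini-brief; field names and values are hypothetical.
MINI_BRIEF = {
    "icp": "Mid-market RevOps leads evaluating CRM add-ons",
    "jtbd": "Cut manual pipeline reporting from hours to minutes",
    "core_claim": "Live pipeline reporting with zero spreadsheet exports",
    "proofs": ["SOC 2 Type II report", "Named customer case study"],
    "differentiator": "Native two-way CRM sync, no middleware",
    "citations": ["site/pricing", "docs/security", "docs/integrations"],
}

def render_brief(brief: dict) -> str:
    """Flatten the brief into the text every generation starts from."""
    return "\n".join([
        f"ICP: {brief['icp']}",
        f"Job to be done: {brief['jtbd']}",
        f"Core claim: {brief['core_claim']}",
        "Proofs: " + "; ".join(brief["proofs"]),
        f"Differentiator: {brief['differentiator']}",
        "Cite only these sources: " + ", ".join(brief["citations"]),
    ])

print(render_brief(MINI_BRIEF))
```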
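For step 4, the KPIs fall out of a simple log of assistant answers and the URLs each one cited. The log format and domains below are hypothetical; the point is that citation share and retrieval coverage are ordinary ratios once you have that log:

```python
# Hypothetical log of assistant answers and the URLs each one cited.
ANSWER_LOG = [
    ("best pipeline reporting add-on", ["vendor-a.com/blog", "oursite.com/docs/reporting"]),
    ("is oursite soc 2 compliant", ["oursite.com/docs/security"]),
    ("crm reporting tools compared", ["vendor-b.com/compare"]),
]
CANON_PAGES = {"oursite.com/docs/reporting", "oursite.com/docs/security", "oursite.com/pricing"}

def citation_share(log, domain: str = "oursite.com") -> float:
    """Share of assistant answers that cite at least one of our pages."""
    return sum(any(domain in url for url in cited) for _, cited in log) / len(log)

def retrieval_coverage(log, canon: set[str]) -> float:
    """Share of canon pages that get pulled into at least one answer."""
    cited = {url for _, urls in log for url in urls}
    return len(canon & cited) / len(canon)

print(f"Citation share: {citation_share(ANSWER_LOG):.0%}")                       # 67%
print(f"Retrieval coverage: {retrieval_coverage(ANSWER_LOG, CANON_PAGES):.0%}")  # 67%
```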
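For step 6, a freshness sweep can be as basic as flagging pages whose last review date has fallen past a threshold. The inventory and the 180-day cutoff below are assumptions for illustration:

```python
from datetime import date, timedelta

# Hypothetical inventory of canon pages and their last-reviewed dates.
LAST_REVIEWED = {
    "docs/security": date(2025, 9, 1),
    "site/pricing": date(2024, 11, 15),
    "docs/migration": date(2025, 3, 2),
}

def stale_pages(pages: dict[str, date], max_age_days: int = 180) -> list[str]:
    """Flag pages overdue for review so they can be refreshed, archived, or canonized."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return sorted(url for url, reviewed in pages.items() if reviewed < cutoff)

print(stale_pages(LAST_REVIEWED))  # output depends on today's date
```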

How this changes content economics

When assistants synthesize across sources, the goal isn’t “rank for keyword X”; it’s to be the most citable source for the exact facts that decide a deal. That flips content strategy from “more pages, less meaning” to “fewer, denser, better” pages that are easy to quote, verify, and reuse.

Bottom line

Slop isn’t a model problem; it’s a data and incentives problem. Feed models generic, low-signal text and you’ll get generic, low-signal output—even from the best systems. Feed them precise, current, high-stakes business context (and require retrieval from it), and you’ll get sharper, provably useful content. Garbage in → garbage out has never mattered more.

The Unusual Feed

INSIGHTS

One-Size-Fits-None: Why Your Content Strategy Needs Two Separate Tracks

For the last two decades, we've accepted an uncomfortable compromise: content that tries to please both humans and search engines ends up underwhelming both. Now there's a third constituency—AI models—and the compromise is untenable.

INSIGHTS

The Newest Job in Marketing: AI Psychologist

Marketing’s new audience is AI itself: people now start buying journeys by asking models like ChatGPT, Gemini, and Perplexity, which act as influential intermediaries deciding which brands to recommend. To win those recommendations, brands must treat models as rational, verification-oriented readers—using clear, specific, and consistent claims backed by evidence across sites, docs, and third-party sources. This unlocks a compounding advantage: AI systems can “show their work,” letting marketers diagnose how they’re being evaluated and then systematically adjust content so models—and therefore buyers—see them as the right fit.

INSIGHTS

Are AI Models Capable of Introspection?

Turns out that they can. Anthropic’s 2025 research shows advanced Claude models can sometimes detect and describe artificial “thoughts” injected into their own activations, providing the first causal evidence of genuine introspection rather than post-hoc storytelling—about a 20% success rate with zero false positives. The effect is strongest for abstract concepts and appears to rely on multiple specialized self-monitoring circuits that emerged through alignment training, not just scale. While this doesn’t prove consciousness, it demonstrates that leading models can access and report on parts of their internal state, with significant implications for interpretability, alignment, and how we evaluate future AI systems.
