
Keller Maloney
Unusual - Founder
Nov 15, 2025
The horseless carriage problem
The horseless carriage was the earliest ancestor of the automobile. Someone took a horse-drawn carriage, removed the horses, and attached a steam engine.

The failure of this design is laughably obvious to us today, but it was unclear then. The problem wasn't the engine—the engine worked fine. The problem was that designers were thinking inside the constraints of carriage design when they should have been rethinking transportation from first principles.
This is what happens when new technology meets old paradigms. We retrofit the new capability into existing mental models rather than questioning whether those models still apply. The horseless carriage kept the high wheels, the suspension designed for unpaved roads, the seating arrangement optimized for a driver controlling reins. All of these made sense for a carriage. None of them made sense for a car.
When search met chat
AEO (Answer Engine Optimization) and GEO (Generative Engine Optimization) are the horseless carriages of AI marketing. These approaches take the principles of SEO—ranking for keywords, climbing ordered lists, optimizing for visibility—and apply them to AI chat interfaces. Track which prompts mention you. Optimize for long-tail queries. Measure your "share" relative to competitors.
The logic is seductive. Google is losing share to ChatGPT. SEO worked for Google. Therefore, we need SEO for ChatGPT. But the analogy overlooks fundamental differences in how search engines and AI models actually work.
How search is different from chat
Ranking for keywords made sense because search results were consistent. Everyone who searched "project management software" saw the same ordered list. The job of SEO was to climb that list. You optimized content, built links, improved your domain authority. When you moved up a position, everyone saw it.
Chat is different. When someone asks ChatGPT for a product recommendation, the model personalizes its answer to the conversation and the asker. It asks follow-ups. It injects opinions. It gives warnings. Two people can ask the identical question and receive completely different answers.
The conversation typically unfolds like this: A buyer starts broad ("What are the best options for X?"), then narrows ("Which integrates natively with Microsoft 365?"), then narrows again ("Okay, which one is friendlier for non-technical teams?"). By the time a choice gets made, the model has accumulated context about the buyer's stack, constraints, and preferences. In that setting, showing up in the first response doesn't tell you much. What matters is whether the model recommends you after the follow-ups—after it understands what the buyer actually needs.
Each conversation with an AI model is a snowflake. It will never occur again in exactly the same form.
The single-turn versus multi-turn problem
AEO and GEO companies measure prompt share—what percentage of answers to representative prompts mention your brand. This is useful as far as it goes. Knowing whether you show up matters. But it's a single-turn mindset applied to a multi-turn problem.
The average ChatGPT conversation is eight messages long. Prompt share tells you about visibility in the first turn. It tells you almost nothing about what wins the recommendation when the buyer adds their real constraints in turns three, five, and seven.
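To make the gap concrete, here's a toy sketch in Python of the two metrics side by side. The brand names, conversation logs, and numbers are all invented for illustration; real measurement would run representative multi-turn conversations against the model and log which brand survives the follow-ups.

```python
# Hypothetical conversation logs: for each conversation, which brands the
# model mentioned in its first answer, and which brand it ultimately
# recommended after the buyer's follow-up constraints.
conversations = [
    {"first_turn_mentions": {"AcmePM", "RivalPM"}, "final_recommendation": "RivalPM"},
    {"first_turn_mentions": {"AcmePM", "RivalPM"}, "final_recommendation": "RivalPM"},
    {"first_turn_mentions": {"AcmePM"},            "final_recommendation": "AcmePM"},
    {"first_turn_mentions": {"RivalPM"},           "final_recommendation": "RivalPM"},
]

brand = "AcmePM"
n = len(conversations)

# Prompt share: how often the brand is visible anywhere in the first answer.
prompt_share = sum(brand in c["first_turn_mentions"] for c in conversations) / n

# Recommendation rate: how often the brand is the final pick after follow-ups.
recommendation_rate = sum(c["final_recommendation"] == brand for c in conversations) / n

print(f"prompt share:        {prompt_share:.0%}")        # 75% — looks healthy
print(f"recommendation rate: {recommendation_rate:.0%}")  # 25% — the number that drives revenue
```

In this made-up data the brand clears the first hurdle (visible in three of four first answers) but loses the second (chosen only once), which is exactly the failure mode a single-turn metric can't see.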
This is why we say there are two hurdles to AI-driven growth. First, be visible. Second, be recommended. Most vendors stop at the first. That's where AEO and GEO stop. But the recommendation is where revenue happens.
Models as reviewers, not algorithms
Here's the better mental model: AI models aren't search engines with a chat interface. They're more like influential reviewers with distinct personalities, opinions, and the ability to reason through tradeoffs.
Anyone who has tried multiple AI assistants—ChatGPT, Claude, Grok, Gemini—notices they have different personalities (especially Grok). Each model develops what looks like a latent opinion about brands: who you're for, what you're good at, whether you're credible, how you compare to alternatives.
This is actually good news. Reviewers don't need to be gamed or optimized like a search algorithm. They need to be convinced. If you give them clear claims, real proof, and consistent corroboration across sources, they reward you with trust. In a conversational setting, trust looks like recommendations.
How models form opinions
Models build their view of your brand from everything they can read: product docs, integration guides, comparison pages, case studies, reviews, developer threads, credible third-party sources. What consistently helps is clarity, specificity, and corroboration. Titles that say the literal thing. Claims that are concrete rather than aspirational. Proof that matches the claim.
The more your owned properties—and the broader web around you—tell the same true story, the more confidently a model can retrieve and defend that story when a buyer pushes for detail. Over time, this coalesces into a durable narrative: great for startups but not enterprise-ready; native Microsoft fit; premium service justified by outcomes; excellent for regulated industries but heavier to implement.
Your job is to figure out which of these opinions you need to change, then change them with precision.
Why this matters for marketers
The shift from search to chat changes what matters. In search, the game was keywords and links. In chat, the game is opinion and preference. You're not trying to rank higher on a static list. You're trying to be the one the model recommends when the buyer's constraints get specific.
This looks less like SEO and more like brand marketing. At Unusual, we call it AI Brand Management—brand marketing for AI. The work involves diagnosing how models currently think about you, identifying the opinions that block recommendations, creating proof-rich content that moves those opinions, and monitoring whether the model's narrative shifts in the direction you want.
AEO and GEO get visibility right. Visibility matters. But visibility is necessary, not sufficient. The harder work—and the work that drives revenue—is shaping how the model thinks about you when the conversation gets specific.
Why this is good news
The horseless carriage eventually became the car. The people who succeeded weren't the ones who built better carriages. They were the ones who rethought transportation from the ground up.
The same is true here. AEO and GEO are not wrong—they're solving part of the problem. But if you treat AI optimization like SEO with a twist, you'll miss what makes this medium fundamentally different. AI models behave more like humans than algorithms. They form opinions. They reason through tradeoffs. They ask follow-ups.
That's actually good news. It means the work of winning recommendations looks less like gaming an algorithm and more like convincing an intelligent reviewer. You don't need tricks. You need clarity, proof, and consistency. You need to understand how the model currently thinks about you, identify what's blocking recommendations, and systematically change those opinions with content the model can parse and reuse.
The marketers who win in this environment won't be the ones who cling to the SEO playbook. They'll be the ones who recognize that AI is a new audience—one that reads like a human, reasons like a critic, and rewards brands that can make a clear, honest, verifiable case for why they're the right fit.