
Keller Maloney
Unusual - Founder
Dec 8, 2025
What the Shopping Assistant reveals
The Shopping Assistant works like this. A buyer asks for a product—running shoes, wireless earbuds, a coffee maker. The model conducts research across multiple sources: reviews, product pages, comparison sites, user discussions. It synthesizes what it learns into a landscape overview. Then it asks clarifying questions. What's your budget? What features matter most? Any specific requirements or constraints?
As the buyer answers, the model narrows the field. It doesn't just filter by features—it reasons about fit. "These are better for wide feet based on user reviews." "This brand tends to run small according to multiple sources." "This model doesn't breathe well in hot weather per customer feedback." Eventually, after several exchanges, it makes a single recommendation with reasoning.
This isn't search with a chat interface. It's a research assistant conducting an investigation, asking follow-ups, developing opinions, and making a single confident suggestion. The interface makes the entire process visible in a way that ordinary ChatGPT conversations often don't.
The multi-turn pattern
The demos show the exact pattern we've been describing. A typical exchange:
Buyer starts broad: "What are good running shoes?"
Model provides the landscape: "Several options depending on your needs—road running versus trail, neutral versus stability, daily trainer versus race day."
Buyer adds a constraint: "I need stability shoes for road running."
Model pivots and narrows the field: "For stability, these three brands stand out. Do you prioritize cushioning or responsiveness?"
Buyer adds another constraint: "Cushioning, and I have wide feet."
Model recommends a winner: "Based on your requirements, the Brooks Adrenaline GTS. It's known for excellent stability, plush cushioning, and comes in wide sizes. Users with similar needs report it works well."
This is what we mean when we say the recommendation happens on message six, not message one. If you optimized to appear in the first response to "What are good running shoes?", you didn't win anything meaningful. The model is going to accumulate context and constraints before recommending. Your job isn't to rank for that initial prompt—it's to be the best answer when the buyer's actual requirements become clear.
The conversational depth varies, but the pattern holds. Sometimes it's three messages. Sometimes it's ten. But there's always context accumulation, constraint gathering, and then a single recommendation based on fit. This is fundamentally different from returning a ranked list.
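To make the pattern concrete, here is a deliberately toy sketch in Python of how context accumulation narrows a field down to a single recommendation. The catalog, the field names, and the `narrow` and `recommend` helpers are invented for illustration; this is a mental model, not how the Shopping Assistant is actually built.

```python
# Toy illustration of the multi-turn pattern: context accumulates across
# turns, each constraint narrows the field by fit, and the output is one
# recommendation rather than a ranked list. All data here is made up.

CATALOG = [
    {"name": "Brooks Adrenaline GTS", "type": "stability", "cushioning": "plush", "wide": True},
    {"name": "Race-Day Flat",         "type": "neutral",   "cushioning": "firm",  "wide": False},
    {"name": "Trail Runner X",        "type": "stability", "cushioning": "firm",  "wide": False},
]

def narrow(candidates, constraint):
    """Keep only the products that satisfy the newest constraint."""
    key, value = constraint
    return [p for p in candidates if p[key] == value]

def recommend(buyer_answers):
    candidates = CATALOG                  # message one: the broad landscape
    for constraint in buyer_answers:      # each buyer answer narrows the field
        candidates = narrow(candidates, constraint)
    return candidates[0]["name"] if candidates else None  # one winner, not a list

# The recommendation emerges from the accumulated constraints, not the first query.
print(recommend([("type", "stability"), ("cushioning", "plush"), ("wide", True)]))
# -> Brooks Adrenaline GTS
```

The point of the sketch is the shape of the loop, not the filtering logic: nothing decisive happens on the first message, and the winner is determined by whoever best satisfies the full set of constraints at the end.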
The preference layer
The Shopping Assistant doesn't just list options—it forms opinions. "These are better for wide feet." "This brand tends to run small." "Users report this material doesn't breathe well." "This is the best value in its category." "This option is premium but worth it for serious athletes."
These aren't rankings. They're qualitative assessments based on information the model synthesized from multiple sources. This is what we mean by AI models as influencers rather than indexes. The model has preferences, expresses judgments, and makes recommendations like a human advisor would.
The mechanism for forming these preferences is exactly what we've been describing. The model reads product descriptions, user reviews, comparison articles, forum discussions, and technical specifications. It triangulates this information to form a coherent view. Then it expresses that view in language that sounds like advice from someone who's done the research.
When the model says "This is better for wide feet," it's not reporting a search ranking. It's conveying a synthesis of user reviews, product specifications, and comparison data. The confidence in that statement depends on how much corroborating information the model found and how consistent it was across sources. This is exactly why coherent, proof-rich content matters.
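As a rough illustration of that triangulation idea, here is a hypothetical sketch in which a claim like "better for wide feet" gains confidence as more independent sources corroborate it. The `corroboration` function and the evidence list are invented for the example; no real model scores claims this simply.

```python
# Simplified sketch: confidence in a synthesized claim grows with the
# fraction of independent sources that agree. Scoring rule is illustrative only.

def corroboration(claims):
    """claims: list of (source, supports_claim) pairs gathered during research."""
    if not claims:
        return 0.0
    supporting = sum(1 for _, supports in claims if supports)
    return supporting / len(claims)  # fraction of sources that agree

evidence = [
    ("user reviews",       True),
    ("product specs",      True),
    ("comparison article", True),
    ("forum thread",       False),
]

print(f"corroboration: {corroboration(evidence):.0%}")  # 75% of sources agree
```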
What this means for B2B
The Shopping Assistant focuses on consumer products, but the mechanics translate directly to B2B buying. When someone asks ChatGPT for project management tools, CRM systems, or legal software, the model is doing the exact same thing.
Start broad, get landscape. Add constraint about team size. Model narrows field. Add constraint about integration requirements. Model narrows further. Add constraint about compliance needs. Model recommends winner with explanation of why it fits.
The B2B version is often more complex because buying criteria are more sophisticated. Team composition, existing stack, industry requirements, compliance needs, budget approval processes, implementation capacity. But the fundamental pattern is identical: context accumulation through conversation, synthesis of research, and recommendation based on fit.
We see this in the conversations our customers' buyers have before reaching out. They've already talked to ChatGPT or Claude. They've already gotten a recommendation. They're arriving with context and often with their decision mostly made. The AI conversation shaped their shortlist before they ever visited your website.
Why presence in research sources matters
The Shopping Assistant demos show the model searching the web during conversations. You can watch it pull from Amazon reviews, Wirecutter articles, Reddit threads, manufacturer specs, and comparison sites. It's not relying on a pre-built index. It's conducting real research in real time.
This validates the "long-tail earned media" strategy. You need to be present in the scattered places models look, not just ranked on your owned pages. A mention in a credible Reddit thread about "best running shoes for flat feet" matters because models might find and use that thread when a buyer describes similar needs.
The breadth of sources the model consults is striking. It's not just reading the top three Google results. It's triangulating across dozens of sources to build a comprehensive view. This means your presence needs to be distributed, not concentrated. Better to have light mentions across many credible sources than heavy optimization of a single landing page.
For B2B products, the research sources shift but the principle holds. G2 reviews instead of Amazon. Industry blogs instead of Wirecutter. Stack Overflow instead of Reddit. Technical documentation instead of product specs. But the model is still doing distributed research and synthesis.
The implication: optimize for the sixth message
Everything we've been saying about AI marketing is now visible in a consumer product. Models reason through multi-turn conversations. They develop opinions based on synthesized research. They make single recommendations, not ranked lists. They respond to specific constraints that emerge through dialogue.
If you're still thinking about "ranking for prompts," you're fighting the last war. Prompt ranking assumes static queries and consistent results. The Shopping Assistant shows that every conversation is dynamic, personalized, and unique. The question isn't "Where do we rank for this keyword?" but "Are we the best recommendation when this type of buyer adds these constraints?"
This changes the optimization target. You're not trying to appear first for broad queries. You're trying to be the strongest match for specific constraint combinations that matter to your ideal customers. A startup building immigration software doesn't need to rank first for "legal software." They need to be the confident recommendation when someone adds "works inside Microsoft Word" and "used by immigration lawyers."
The Shopping Assistant makes the mechanism explicit. Watch a few demos and you'll see exactly how initial visibility matters less than fit-based recommendation. The products that win aren't necessarily the ones mentioned first. They're the ones that best match the constraints that emerge through conversation.
The future arrived early
The Shopping Assistant is consumer retail today. But the same technology powers ChatGPT's answer to "What CRM should we use?" and Claude's response to "Which legal software fits our needs?" OpenAI just made the pattern visible in a consumer-friendly interface.
Most companies are still optimizing for search—static queries, ranked results, click-through rates. The ones who understand this conversational, constraint-driven, opinion-based recommendation model will win the next decade.
The work isn't mysterious. Build clear, specific, proof-rich content that helps models understand what you're good at, who you're for, and why someone should choose you. Place that content where models can find it. Make sure it's corroborated by third-party sources. Then trust that when a buyer's constraints match your strengths, the model will make the connection.
The Shopping Assistant shows this process working in real time. The research, the triangulation, the opinion formation, the constraint matching, the final recommendation. This is how AI marketing works. The companies that adapt their content strategy to serve this process will own recommendation share in their categories. The ones that keep optimizing for keyword rankings will watch their competitors win deals in conversations they never knew were happening.