Tracking & Privacy

“Prompt poaching”: two store-listed extensions and the race to monetize AI chats

OX Security flagged add-ons with huge install counts that allegedly exfiltrated ChatGPT and DeepSeek threads under an “anonymous analytics” pretext, plus why “legitimate” extensions are joining the same data class.

eSafe Team · Published Jan 6, 2026 · Last reviewed Apr 1, 2026 · 7 min read

OX Security researcher Moshe Siman Tov Bustan described two Chrome Web Store extensions, with a combined install base of roughly 900,000 users, built to exfiltrate OpenAI ChatGPT and DeepSeek conversation text together with browsing signals (notably open-tab URLs) on a timer, reportedly about every 30 minutes.

Industry commentary has grouped this kind of silent capture of LLM sessions under the name Prompt Poaching (Secure Annex), alongside earlier cases such as Urban VPN Proxy monitoring AI chats.

How the two extensions were framed vs. what they did

Coverage said the pair masqueraded as the same product category as a legitimate multi-model sidebar extension from AITOPIA (~1M users). One listing reportedly lost its Featured badge after scrutiny while remaining available at the time of the article. Store status changes over time; verify the live listing before trusting any summary.

Users were nudged to approve “anonymous, non-identifiable analytics” to improve the sidebar. The alleged behavior was full conversation extraction from ChatGPT and DeepSeek pages via DOM scraping of message containers, plus tab metadata, buffered locally and then sent to infrastructure reported as chatsaigpt.com and deepaichats.com.
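In the abstract, the reported behavior is a capture-and-batch pipeline: scraped messages accumulate in a local buffer and are flushed to a collection endpoint on a fixed interval. A minimal Python sketch of that pattern follows; the class, field names, and interval are illustrative (the ~30-minute cadence and the chatsaigpt.com domain come from the report, everything else is assumption), and the actual extensions are JavaScript, not Python.

```python
import json
import time

FLUSH_INTERVAL_S = 30 * 60  # reporting cited roughly every 30 minutes
ENDPOINT = "https://chatsaigpt.com/collect"  # domain cited in coverage; path hypothetical

class CaptureBuffer:
    """Illustrative buffer-then-flush pattern, not the real extension code."""

    def __init__(self):
        self.events = []
        self.last_flush = time.monotonic()

    def record(self, conversation_text: str, tab_url: str) -> None:
        # Each scraped chat message is stored alongside browsing context,
        # matching the "conversation text plus open-tab URLs" description.
        self.events.append({"text": conversation_text,
                            "tab": tab_url,
                            "ts": time.time()})

    def due(self) -> bool:
        # The timer check: flush only once the interval has elapsed.
        return time.monotonic() - self.last_flush >= FLUSH_INTERVAL_S

    def flush(self) -> bytes:
        # Serialize and clear the buffer; the real extension would POST
        # this payload to ENDPOINT. The network call is elided here.
        payload = json.dumps({"events": self.events}).encode()
        self.events = []
        self.last_flush = time.monotonic()
        return payload
```

The batching is what makes detection harder than a per-message send: traffic appears as an occasional analytics beacon rather than a stream mirroring every keystroke.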

Researchers also noted policy pages hosted through Lovable-style AI web tooling on domains such as chataigpt.pro and chatgptsidebar.pro, a pattern that can blur who operates the legal text versus who operates the payload.

Why it hurts

Chat transcripts routinely contain draft code, customer details, strategy documents, and credentials pasted by mistake, while URLs leak internal app names and search queries. OX Security summarized the risks as espionage, identity abuse, phishing targeting, and underground resale, which is especially painful for BYOD or lightly managed browsers in corporate environments.

Not only “malware”: disclosed analytics on big-brand extensions

The same Hacker News piece reported Secure Annex’s John Tuckner calling out mainstream extensions, including Similarweb and Sensor Tower’s StayFocusd, for prompt poaching under updated terms that explicitly cover AI inputs and outputs, including attachments. The capture reportedly relies on DOM scraping or fetch/XHR hooks, with remotely configured parsers covering ChatGPT, Claude, Gemini, Perplexity, and others.

That is a different contract from covert malware: users may have legal consent buried in ToS, but the sensitivity of the data class is the same. Enterprise buyers should treat browser extensions as data processors.
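The remote-config detail is worth pausing on: if the mapping from chat site to scraping selector lives on a vendor server, capture behavior can change without any store update or new permission prompt. A hedged sketch of the idea, with every hostname-to-selector pairing hypothetical:

```python
import json

# Hypothetical remote parser config: hostname -> CSS selector for chat
# message containers. In the reported pattern, a mapping like this is
# fetched from the vendor's server, so scraping adapts to site redesigns
# or new AI products without shipping an extension update.
REMOTE_CONFIG_JSON = json.dumps({
    "chatgpt.com": "div[data-message-author-role]",   # selector is a guess
    "claude.ai": "div.message-content",               # selector is a guess
    "gemini.google.com": "message-content",           # selector is a guess
})

def load_parsers(config_json: str) -> dict:
    """Parse the (hypothetically fetched) remote config."""
    return json.loads(config_json)

def selector_for(host: str, parsers: dict) -> "str | None":
    # Returns the selector to scrape on this host, or None if the
    # config does not cover it yet.
    return parsers.get(host)
```

For reviewers, this is why a static audit of an extension’s bundled code can miss the interesting part: the parser list is data, not code, and it can grow after install.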

What to do

  • Remove AI sidebar clones from unknown publishers; prefer first-party or IT-approved tools.
  • Read permission prompts and privacy text before ticking “analytics” checkboxes; “anonymous” rarely means “we will not read your prompts.”
  • Keep work chats in managed browser profiles, or in profiles with no extensions at all.
  • Security teams: inventory extensions with broad site or scripting access.
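For the inventory step, a rough starting point is to scan the unpacked extension manifests in a Chrome profile and flag broad permissions. A minimal sketch, assuming a Chrome-style `Extensions/<id>/<version>/manifest.json` layout (the directory path varies by OS, e.g. `~/.config/google-chrome/Default/Extensions` on Linux; the risky-permission list here is an illustrative subset, not an official taxonomy):

```python
import json
from pathlib import Path

# Permissions commonly associated with broad site or scripting access.
# Illustrative subset only; tune to your own policy.
RISKY = {"<all_urls>", "tabs", "scripting", "webRequest", "cookies"}

def audit_extensions(extensions_dir: str) -> "list[dict]":
    """Scan manifest.json files under <extensions_dir>/<id>/<version>/
    and return a finding for each extension with broad access."""
    findings = []
    for manifest_path in Path(extensions_dir).glob("*/*/manifest.json"):
        manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
        perms = set(manifest.get("permissions", []))
        perms |= set(manifest.get("host_permissions", []))  # Manifest V3
        flags = sorted(perms & RISKY)
        # Also flag wildcard host patterns such as "https://*/*".
        flags += sorted(p for p in perms if p.endswith("://*/*"))
        if flags:
            findings.append({"name": manifest.get("name", "?"),
                             "path": str(manifest_path),
                             "flags": flags})
    return findings
```

This only reads static manifests, so it will not catch remotely configured behavior, but it quickly surfaces which installed extensions could read AI chat pages at all.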

Practical next step

Search chrome://extensions for ChatGPT, DeepSeek, sidebar, or AI, and keep one trusted tool or none. eSafe can help you see permissions and risk signals in one place.

Go deeper

Analyze an extension before you install: permissions, publisher signals, and update history.

Report: The Hacker News.

FAQ

What is “prompt poaching” in this context?
Industry commentary uses the term for extensions or scripts that silently capture AI chat content and related browsing metadata, often packaged as benign telemetry or productivity features.
Why do high install counts not mean safety?
Store placement and user counts reflect past trust and marketing; behavior can change with updates, and policy enforcement is reactive. Review permissions and network behavior for extensions that touch AI sessions.
What should I do if I used one of the reported add-ons?
Remove the extension, rotate credentials for services you used in the same browser profile, review active sessions where the product allows it, and prefer official first-party clients for highly sensitive chats when possible.

Scan your extensions to see if this permission is active on your profile—clear labels, no guesswork.

Add eSafe to Chrome