While many users assumed their chats with AI tools would remain private, thousands of ChatGPT conversations began appearing unexpectedly in Google Search Console analytics in September 2025.
Website owners reviewing their analytics data discovered unusually long search queries, some exceeding 300 characters, containing personal and sensitive information from private ChatGPT interactions.
The exposed conversations weren’t limited to publicly shared chats. Private queries about relationship problems, business strategies, and personal dilemmas appeared in Google Search Console without user knowledge or consent.
Some entries contained identifiable details like names and locations that users never intended to share beyond their AI conversation.
Technical analysis suggests the exposure happened when Google Search Console began logging ChatGPT queries as search terms. This likely occurred when users switched between ChatGPT and Google search, not through a deliberate data-sharing agreement between OpenAI and Google.
ChatGPT queries leaked via browser switches, not through a formal OpenAI-Google data partnership.
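Neither company has confirmed the exact path, but as a rough illustration of the hypothesis: if prompt text is ever carried in a URL query parameter, anything that later logs or crawls that URL can read it back as an ordinary search term. The Python sketch below demonstrates this with an entirely hypothetical URL, parameter name, and prompt; it is not a description of ChatGPT's actual request flow.

```python
from urllib.parse import urlencode, urlsplit, parse_qs

# Hypothetical illustration only: a private prompt embedded in a URL's query string.
# The host, path, and "q" parameter name are assumptions for demonstration; the real
# mechanism has not been confirmed by OpenAI or Google.
prompt = "should I leave my business partner over the missed payments"
leaky_url = "https://example-ai-frontend.test/search?" + urlencode({"q": prompt})

# Anything that logs or crawls this URL can trivially recover the prompt text,
# which is how a long, sensitive string could surface as a "search query".
recovered = parse_qs(urlsplit(leaky_url).query)["q"][0]
print(leaky_url)
print(recovered)  # -> the original prompt, verbatim
```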
The issue was first noticed by webmasters who spotted these unusual entries in their analytics reports.
The privacy breach affected thousands of users who had no idea their conversations would be visible to third parties.
Website owners could see sensitive information including business plans and personal details submitted to ChatGPT. Users had no way to remove their leaked chats from these analytics systems.
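Website owners who want to audit their own properties can pull query strings from the Search Console Search Analytics API and flag anything far longer than a normal search term. The sketch below is a minimal Python example using the official google-api-python-client; the property URL, credentials file, and date range are placeholders, and the 300-character threshold simply mirrors the unusually long entries described above.

```python
from googleapiclient.discovery import build
from google.oauth2 import service_account

# Minimal sketch: flag suspiciously long "queries" in a Search Console property.
# SITE_URL and the service-account file are placeholders for your own property.
SITE_URL = "https://www.example.com/"
creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
gsc = build("searchconsole", "v1", credentials=creds)

response = gsc.searchanalytics().query(
    siteUrl=SITE_URL,
    body={
        "startDate": "2025-09-01",   # window covering the reported exposure
        "endDate": "2025-11-30",
        "dimensions": ["query"],
        "rowLimit": 5000,
    },
).execute()

for row in response.get("rows", []):
    query = row["keys"][0]
    if len(query) > 300:  # far longer than typical search terms
        print(f"{len(query)} chars, {row['clicks']} clicks: {query[:120]}...")
```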
Jason Packer, owner of Quantable, documented approximately 200 of these unusual queries on his blog before teaming up with web optimization consultant Slobodan Mani to investigate further.
In an earlier, related incident, OpenAI had disabled its “make discoverable” feature for shared chats on August 1, 2025, and begun working to remove indexed chat pages from search engines.
Google hasn’t issued an official statement about the exposure. OpenAI emphasized that public sharing was opt-in but acknowledged user confusion about privacy settings.
Most users were unaware their ChatGPT prompts could surface in Google Search Console data.
The opt-in feature for sharing was easily missed, especially on mobile devices where the privacy toggle wasn’t visible.
No specific consent was requested for analytics exposure.
The incident highlights growing concerns about AI chat privacy. Security professionals have stressed the importance of implementing noindex controls for sensitive AI conversation platforms to prevent similar incidents in the future.
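As one concrete form such a control can take, the sketch below shows a hypothetical Flask service that attaches an X-Robots-Tag: noindex header to shared-conversation pages. The /share/ route and the app itself are illustrative assumptions, not OpenAI's actual implementation; the point is the header, which tells compliant crawlers not to index or follow those URLs.

```python
from flask import Flask, Response, request

app = Flask(__name__)

# Hypothetical sketch of a noindex control for shared-chat pages.
@app.route("/share/<chat_id>")
def shared_chat(chat_id: str) -> Response:
    return Response(f"Shared conversation {chat_id}", mimetype="text/plain")

@app.after_request
def add_noindex_header(response: Response) -> Response:
    # Tell compliant crawlers not to index or follow shared-conversation URLs.
    if request.path.startswith("/share/"):
        response.headers["X-Robots-Tag"] = "noindex, nofollow"
    return response

if __name__ == "__main__":
    app.run()
```

A robots.txt disallow rule alone would not be enough here, since disallowed pages can still be indexed from external links; the noindex directive (via header or meta tag) is what actually keeps them out of search results.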
As these tools become more integrated with daily online activities, the boundaries between private conversations and public data continue to blur, often without clear user understanding.
References
- https://www.criticalpathsecurity.com/your-public-chatgpt-queries-were-briefly-discoverable-via-google-heres-what-went-wrong/
- https://tech.slashdot.org/story/25/11/09/027213/did-chatgpt-conversations-leak-into-google-search-console-results
- https://www.tracyheatley.com/how-to-find-out-if-your-chatgpt-chats-went-public/
- https://www.prompt.security/blog/everyones-freaking-out-about-google-indexing-chatgpt-chats-should-you-be
- https://ccbjournal.com/blog/chatgpt-users-stunned-as-private-chats-appear-in-google-searches
- https://www.malwarebytes.com/blog/news/2025/08/openai-kills-short-lived-experiment-where-chatgpt-chats-could-be-found-on-google
- https://medial.app/news/oddest-chatgpt-leaks-yet-cringey-chat-logs-found-in-google-analytics-tool-5d6beb11647ee