
It's striking how much of the AI conversation focuses on new use cases, while overlooking one of the most serious non-financial costs: privacy.

I try to be mindful of what I share with ChatGPT, but even then, asking it to describe my family produced a response that was unsettling in its accuracy and depth.

Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist. That left me deeply concerned—not just about this moment, but about where things are headed.

The real question isn't just "what can AI do?"—it's "who is keeping the record of what it does?" And just as importantly: "who watches the watcher?" If the answer is "no one," then maybe we shouldn't have a watcher at all.



> Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist.

I'm fairly sure "seemed" is the key word here. LLMs are excellent at making things up - they rarely say "I don't know" and instead generate the most probable guess. People also famously overestimate their own uniqueness. Most likely, you accidentally recreated a kind of Barnum effect for yourself.


> Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist.

ChatGPT was court-ordered to retain chat logs, even deleted ones.

https://www.malwarebytes.com/blog/news/2025/06/openai-forced...


That only means that OpenAI has to keep logs of all conversations, not that ChatGPT will retain memories of them.
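
To make the distinction concrete, here's a minimal sketch using the OpenAI Python SDK (the model name is illustrative). API calls are stateless: whatever OpenAI retains server-side under the order, the model only sees what you put in `messages` on each call. The ChatGPT app's "memory" feature is a separate layer on top of this.

  # Minimal sketch (OpenAI Python SDK, model name illustrative).
  # Each API call is stateless: the model sees only `messages`.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  # First call: tell the model something.
  client.chat.completions.create(
      model="gpt-4o",
      messages=[{"role": "user", "content": "My dog's name is Biscuit."}],
  )

  # Second, independent call: no prior messages are included, so the
  # model cannot recall the name, regardless of what logs exist on
  # OpenAI's side.
  reply = client.chat.completions.create(
      model="gpt-4o",
      messages=[{"role": "user", "content": "What is my dog's name?"}],
  )
  print(reply.choices[0].message.content)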


You could explain that to ChatGPT and it would agree. But then again, if you HAVE TO keep the logs ...


> I try to be mindful of what I share with ChatGPT, but even then, asking it to describe my family produced a response that was unsettling in its accuracy and depth.

> Worse, after attempting to delete all chats and disable memory, I noticed that some information still seemed to persist.

Maybe I'm missing something, but why wouldn't that be expected? Chat history isn't their only source of information - these models are trained on scraped public data. Unless there's zero information about you and your family on the public internet (in which case - bravo!), I'd expect even a "fresh" LLM to know something about you without you ever telling it anything.


I think you are underestimating how notable a person needs to be for their information to be baked into a model.


LLMs can learn from a single example.

https://www.fast.ai/posts/2023-09-04-learning-jumps/
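
Here's a toy version of the effect (my own sketch, not the fast.ai experiment, which observed memorization while fine-tuning actual LLMs): overfit a tiny char-level PyTorch model on a single sentence and the loss collapses to near zero within a few hundred steps, i.e. the one example is memorized verbatim.

  # Toy sketch: a tiny char-level model memorizes ONE example.
  import torch
  import torch.nn as nn

  torch.manual_seed(0)
  text = "my dog's name is biscuit"
  vocab = sorted(set(text))
  stoi = {c: i for i, c in enumerate(vocab)}
  ids = torch.tensor([stoi[c] for c in text])
  x, y = ids[:-1].unsqueeze(0), ids[1:].unsqueeze(0)  # next-char targets

  class TinyLM(nn.Module):
      def __init__(self, v, d=32):
          super().__init__()
          self.emb = nn.Embedding(v, d)
          self.rnn = nn.LSTM(d, d, batch_first=True)
          self.head = nn.Linear(d, v)
      def forward(self, t):
          h, _ = self.rnn(self.emb(t))
          return self.head(h)

  model = TinyLM(len(vocab))
  opt = torch.optim.Adam(model.parameters(), lr=1e-2)
  loss_fn = nn.CrossEntropyLoss()

  for step in range(300):
      logits = model(x)  # (1, T, vocab)
      loss = loss_fn(logits.view(-1, len(vocab)), y.view(-1))
      opt.zero_grad(); loss.backward(); opt.step()
      if step % 100 == 0:
          print(step, round(loss.item(), 4))
  # Loss falls to ~0: the single example is now reproduced verbatim.

Of course, a deliberately overfit toy says nothing about whether one passing mention of you in a trillion-token pretraining corpus gets retained.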


That doesn’t mean they learn from every single example.



