


Came to post that.

I like her meta observation: that using ChatGPT for 2 years rots your brain so badly you somehow think it's a good idea to write an article like this, with your real name and professional/academic reputation attached, and get it published somewhere as high-profile as Nature.

Someone in my Mastodon feed commented that if they'd done that, "you wouldn't be able to torture it out of me", and that they'd never admit it to anyone.


Good commentary, good video. She is a bit too harsh about the data loss, though. The author did not realize that disabling data sharing would delete the history of past interactions, probably because he didn't realize everything was stored on an external server. And it's quite possible there was no proper warning about that.

I feel that makes her point weaker, because apart from that she is completely right: the work practice admitted to here is horrible. It likely includes leaking private emails to a US company, and in any case it means the job of teaching and publishing wasn't done properly, not even close.


First rule of the world: "if you don't understand the implications, don't do it".

How far are we willing to go to justify every possible wrong behaviour?


You mean toggling the data setting? It's on the program to make the implications visible; that's a big part of designing for usability. It's possible ChatGPT did that and the user was unexpectedly dense, but it's more likely the implications were not properly explained or shown. That's why you add undo functionality, which the user even tried to find. Here, given the legal component, an undo available for a short time frame seems like a good fit.

But your comment could equally be about the fact of using ChatGPT for the job in the first place, which I wouldn't justify at all.


Usability, UI: that's not the point. My question is just how it's possible that an esteemed academic professional doesn't understand that touching anything that deals with "data" on a service like ChatGPT could have consequences. And how is it possible that we have started to justify every careless and sloppy behaviour? Better not to justify sloppiness.


That's absolutely usability: showing what will happen if something like "data" on a service like ChatGPT is touched, preventing outcomes the user did not want, preventing accidental data loss through guidance and safeguards. It's usually not sloppiness when users run into those situations, but bad program design. Maybe also here: according to a German article about this (https://www.notebookcheck.com/ChatGPT-Professor-verliert-zwe...), ChatGPT now does not remove old interactions when the data consent setting is changed, and it shows prominent warnings before removing interaction threads, which is a separate option. It is likely this was changed in the meantime to improve usability.

Or the user really was surprised that deleting all interactions meant deleting all interactions. Then your position is a bit more understandable, but even then: mistakes happen, and an undo would still be good.


Such a great response to this article. Scathing but enjoyable to watch. Angela is a treasure.



