
Some of my colleagues are using ChatGPT and blindly follow its advice. ChatGPT's answers often look more persuasive than human answers on SO.

Still, I find ChatGPT answers much harder to validate. In programming Q&A, the answer is usually a series of calls. When you try to apply a human solution (assuming it's at least somewhat correct) and it doesn't align perfectly with your situation, the error usually lies in an input data mismatch or some minor difference in requirements. Rarely are the calls themselves wrong - they might be deprecated if it's an old answer, but generally, they do at least exist. With ChatGPT, you never know which particular part of the answer has been hallucinated.

Also, as SO has more stringent question requirements, you are forced to construct a minimal case to reproduce the error. Often, composing an SO question has led me straight to the cause of the error.
