Someone uses ChatGPT in the most basic way, without even harnessing the power of asking follow-up questions, and complains that it's not that great. Yet reading this, I get the feeling that it's the employee who isn't that great.
The employee is the only one keeping randomly generated fake facts from becoming government policy documents.
Think about his coworkers who might not be as sharp and just do what they're told (copy/paste).
We live in a world where barely fact-checked AI news is now normal. The AI can misidentify people, use wrong grammar, call vehicles buildings, and even get facts wrong, and we all just go about our lives, thinking it's okay that nobody at NewsMegaCorp could even bother to have a teenager read something before publishing it to the world.
Now we have to worry about our governments running on policies created in the same sloppy fashion, because it's cheaper than paying people.
I agree that news should not be AI-generated, since news should by definition be new, and if you need AI to help you with it, you're probably editorializing it to some degree.
Should an info snippet about a currently evolving disease situation come from AI? No. But if your boss asks you to use AI, I'd expect you to find a better use for it than just generating the main blurb.
The employee could have asked the AI for historical information about monkeypox over the past years, and how its rise compared to the rise of HIV/AIDS. Who knows; I'm not in that industry. But the example given in the post is the lamest use I can think of.
I think the common dev response is just to say "user error" or RTFM. But ChatGPT is being used by 100 MILLION+ people now, so the likelihood that many of them are using it in a stupidly dangerous way is quite high.