
"In short, they're paying contract workers for quantity, not quality;"

How do you know this?

Just taking a wild guess, but I'd think a company with billions in funding, and a ton of people trying to find flaws in what it produces, would have some processes in place to incentivize quality as well as quantity.

What you are suggesting is that a company whose product rests on balancing trillions of floating point numbers makes core business decisions in the most simplistic black-and-white terms: "Hey, let's just go with a one and a zero on this." Bizarre assumption.

Maybe I'm just good at prompting, and I'm not trying to trick it, but I don't see this "superficially convincing bullshit." Can you show me a chat where you have sincerely prompted it and gotten something that matches that description?

I often see responses that are better than I could have produced even with hours to research and compose them, and I'm a pretty good writer and researcher.

Here, since I'm asking you to share one where it fails as you say it does, by creating "superficially convincing bullshit", I'll share several where it succeeds.

https://chat.openai.com/share/523d0fec-34d3-40c4-b5a1-81c77f...

https://chat.openai.com/share/e09c4491-fd66-4519-92d6-d34645...

https://chat.openai.com/share/233a5ae2-c726-4ddc-8452-20248e...

https://chat.openai.com/share/53e6bda1-fe97-41ce-8f5c-89d639...

https://chat.openai.com/share/19f80ea9-e6be-4ac3-9dd4-7ea15c...



Never forget that RLHF (reinforcement learning from human feedback) is driven largely by sweatshop labor:

https://www.washingtonpost.com/world/2023/08/28/scale-ai-rem...

These jobs are overwhelmingly paid by task, which puts a lot of pressure to go fast.

I assert the entire "hallucination" phenomenon is a side effect of these practices. When ChatGPT makes up a fake fact with fake sources to back it up, it's largely because such lies are rated very highly by the underpaid humans who aren't incentivized to follow up on sources.
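For what it's worth, here's the mechanism I'm describing in miniature: a minimal sketch (Python/PyTorch, with made-up toy features; nothing here is any lab's actual pipeline) of the reward-model step of RLHF. Raters pick which of two responses they prefer, and the reward model is fit to those picks with a pairwise Bradley-Terry loss. If rushed per-task raters systematically prefer confident fabrications over hedged truths, that preference is exactly what the model learns to reward.

    import torch
    import torch.nn as nn

    class RewardModel(nn.Module):
        # Toy stand-in for an LLM-based scorer: embed a response, output a scalar reward.
        def __init__(self, dim=16):
            super().__init__()
            self.score = nn.Linear(dim, 1)

        def forward(self, x):
            return self.score(x).squeeze(-1)

    model = RewardModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    # Hypothetical features for (chosen, rejected) response pairs. In the
    # scenario above, "chosen" is often the confident answer with fabricated
    # citations, because a per-task rater has no time to check the sources.
    chosen = torch.randn(256, 16) + 1.0    # confident-sounding responses
    rejected = torch.randn(256, 16) - 1.0  # hedged or "I don't know" responses

    for step in range(200):
        # Bradley-Terry pairwise loss: push reward(chosen) above reward(rejected).
        loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

The RL stage then maximizes this learned reward, so whatever the raters favored (here, confidence regardless of truth) is what the policy gets pushed toward. Nothing in that loop ever checks a source.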


"I assert the entire "hallucination" phenomenon is a side effect of these practices. When ChatGPT makes up a fake fact with fake sources to back it up, it's largely because such lies are rated very highly by the underpaid humans who aren't incentivized to follow up on sources."

It seems like, with billions in investment, they could figure that out. Hallucination is commonly discussed as an extremely difficult problem, and the most important one to solve, in the most talked-about industry on the planet. I'm having a problem believing it's something so easy to solve.

Are you suggesting that even with that much money, they have to do things the way things are "overwhelmingly" done, as opposed to being able to say "hey, we need it done this way instead, because it's important and we can pay for it"?

It just seems pretty bizarre to think that the highest of high tech, massively funded as it is, doesn't have the clout to fix that in a heartbeat, if that's really where the problem is.



