Hacker News

It may be 3.5-like; however, when I tried it, it seemed to have a lot more handcuffs applied for “safety,” and I don’t mean safety from Skynet and Terminators. I mean things like refusing to speculate about what medical condition might cause a given set of symptoms: “Sorry, I’m just a language model.” GPT had no problem giving an educated guess.

I personally find this type of “safety” patronizing and insulting, but I’m sure there are people who would prefer government regulation banning the use of language models for various and sundry “inappropriate” questions in order to protect humans who have no common sense. Anyway, in this condition, Bard’s a no for me.



> I mean things like it refuses to speculate about what medical condition might cause some given symptoms.

Are you sure about this specifically? I’ve recently had zero trouble getting GPT-4 to give potential diagnoses for a given set of symptoms, though perhaps it’s an issue of prompting.



