
This is supposed to be scientific research, not the work of some random malicious actors.


It's research on whether malicious actors could use LLMs to persuade people. If you restrict the tricks the LLM can use, what does the research tell you? How nice, honest people might persuade Redditors by secretly using LLMs? How malicious actors might persuade people if they never lied?



