Hacker News | past | comments | ask | show | jobs | submit

The issue with bigger LLMs (or even MMMs) is that the bigger they are, the more they cram and regurgitate their training data, and that opens them up to lawsuits.

Making NNs generalize the way humans do is still a hard problem.
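The regurgitation concern can be made concrete with a toy check: measure how many long n-grams of a model's output appear verbatim in the training corpus. This is only a minimal sketch with hypothetical helper names; real memorization audits index the full training set (e.g. with suffix arrays) rather than a Python set.

```python
# Toy regurgitation check: fraction of an output's long n-grams that occur
# verbatim in a training corpus. Helper names are hypothetical; a real audit
# would use an efficient index over the actual training data.

def ngrams(tokens, n):
    """Set of all contiguous n-token windows of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def regurgitation_score(output_text, corpus_texts, n=8):
    """Fraction of the output's n-grams found verbatim in the corpus.
    1.0 = every window was memorized; 0.0 = none was."""
    out = output_text.split()
    if len(out) < n:
        return 0.0
    corpus_grams = set()
    for doc in corpus_texts:
        corpus_grams |= ngrams(doc.split(), n)
    out_grams = [tuple(out[i:i + n]) for i in range(len(out) - n + 1)]
    hits = sum(g in corpus_grams for g in out_grams)
    return hits / len(out_grams)

corpus = ["the quick brown fox jumps over the lazy dog near the river bank"]
copied = "the quick brown fox jumps over the lazy dog"  # lifted verbatim
novel = "a slow green turtle walks under the busy bridge at noon today"
print(regurgitation_score(copied, corpus))  # 1.0 for a fully copied span
print(regurgitation_score(novel, corpus))   # 0.0 for novel text
```

The choice of n matters: short n-grams overlap by chance, while long verbatim windows are strong evidence of memorization rather than generalization.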



Is this indeed established? Could you provide a link or three?



Thank you. That's certainly interesting.




